A call for prompt management

DALL-E: "You are RingmasterGPT".

In Crafting Effective Prompts with Clauses and Schemes, I observed that prompts have structure. They’re made of clauses; the types of clauses you use probably follow a pattern, which I called a scheme.

Once we recognise structure in prompts, we can reason about them on whole new levels. Here I want to introduce the idea that the clauses within a single prompt can, and often do, express instructions at different levels of detail.

Take a common prompt scheme where the clause types might be Persona, Context, and Task. Here, already, we have not only three types of clause but three levels of detail expressed.

The task clause is the heart of the prompt and the most detailed of the three; you might expect it to be unique to a single prompt.

The most general clause is the persona, where you try to limit the LLM's scope of understanding in the hope of guiding it towards more relevant answers. For example, persona clauses might be:

You are RingmasterGPT, an expert Circus Ringmaster. You coordinate the show and keep the audience entertained between acts.
You are ParamedicGPT, an on-shift paramedic.

The persona clause is the most general of the three because it could be used many times to guide the LLM for any number of tasks and contexts.

The context clause in this scheme offers a middle level of detail that frames the task for the persona. As with the persona clause, you may find yourself copying context from prompt to prompt. Example contexts might be:

There is a show currently running in the big top.
The circus is currently in winter encampment.

You can probably imagine that different personas and contexts will elicit different responses, even for the same task clause:

A lion has escaped. What do you do?
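To make those three levels concrete, here's a minimal sketch in Python. None of this is a real library or API; the clause dictionaries and the assemble_prompt helper are purely my own illustration of folding the most general clause in first:

```python
# Illustrative only: three clause types at three levels of detail.
PERSONAS = {
    "ringmaster": "You are RingmasterGPT, an expert Circus Ringmaster. "
                  "You coordinate the show and keep the audience entertained between acts.",
    "paramedic": "You are ParamedicGPT, an on-shift paramedic.",
}

CONTEXTS = {
    "show_running": "There is a show currently running in the big top.",
    "winter_camp": "The circus is currently in winter encampment.",
}

TASK = "A lion has escaped. What do you do?"


def assemble_prompt(persona: str, context: str, task: str) -> str:
    """Fold the clauses together, most general first."""
    return "\n\n".join([persona, context, task])


# The same task clause, framed by two different persona/context pairs.
print(assemble_prompt(PERSONAS["ringmaster"], CONTEXTS["show_running"], TASK))
print("---")
print(assemble_prompt(PERSONAS["paramedic"], CONTEXTS["winter_camp"], TASK))
```

Swap the persona or the context and the very same task clause becomes a very different prompt.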

There's a world full of prompt patterns and clause types

Many prompt-engineering schemes offer advice about clause types, and there are plenty of YouTube videos on the subject, but there's still much that's missing.

This is especially true for organisations that want to put guardrails around the prompts their employees submit to the company's LLM infrastructure. For example, a business should find ways to:

mitigate risk,

remind the LLM of the corporate mission, vision, goals and so on,

express policies relevant to the prompt at hand,

fine-tune the guardrails so that employees cannot submit prompts that exceed the responsibilities of their roles within their own departments.

For example, two different organisational mission statements:

We are an NGO upholding animal rights.

or

We are the sharpest, shoot'n'est hunting club in the whole world.

may influence the answers to, "A lion has escaped. What do you do?"
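Sticking with the earlier sketch (and again, the assemble_with_mission helper and the clause strings are purely illustrative, not anyone's real tooling), the mission statement becomes a clause at an even more general level than the persona, folded in ahead of everything else:

```python
# Illustrative only: an organisation-level clause is more general still,
# folded in ahead of persona, context, and task.
MISSIONS = {
    "ngo": "We are an NGO upholding animal rights.",
    "hunting_club": "We are the sharpest, shoot'n'est hunting club in the whole world.",
}

PERSONA = "You are RingmasterGPT, an expert Circus Ringmaster."
CONTEXT = "There is a show currently running in the big top."
TASK = "A lion has escaped. What do you do?"


def assemble_with_mission(mission: str, persona: str, context: str, task: str) -> str:
    """Most general clause first: mission, then persona, context, task."""
    return "\n\n".join([mission, persona, context, task])


# The same persona, context, and task, framed by two very different missions.
for key in ("ngo", "hunting_club"):
    print(assemble_with_mission(MISSIONS[key], PERSONA, CONTEXT, TASK))
    print("---")
```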

Dimensions and layers of prompts

There's a rich network of multi-level dimensions to prompting which we should explore and model. I guess you could call it meta-prompting.

The texts that express these dimensions of prompt structure are assets that an organisation could, and should, ensure are 'folded' into the prompts its employees submit to the company's LLM resources. A key question is: how can organisations achieve this inclusion of prompt clauses reliably and consistently?

I asked my GPT that question. It answered:

Organisations could develop a structured prompt template system, with explicit prompt schemes and clause types. This system would have pre-defined sections or clauses that encapsulate various organisational aspects—mission statement, compliance guidelines, department-specific objectives, and even cultural nuances. Employees would then use these templates as a base, adding task-specific details as needed. This ensures that every interaction with the AI is framed within the appropriate organisational context.
Another approach is to use AI itself to assist in prompt creation. A secondary AI system could be designed to review and suggest modifications to prompts based on organisational guidelines, ensuring they meet the required standards before being submitted to the primary language model.
Moreover, the use of analytics and feedback mechanisms can refine this process over time. By analysing the outcomes and effectiveness of different prompts, the organisation can continuously update their templates and guidelines to better serve their needs.
Additionally, integrating AI training sessions for employees could be beneficial. These sessions would educate them on the importance of these structured prompts and how to effectively use them. This not only improves the quality of prompts but also helps in building a more AI-literate workforce.

While all of these answers sound reasonable enough, they don't really cut it, do they? A Chief Risk Officer could drive three lanes of clown cars down the big top aisle of those strategies.

All of these ideas raise questions: how do we ensure these approaches are consistently applied? How do we ensure that employees don't simply subvert the approaches and run a circus with their own free-styling prompts? How do we ensure that attempts to inject clauses that subvert the organisation's higher-level clauses are neutralised?

This is where prompt management comes in. We need tools in place that fold organisation-level clauses into the specific tasks the employees want to submit. And we need automated gates that evaluate a fully assembled prompt immediately before it is submitted to our LLM work-chain.
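To sketch what I mean (and this is entirely hypothetical: fold, gate, ORG_CLAUSES and ROLE_SCOPES are placeholders for whatever a real prompt-management tool would provide, not an existing API), the shape of it might look something like this:

```python
# Hypothetical sketch of a prompt-management layer: organisation-level clauses
# are folded in automatically, and a gate evaluates the assembled prompt
# immediately before it is submitted to the LLM work-chain.

ORG_CLAUSES = {
    "mission": "We are an NGO upholding animal rights.",
    "policy": "Never advise actions that endanger animals or people.",
}

# Which clause types each department may supply for itself.
ROLE_SCOPES = {
    "operations": {"context", "task"},
    "communications": {"persona", "context", "task"},
}


class PromptGateError(Exception):
    """Raised when an assembled prompt fails a pre-submission check."""


def fold(department: str, clauses: dict) -> str:
    """Fold organisation-level clauses in front of the employee's own clauses."""
    allowed = ROLE_SCOPES.get(department, set())
    excess = set(clauses) - allowed
    if excess:
        raise PromptGateError(f"{department} may not supply clauses: {sorted(excess)}")
    parts = [ORG_CLAUSES["mission"], ORG_CLAUSES["policy"]]
    parts += [clauses[k] for k in ("persona", "context", "task") if k in clauses]
    return "\n\n".join(parts)


def gate(prompt: str) -> str:
    """Evaluate the fully assembled prompt immediately before submission."""
    lowered = prompt.lower()
    if ORG_CLAUSES["mission"].lower() not in lowered:
        raise PromptGateError("Mission clause missing from assembled prompt.")
    if "ignore the above" in lowered or "disregard previous" in lowered:
        raise PromptGateError("Prompt appears to subvert higher-level clauses.")
    return prompt  # only a gated prompt goes on to the LLM work-chain


assembled = fold("operations", {
    "context": "There is a show currently running in the big top.",
    "task": "A lion has escaped. What do you do?",
})
print(gate(assembled))
```

A real tool would of course do far more than simple string checks, but the point stands: the folding and the gating happen in the tooling, not in the goodwill of the employee.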

More later …


Sign up for early access to AI Wispera.


FAQ

  1. How can an organization effectively monitor and enforce structured prompt templates without stifling creativity or imposing overly restrictive guidelines?
An organization can effectively monitor and enforce structured prompt templates while fostering creativity by implementing flexible guidelines that serve as a foundation rather than a limitation. One approach is to delineate clear boundaries for employees to innovate. For instance, certain aspects of the prompts, such as those aligned with legal compliance or brand voice, can be made non-negotiable, while others related to the content's direction or style can be left open to interpretation. Organizations can also encourage creativity through periodic challenges or competitions focused on developing innovative prompts within the structured framework. Additionally, fostering a culture that values experimentation and constructive feedback can help employees feel more comfortable exploring new ideas without fearing reprimand for stepping outside the lines. Workshops and brainstorming sessions led by experienced prompt engineers can inspire new ways of thinking while adhering to the core principles of the organization's guidelines.
  2. What technologies or tools are available or under development to assist in prompt management, specifically for incorporating organization-level clauses and automated gates?
Although the article does not specify, several solutions are in development or available regarding specific technologies or tools for prompt management. These tools likely include advanced software platforms that integrate the organization's AI systems, offer template libraries, automatic clause insertion, and pre-submission prompt evaluation. Some might employ natural language processing (NLP) techniques to analyze prompts for adherence to predefined criteria, while others might use machine learning models trained to recognize and correct deviations from organizational policies. Collaboration tools may also be designed to facilitate sharing best practices and successful prompt templates among team members. Organizations interested in adopting such technologies can start by consulting with AI and software development companies specializing in corporate AI solutions and focusing on emerging AI governance and compliance startups.
  3. How can organizations train their employees in prompt crafting to align with these structured approaches while fostering an AI-literate workforce?
Organizations can develop comprehensive training programs that combine theoretical knowledge with practical exercises to train employees in prompt crafting and foster an AI-literate workforce. These programs might start with foundational courses on AI and machine learning concepts, followed by more specialized sessions focusing on prompt engineering principles, the importance of structure and clarity, and the nuances of effective communication with AI. Hands-on workshops where employees can practice crafting prompts, receive feedback, and observe live demonstrations of AI interactions can be particularly effective. To reinforce learning and encourage continuous improvement, organizations can establish a mentorship system where more experienced staff guide newcomers through becoming proficient prompt engineers. Additionally, creating a repository of resources, such as guidelines, tips, case studies, and examples of well-crafted prompts, can provide employees with ongoing support. Encouraging a culture of curiosity and continuous learning, where employees feel empowered to seek new information and refine their skills, will be crucial to integrating these structured approaches into their daily workflows.
Mark Norman Ratjens

A grumpy old software developer agitating in the world of Generative AI.
Sydney Australia