On Prompting

Teams who prompt together... make better products.

Large Language Models Require the Mastery of… Language

I’m going to be honest with you. When I first heard about the “prompt engineer” role and the attached salary, I rolled my eyes so hard that I didn’t know if they would ever return to normal. I am happy to announce that I was wrong in a big way. As I started trying to generate more advanced, consistent, and usable outputs, I quickly realized there’s an art and a science to the entire process. I also saw the immediate value in developing the skill for personal use, product development, or even helping kids finish their homework.

There are a lot of guides and resources out there that dive deep into prompting. If you want to start from the ground up, I highly recommend checking out the original paper on the 26 principles of prompting and/or the fantastic Prompt Guide site.

If you want the dense information broken down in a more digestible format with some clear dots connected to product strategy and development, read on.

Imagine every member of your team, regardless of their role, had the ability to fine-tune the end-user experience in a positive way. There’s no need to sort through tons of tickets, prioritize them against your roadmap, or provide in-depth explanations over multiple conversations. In addition, you could roll out new features and experiences in less than half the time, without changing much, if anything, about the UI at all.

Doesn’t that sound like a wonderful world to live in?

Guess what?! We’re there.

This is one of the biggest epiphanies I had when I started building AI-powered products, and it keeps me engaged and excited. I genuinely feel like I can contribute more directly to the product and the end user's experience in a productive and efficient way. The best part? I didn’t have to learn how to code, up my skills in Figma, or manage infrastructure to do it.

Prompting essentially democratizes the product development process. Anyone from any team can easily identify an insight, write a potential solution, and test it within a short amount of time, sometimes minutes. That’s why I wanted to dive deeper into prompting, so let’s get going.

The Essentials of Effective Prompting

- Clarity and Specificity: Humans do better with clear requirements, and so does AI. The specificity of your input directly shapes the output, making it crucial to articulate exactly what you need from the AI. Check yourself by looking at your prompt as if you were about to send it to someone who was unfamiliar with the subject matter, who didn’t speak your language natively, or even as if it were instructions you were about to give a child.

- Contextual Depth: Incorporate relevant details into your prompts to give the model sufficient background, enhancing the relevance and accuracy of its responses. Why do you need the thing you are asking for? What are you trying to accomplish? What variables did you consider when deciding this is what you needed? Despite what people may say, AI isn’t magic. It’s basically trying to make as many connections as possible to predict what should come next or to answer your question.

- Brevity and Focus: While context is key, unnecessary details can mislead the model. Striking the right balance is more art than science, requiring a deep understanding of the model's capabilities. The more you give it, the more variables it has to consider in its response. For example, telling it that you want it to adopt a “helpful” demeanor might result in it starting every conversation with “How can I help you?” or some variation of it. Remember that it already understands and has knowledge of many things, so word choice is probably more important than anything.
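
To make these three principles concrete, here’s a minimal sketch contrasting a vague prompt with one that adds clarity, context, and focus. It uses the OpenAI Python SDK; the model name and the prompts themselves are illustrative assumptions, not a prescription:

```python
# A minimal sketch contrasting a vague prompt with a clear, contextual one.
# Uses the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Write something about our new feature."

specific = (
    "You are writing an in-app announcement for a SaaS analytics product. "
    "Audience: non-technical marketing managers. "
    "Feature: scheduled email reports. "
    "Goal: get users to schedule their first report. "
    "Constraints: two sentences, friendly but not salesy, no exclamation marks."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```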

Prompting Strategies

With this understanding, let’s dive into the types of prompting that can be applied across various SaaS product experiences to optimize the end-user experience (a code sketch of the first three follows the list):

1. Zero-Shot Prompting: This method is as straightforward as it gets. Picture this: you ask a question, and the AI provides an answer based solely on its pre-trained knowledge, no examples needed. This is perfect for general queries where immediate, clear-cut answers are beneficial, like a customer service chatbot that swiftly provides solutions without needing a prior interaction history. The format of the answer doesn’t matter because it’s simply about answering the user’s question.

2. Few-Shot Prompting: This involves giving the AI a few examples to illustrate what you're looking for. It’s like showing a colleague a few instances of a completed task and then asking them to take a stab at it. Few-shot is great for more nuanced tasks where context from previous examples helps shape the response, making it ideal for tasks like generating marketing copy across different formats, filling gaps in longer-form content, or ensuring the responses include specific elements, e.g., a link to documentation.

3. Chain-of-Thought (CoT) Prompting: Here’s where things get interesting. CoT involves prompting the AI to "think out loud" as it tackles a problem step-by-step. It's akin to brainstorming with your team, where each step is laid out for clarity. This is especially useful in complex scenarios where the reasoning process needs to be transparent, such as debugging a user issue in a software application or planning a complicated project.

4. Generated Knowledge Prompting: Consider this the creative brainstormer of the team. The model is first prompted to generate relevant facts or background knowledge about the problem, and that generated knowledge is then fed back in to inform the final answer. It’s fantastic for data-driven decision-making tools in SaaS applications, where it can suggest optimizations based on historical data analysis.

5. Tree-of-Thought Prompting: This is for exploring multiple reasoning pathways in parallel, evaluating and pruning branches along the way to provide a comprehensive view of scenarios, much like a strategic planning tool that forecasts several business outcomes based on different inputs.
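
Here’s a rough sketch of what the first three strategies can look like when written as chat message payloads. All of the content below is invented for illustration; the shape of each payload is the point:

```python
# Illustrative shapes for the first three strategies, written as chat
# message payloads. All content here is invented; the structure is the point.

# 1. Zero-shot: just ask, no examples.
zero_shot = [
    {"role": "user", "content": "How do I reset my password?"},
]

# 2. Few-shot: show a couple of worked examples, then the real input.
few_shot = [
    {"role": "system", "content": "Rewrite support replies in our brand voice."},
    {"role": "user", "content": "Your account is locked."},
    {"role": "assistant", "content": "Looks like your account got locked. No worries, here is how to get back in: ..."},
    {"role": "user", "content": "Your invoice is overdue."},
]

# 3. Chain-of-thought: ask the model to reason step by step before answering.
chain_of_thought = [
    {"role": "user", "content": (
        "A customer was charged twice for one subscription. "
        "Think through the possible causes step by step, "
        "then recommend the most likely fix."
    )},
]
```

Any of these lists can be passed as the `messages` argument of a chat-completion call like the one in the earlier sketch.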

There are many more proven methods to try out. Some require giving the AI tools to access external data sources, some require a feedback loop or access to a vector database, and others even leverage other models to provide feedback on the response. New methods are tested every day, and we’re really only restricted by the processing power and cost of running them at scale in experiments that gather enough data to validate them in an “official” capacity.

The Prompt Design and Engineering Cycle

1. Mapping Out the Data Sources and Tasks

The first step in crafting an effective AI prompt cycle is to clearly identify and map out all the necessary data sources and the specific tasks that the AI is expected to perform. This could range from processing user input and fetching data from databases to integrating third-party services and performing complex calculations. Understanding where each piece of data comes from and what the AI needs to accomplish with it is fundamental. For instance, if the AI needs to recommend products based on user behavior, the data sources would include user activity logs, product databases, and perhaps external trend data.
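
One lightweight way to capture this mapping is as plain data that the whole team can read and critique. The task names and sources below are hypothetical:

```python
# A hypothetical task-to-data-source map for a product recommendation flow.
# Writing it down as plain data makes gaps and dependencies easy to spot.
TASKS = {
    "summarize_user_behavior": {
        "sources": ["user_activity_logs"],
        "produces": "behavior_summary",
    },
    "recommend_products": {
        "sources": ["product_database", "external_trend_data"],
        "needs": ["behavior_summary"],
        "produces": "recommendations",
    },
}
```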

2. Determining the Players or Specialists Needed

Once the tasks are mapped out, it’s essential to identify the specialists, aka agents or “players,” required to execute these tasks. This could involve data scientists for model training, backend developers for API integrations, or AI ethicists to ensure the AI operates within ethical boundaries. Each role needs to be clearly defined so that every team member knows their responsibilities and how they contribute to the AI's operations.

3. Determining the Chronology of Tasks and Dependencies

After identifying the tasks and specialists, the next step is to determine the order in which these tasks should be performed and understand the dependencies between them. This involves creating a detailed workflow or a diagram that illustrates how tasks are interconnected and the sequence that maximizes efficiency and effectiveness. For example, user data processing might need to occur before personalized content can be generated, establishing a dependency that dictates the task order.
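
If you capture the dependencies as data, the execution order falls out mechanically. Here’s a minimal sketch using Python’s standard library; the task names are hypothetical:

```python
# A minimal sketch of ordering tasks by their dependencies using the
# standard library's topological sorter. Task names are hypothetical.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it runs.
dependencies = {
    "process_user_data": set(),
    "segment_users": {"process_user_data"},
    "generate_personalized_content": {"segment_users"},
    "send_campaign": {"generate_personalized_content"},
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)
# ['process_user_data', 'segment_users',
#  'generate_personalized_content', 'send_campaign']
```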

4. Designing the Handoff of Information Between Tasks

Effective communication between different tasks is crucial for a seamless operation. This step involves designing the handoff of information, including specifying the format and structure of data as it moves from one task to another. It’s important to standardize data formats across different stages to minimize errors and ensure that each component of the system can interpret the data correctly. For instance, if one task outputs user segmentation data, it should be in a format that the subsequent marketing strategy formulation task can readily use without needing extensive reformatting.
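
A typed structure is a simple way to enforce that standardization. The sketch below assumes a hypothetical segmentation-to-strategy handoff; the field names are invented for illustration:

```python
# A sketch of a standardized handoff between two hypothetical tasks: the
# segmentation step emits this structure, and the strategy step consumes
# it as-is, with no reformatting in between. Field names are invented.
from dataclasses import dataclass

@dataclass
class UserSegment:
    segment_id: str
    description: str        # e.g. "weekly-active admins on the Pro plan"
    user_count: int
    top_behaviors: list[str]

def formulate_strategy(segments: list[UserSegment]) -> str:
    # The downstream task reads the typed structure directly.
    lines = [f"- {s.description}: target {s.user_count} users" for s in segments]
    return "Campaign plan:\n" + "\n".join(lines)

segments = [
    UserSegment("seg-1", "weekly-active admins on the Pro plan", 1200,
                ["exports", "alerts"]),
]
print(formulate_strategy(segments))
```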

5. Testing and Evaluating Responses and Outputs

The final step in the prompt design and engineering cycle is rigorously testing and evaluating the AI’s responses and outputs. The nice thing about AI is that you can generally test the interactions at high volume in short cycles. In the academic world, you would set up very advanced testing to run across various parameters. In software development, you need to test for the unpredictability of humans, so it’s a creative exercise that anyone on your team can do by pasting your prompt into something like Playground and running through a few interactions.

When you do (a minimal harness sketch follows this checklist):

  • Check the response when no input is provided

  • Check the response for the same input multiple times

  • Test for the same scenario written in different ways

  • Test critical scenarios that are written poorly

  • Test for things that wouldn’t make sense, e.g., the weirdest support stories your team has to share
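
Here’s a minimal harness covering that checklist. It uses the OpenAI Python SDK; the system prompt, model name, and test cases are all illustrative:

```python
# A minimal sketch of a prompt test harness covering the checks above.
# Uses the OpenAI Python SDK; prompt, model, and cases are illustrative.
from openai import OpenAI

client = OpenAI()
SYSTEM_PROMPT = "You are a support assistant for an analytics product."

test_cases = [
    "",                                  # no input provided at all
    "How do I export my dashboard?",     # same input twice to check consistency
    "How do I export my dashboard?",
    "dashbord export how??",             # critical scenario, poorly written
    "my hamster deleted the Q3 report",  # the weird stuff users actually send
]

for case in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": case},
        ],
    )
    print(f"IN: {case!r}\nOUT: {response.choices[0].message.content}\n---")
```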