
Prompt engineering is one of the friendlier sections of the Generative AI Leader exam. It does not require ML math or deep infrastructure knowledge. It rewards understanding what each technique actually does and when to reach for it. In this post I will walk through every prompting style on the exam and the small details that tend to show up in the questions.
The framing I keep coming back to is this: every prompt engineering technique is just a different way of adding context. That single idea makes the whole topic easier to hold in your head, because once you stop thinking of these as unrelated tricks and start seeing them as different layers of context, the exam questions get a lot more obvious.
Zero-shot prompting is the simplest case. You ask the model to perform a task without giving it any examples. There are zero examples in the prompt, hence the name. You are relying entirely on the model's pre-trained knowledge.
This works for common, straightforward tasks the model has seen plenty of during training. Summarization, translation, and answering simple questions all fit here. If you paste a customer review and ask for a summary, you do not need to show the model what a summary looks like. It already understands the task conceptually.
The simplicity is both the strength and the weakness. Zero-shot is fast and direct, but it does not guide the model toward your specific format or style. If you need a particular structure or voice, zero-shot will usually miss it.
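A zero-shot prompt is really just the task instruction plus the input, with no examples in between. A minimal sketch, with an invented review for illustration:

```python
# Zero-shot: task instruction + input, zero examples. The model relies
# entirely on its pre-trained understanding of "summarize".
review = "The bottle keeps drinks cold all day, but the lid leaks."

prompt = (
    "Summarize the following customer review in one sentence:\n\n"
    + review
)

print(prompt)
```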
One-shot is exactly what it sounds like. You give the model one example before asking it to complete the task. You reach for it when you want to demonstrate a specific format, style, or pattern and a single clear example is enough to establish the template.
The classic case is a product description. You show the model how to write a description for one product, with a particular structure of name, description, and features. Then you ask it to write a description for a different product. You are not telling the model to use three features or to end with a specific phrase. You are showing the pattern and letting the model infer the rules. Show, do not tell.
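The "show, do not tell" pattern can be sketched as a prompt with exactly one worked example followed by the new task. The product names and features here are invented stand-ins:

```python
# One-shot: a single example establishes the name/description/features
# template; the model infers the rules from the pattern.
example = (
    "Product: TrailMug\n"
    "Description: A rugged 16 oz mug built for the outdoors.\n"
    "Features: insulated, leak-proof lid, carabiner handle"
)

prompt = (
    "Write a product description following the example.\n\n"
    + example
    + "\n\nProduct: AquaFlask\nDescription:"
)

print(prompt)
```

Note that nothing in the instruction says "list three features" or "use this structure". The example alone carries the format.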
Few-shot is where you provide multiple examples, usually two to five, so the model can pick up a more complex pattern. This is the sweet spot for a lot of real-world work.
The reason it tends to outperform one-shot is that diverse examples let the model see edge cases and variations. If you are converting formal documentation into a casual brand voice, three examples that vary in context and phrasing will calibrate the model to your specific style far better than one example or a paragraph of instructions ever could. The model picks up not just the target vocabulary but the decision logic behind it.
For exam purposes, remember the count: usually two to five examples. And remember that few-shot is the technique you reach for when the task is complex or domain-specific and you want reliable results.
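The formal-to-casual example above can be sketched as a prompt with three varied examples, squarely in the two-to-five range. The sentences are invented for illustration:

```python
# Few-shot: multiple varied examples calibrate the model to a specific
# brand voice better than instructions alone.
examples = [
    ("Please submit your request via the portal.",
     "Just pop your request into the portal."),
    ("The device must be charged before first use.",
     "Give it a full charge before you dive in."),
    ("Contact support for further assistance.",
     "Stuck? Ping our support crew."),
]

lines = ["Rewrite formal documentation in our casual brand voice.\n"]
for formal, casual in examples:
    lines.append(f"Formal: {formal}\nCasual: {casual}\n")
lines.append("Formal: Ensure the firmware is updated regularly.\nCasual:")

prompt = "\n".join(lines)
print(prompt)
```

The variation across the three pairs is the point: each one demonstrates a different kind of rewrite, so the model picks up the decision logic rather than a single surface pattern.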
Role-based prompting means you instruct the model to adopt a specific identity or persona. Instead of asking a generic question, you frame the request with something like "You are a financial advisor explaining compound interest to a teenager." That single sentence shapes the tone, the level of detail, and the perspective of the response.
The reason role-based prompting works is that a financial advisor speaks differently than a journalist, who speaks differently than a professor. The role creates an implicit framework for everything that follows. It also combines well with other techniques. You can assign a role and then provide few-shot examples, and the two amplify each other.
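In practice the role is just a sentence prepended to the request, which is why it stacks so cleanly with other techniques:

```python
# Role-based prompting: the persona line shapes tone, detail level,
# and perspective for everything that follows.
role = "You are a financial advisor explaining concepts to a teenager."
question = "How does compound interest work?"

prompt = f"{role}\n\n{question}"
print(prompt)
```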
Prompt chaining is breaking a complex task into smaller sequential prompts that build on each other. Rather than asking for everything at once, you iterate. Step one might be "Create a marketing tagline for a new water bottle." The model produces something like "Pure hydration, zero impact." Step two then takes that output and asks the model to write a social media caption that uses it.
The benefit is that you can check and refine the output at each stage. If step one produces a weak tagline, you fix it before it contaminates everything downstream. For complex content workflows or multi-stage problem-solving, chaining gives you control that a single mega-prompt cannot.
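The water bottle chain above can be sketched as two prompts wired together. The `generate` function here is a placeholder for whatever model call you actually use; it is stubbed with the tagline from the example so the plumbing is visible end to end:

```python
# Prompt chaining: step one's output becomes part of step two's input.
def generate(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned tagline so the
    # chain is runnable in isolation.
    return "Pure hydration, zero impact"

step1 = "Create a marketing tagline for a new water bottle."
tagline = generate(step1)

# This seam between steps is where you inspect and fix the output
# before it flows downstream.
step2 = f'Write a social media caption that uses the tagline "{tagline}".'
print(step2)
```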
Chain-of-thought, or CoT, prompting guides the model to generate intermediate reasoning steps before producing a final answer. The standard trigger is appending "Think step by step" to the prompt.
The example I keep in mind is a simple math problem. A store sells apples for $1.20 each. Maria buys 7 apples and pays with a $10 bill. How much change does she get? Without CoT, a model might just blurt out a wrong answer like $1.40 with no reasoning. Add "Think step by step" and the model works through it: seven apples at $1.20 is $8.40, Maria paid $10, so change is $1.60. Correct, and the reasoning is visible.
The important point is that making the reasoning visible does not just help you audit the answer. It actually improves accuracy, because the model is less likely to jump to a wrong conclusion when it has to justify each step first.
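The apple problem can be sketched as a CoT prompt, with the arithmetic the model is expected to reproduce checked alongside it:

```python
# Chain-of-thought: the trigger phrase is appended to the question.
question = (
    "A store sells apples for $1.20 each. Maria buys 7 apples and pays "
    "with a $10 bill. How much change does she get?"
)
cot_prompt = question + "\n\nThink step by step."

# The intermediate steps the model should surface:
cost = 7 * 1.20      # $8.40
change = 10 - cost   # $1.60
print(round(change, 2))  # prints 1.6
```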
ReAct stands for Reason and Act. It combines chain-of-thought reasoning with external actions, so the model is not limited to what it already knows. It can reach out to external tools as part of working through a problem.
The loop has three phases. The model first describes its logic in a Thought step. Then it executes a task in an Action step, which could be a web search or an API call. Then it processes the result in an Observation step. It repeats this loop until it reaches a Final Answer.
The detail that often shows up on the Generative AI Leader exam is that ReAct is usually configured through the system prompt, not the user prompt. The developer sets it up in advance, defining which tools are available and instructing the model to follow the strict Thought, Action, Observation, Final Answer format for every request. That structure is what makes the reasoning and tool use traceable and controllable.
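A sketch of what that system prompt side might look like. The tool names and exact wording are invented; the part that matters for the exam is the fixed Thought, Action, Observation, Final Answer structure set up by the developer in advance:

```python
# ReAct: the system prompt defines the available tools and the strict
# loop format the model must follow on every request.
system_prompt = (
    "You can use these tools: web_search, calculator.\n\n"
    "For every request, follow this loop:\n"
    "Thought: describe your reasoning.\n"
    "Action: call one tool, e.g. web_search(\"query\").\n"
    "Observation: record the tool's result.\n"
    "Repeat Thought/Action/Observation as needed, then end with:\n"
    "Final Answer: the answer for the user."
)

print(system_prompt)
```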
Metaprompting flips the usual dynamic. Instead of writing the detailed instructions yourself, you describe your goal at a high level and ask the model to generate the prompt for you. You are essentially prompting the AI to prompt itself.
A typical meta-prompt looks like "I am building a customer service bot for a shoe company. Act as an expert prompt engineer. Write a highly detailed system prompt that instructs an AI how to handle refund requests politely, check for a 30-day return window, and offer a discount code if the customer is unhappy." The model responds with a fully structured system prompt that you can then deploy. You did not have to write it yourself. You just described what you needed and let the model generate the high-quality instructions.
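Assembling the meta-prompt from the example above is just templating the high-level goal into a request for instructions:

```python
# Metaprompting: describe the goal, ask the model to write the prompt.
goal = (
    "a customer service bot for a shoe company that handles refund "
    "requests politely, checks for a 30-day return window, and offers "
    "a discount code if the customer is unhappy"
)

meta_prompt = (
    "Act as an expert prompt engineer. Write a highly detailed system "
    f"prompt for {goal}."
)

print(meta_prompt)
```

The output of this prompt is itself a system prompt, which you review and then deploy.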
Here is the part that ties everything together. All of these techniques are doing the same underlying thing. They are adding different levels and types of context to the prompt. Because they all operate on the same principle, they combine cleanly.
A combined prompt might start with a role like "You are a history professor with decades of experience," then add task and audience constraints like "Explain the start of World War 2 to a freshman class using short sentences and chronological order," then introduce few-shot examples with "Here are some examples" followed by the examples themselves. Each technique is pulling its own weight. The role shapes tone and expertise. The format instructions constrain output structure. The few-shot examples calibrate exactly what kind of response should come back.
If a Generative AI Leader exam question describes a single prompt that includes a persona plus examples plus a step-by-step instruction, the right answer is usually that the prompt combines multiple techniques to layer context. That framing of "different layers of context" is what I would lock in before exam day.
My Generative AI Leader course covers all eight prompt engineering techniques in depth, with worked examples for each one, alongside the rest of the foundational material you need for the exam.