What is Prompt Engineering?
The Two-Phase Prompt Approach
Setup and Prerequisites
4. Prompt Engineering Techniques
There are several basic techniques we can use to prompt an LLM. Let's explore them.
- Zero-shot prompting: the most basic form of prompting. A single prompt requests a response from the LLM based solely on its training data.
- Few-shot prompting: guides the LLM by providing one or more examples it can rely on when generating its response.
- Chain-of-thought: instructs the LLM to break a problem down into steps and reason through them.
- Generated knowledge: improves the response by supplying generated facts or knowledge in addition to your prompt.
- Least-to-most: like chain-of-thought, this technique breaks a problem down into a series of steps, then asks for those steps to be performed in order.
- Self-refine: critiques the LLM's output and then asks it to improve.
- Maieutic prompting: ensures the LLM's answer is correct by asking it to explain various parts of that answer. This is a form of self-refine.
4.1 Zero-Shot Prompting
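A minimal sketch of a zero-shot prompt: a single instruction with no examples, so the model must answer from its training data alone. The helper name and task text are illustrative assumptions, not part of any specific API.

```python
def build_zero_shot_prompt(task: str) -> str:
    """Build a zero-shot prompt: one instruction, no demonstrations."""
    # Illustrative format; any clear single-instruction phrasing works.
    return f"Task: {task}\nAnswer:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of this review as positive or negative: "
    "'The battery died after two days.'"
)
```

The resulting string would be sent to the model as-is; everything the model needs must already be in its weights.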
4.2 Few-Shot Prompting
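A sketch of few-shot prompting, assuming a simple input/output demonstration format: worked examples are prefixed to the query so the model can imitate their pattern. The helper and example texts are hypothetical.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Prefix the query with worked input/output pairs the model can imitate."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    # The trailing "Output:" invites the model to complete the pattern.
    return f"{shots}\n\nInput: {query}\nOutput:"

prompt = build_few_shot_prompt(
    [("I loved it", "positive"), ("Total waste of money", "negative")],
    "The battery died after two days.",
)
```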
4.3 Chain-of-Thought Prompting
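A sketch of zero-shot chain-of-thought, assuming the common "think step by step" trigger phrase: the prompt explicitly asks the model to reason through intermediate steps before committing to an answer.

```python
def build_cot_prompt(question: str) -> str:
    """Append a reasoning trigger so the model works through intermediate
    steps before giving a final answer (zero-shot chain-of-thought)."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step, then state the final answer."
    )

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. How many now?"
)
```

An alternative is to include a full worked reasoning example few-shot style instead of the trigger phrase.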
4.4 Generated Knowledge Prompting
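A two-phase sketch of generated knowledge prompting: phase one asks the model to produce relevant facts, phase two injects those facts into the answering prompt. Both helpers and the sample facts are illustrative assumptions.

```python
def build_knowledge_prompt(topic: str) -> str:
    """Phase 1: ask the model to generate relevant facts about the topic."""
    return f"Generate three short factual statements about {topic}."

def build_answer_prompt(question: str, facts: list[str]) -> str:
    """Phase 2: prepend the generated facts so the answer can draw on them."""
    knowledge = "\n".join(f"- {fact}" for fact in facts)
    return f"Knowledge:\n{knowledge}\n\nQuestion: {question}\nAnswer:"

# In practice these facts would come from the phase-1 model call.
facts = [
    "Part of golf is trying to get a lower score than others.",
    "The score is the total number of strokes taken.",
]
prompt = build_answer_prompt("Is the objective of golf to get a high score?", facts)
```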
4.5 Least-to-Most Prompting
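A sketch of least-to-most prompting under the usual two-stage scheme: first ask the model to decompose the problem, then solve each subproblem in order, feeding earlier answers back into the context. Helper names and texts are hypothetical.

```python
def build_decompose_prompt(problem: str) -> str:
    """Stage 1: ask the model to list subproblems, simplest first."""
    return f"List the subproblems needed to solve this, simplest first:\n{problem}"

def build_solve_prompt(subproblem: str, solved: list[tuple[str, str]]) -> str:
    """Stage 2: solve one subproblem, with earlier Q/A pairs as context."""
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in solved)
    prefix = f"{context}\n" if context else ""
    return f"{prefix}Q: {subproblem}\nA:"

# One previously solved subproblem carried forward into the next prompt.
prompt = build_solve_prompt(
    "How many hours until the pool closes at 5 pm if it is now 2 pm?",
    [("What time does the pool close?", "5 pm")],
)
```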
4.6 Self-Refine Prompting
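A sketch of the self-refine loop: draft, critique, revise, repeated for a fixed number of rounds. The `llm` callable is a placeholder for whatever model client you use; the stub below only demonstrates the control flow.

```python
def self_refine(llm, task: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly ask the model to critique and improve it."""
    answer = llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = llm(f"Critique this answer and list its weaknesses:\n{answer}")
        answer = llm(
            f"Task:\n{task}\n\nDraft answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nRewrite the answer to address the critique:"
        )
    return answer

# Stub standing in for a real model call, just to show the call pattern.
calls = []
def fake_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"response {len(calls)}"

result = self_refine(fake_llm, "Write a haiku about autumn.", rounds=2)
```

Each round costs two extra model calls (critique plus rewrite), so the number of rounds is a quality/cost trade-off.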
4.7 Maieutic Prompting
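A sketch of the maieutic check: after getting an answer, ask the model to justify each part of it; parts whose explanations contradict each other are treated as unreliable. The helper and sample claims are illustrative assumptions.

```python
def build_explanation_prompts(answer: str, claims: list[str]) -> list[str]:
    """One follow-up prompt per claim, asking the model to justify that part
    of its answer (or admit it cannot)."""
    return [
        f"The answer given was: {answer}\n"
        f"Explain why the following part of it is true, "
        f"or state that it is not: {claim}"
        for claim in claims
    ]

prompts = build_explanation_prompts(
    "War cannot have a tie.",
    ["War is a competition.", "Competitions cannot end in a tie."],
)
```

The explanations would then be inspected (or fed back to the model) for consistency, which is why this counts as a form of self-refine.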
5. Summary
- Prompt engineering is about designing, testing, and refining prompts to control model outputs.
- Two steps: construct (write the prompt with context/format) and optimize (refine for better results).
- Techniques include:
    - Zero-shot: no examples.
    - Few-shot: provide examples.
    - Chain-of-thought: step-by-step reasoning.
    - Generated knowledge: add missing facts.
    - Least-to-most: break tasks into stages.
    - Self-refine: improve output through critique.
    - Maieutic: justify and verify answers.
By applying these techniques, you move from simply trying prompts to understanding why some prompts work better than others, a crucial step in becoming skilled at working with LLMs.