Complete Beginner's Guide to Prompt Engineering
Table of Contents
- What is Prompt Engineering?
- Why Prompt Engineering Matters
- The Two-Phase Approach
- Setup and Prerequisites
- Core Techniques with Examples
- Advanced Control Methods
- Best Practices and Troubleshooting
- Practice Exercises
- Advanced Tips for Production Use
- Real-World Application Examples
- Quick Reference Guide
- Conclusion
- Additional Resources
What is Prompt Engineering?
Prompt engineering is the art and science of crafting instructions (called "prompts") that guide AI language models to produce useful, accurate, and relevant responses.
Key Concepts for Beginners:
- Prompt: The text instruction you give to an AI model
- Response: The AI's output based on your prompt
- Context: Background information that helps the AI understand your request
- Emergent Property: Prompting capabilities that weren't explicitly programmed but emerged from the model's training
Simple Analogy:
Think of prompt engineering like giving directions to a very smart but literal-minded assistant. The clearer and more specific your instructions, the better results you'll get.
Example:
- ❌ Bad prompt: "Write something about cars"
- ✅ Good prompt: "Write a 200-word beginner's guide explaining the difference between electric and hybrid cars, focusing on cost and environmental impact"
Why Prompt Engineering Matters
Without Good Prompting:
- Responses may be too vague or too detailed
- Output might miss your intended focus
- Format could be inconsistent
- Information might be irrelevant to your needs
With Good Prompting:
- Controlled, predictable outputs
- Consistent formatting
- Targeted, relevant information
- Better accuracy and usefulness
Real-World Impact:
A poorly crafted prompt might waste hours of back-and-forth clarification, while a well-engineered prompt gets you exactly what you need on the first try.
The Two-Phase Approach
Phase 1: Construction
Goal: Build a clear, complete prompt
Elements to include:
- Context: What background does the AI need?
- Task: What specifically do you want?
- Format: How should the response be structured?
- Constraints: Any limitations or requirements?
- Audience: Who is this for?
Phase 2: Optimization
Goal: Refine and improve through testing
Methods:
- Test with different inputs
- Add examples if needed
- Clarify ambiguous parts
- Adjust tone and style
- Fine-tune constraints
Example of Both Phases:
Phase 1 (Construction):
You are a financial advisor. Explain cryptocurrency to a 65-year-old retiree who is considering investing $5,000 of their savings. Focus on risks and be honest about volatility. Write 3 paragraphs.
Phase 2 (Optimization - after testing):
You are a certified financial advisor speaking to a 65-year-old retiree with $100,000 in total savings who is considering investing $5,000 in cryptocurrency for the first time.
Task: Explain cryptocurrency investing with these requirements:
- Focus on key risks (volatility, scams, technology complexity)
- Use simple language, avoid jargon
- Be honest about potential for loss
- Mention safer alternatives briefly
- Write exactly 3 paragraphs of 4-5 sentences each
Tone: Respectful, educational, protective (not dismissive)
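To see what the optimization buys you, it helps to run both versions and compare the outputs side by side. A minimal sketch (it uses the `client` and `MODEL` set up in the next section):

# Sketch: run the Phase 1 and Phase 2 prompts and compare the answers
phase1_prompt = (
    "You are a financial advisor. Explain cryptocurrency to a 65-year-old retiree "
    "who is considering investing $5,000 of their savings. Focus on risks and be "
    "honest about volatility. Write 3 paragraphs."
)

phase2_prompt = """You are a certified financial advisor speaking to a 65-year-old retiree
with $100,000 in total savings who is considering investing $5,000 in cryptocurrency for the first time.
Task: Explain cryptocurrency investing with these requirements:
- Focus on key risks (volatility, scams, technology complexity)
- Use simple language, avoid jargon
- Be honest about potential for loss
- Mention safer alternatives briefly
- Write exactly 3 paragraphs of 4-5 sentences each
Tone: Respectful, educational, protective (not dismissive)"""

for label, p in [("Phase 1", phase1_prompt), ("Phase 2", phase2_prompt)]:
    result = client.responses.create(model=MODEL, input=p)
    print(f"--- {label} ---")
    print(result.output_text)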
Setup and Prerequisites
Installing Required Libraries
# Install the OpenAI Python library
pip install --upgrade openai
# Optional: For environment variable management
pip install python-dotenv
Setting Up Your API Key
Method 1: Environment Variable (Recommended)
# On Mac/Linux
export OPENAI_API_KEY="sk-your-key-here"
# On Windows
set OPENAI_API_KEY=sk-your-key-here
Method 2: Using a .env file
# Create a .env file in your project directory
# Add this line to the .env file:
OPENAI_API_KEY=sk-your-key-here
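If you go with the .env approach, load it before creating the client so the key lands in the environment. A minimal sketch using python-dotenv (installed above):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into environment variables
print("Key loaded:", bool(os.getenv("OPENAI_API_KEY")))  # check presence without printing the key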
Basic Python Setup
import os
from openai import OpenAI
# Initialize the client
client = OpenAI()
# The client automatically looks for OPENAI_API_KEY in environment variables
# Choose your model (these are common options)
MODEL = "gpt-4o-mini" # Good balance of capability and cost
# MODEL = "gpt-4o" # More capable but more expensive
# MODEL = "gpt-3.5-turbo" # Faster and cheaper but less capable
Understanding API Options
Two Main API Types:
- Responses API (Newer, Simpler)
  - Unified interface
  - Easier to use for beginners
  - Good default choice
- Chat Completions API (Traditional)
  - More control over conversation flow
  - Required for reproducible outputs (using the seed parameter)
  - Better for complex conversations
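The practical difference is mostly in how you pass the prompt and read the result. Here is a minimal sketch of the same request made both ways (it assumes the `client` and `MODEL` defined in the Basic Python Setup above):

# Same question through both APIs, for comparison
prompt = "Give me three tips for writing clear emails."

# Responses API: pass a string, read output_text
r1 = client.responses.create(model=MODEL, input=prompt)
print(r1.output_text)

# Chat Completions API: pass a list of messages, read the first choice
r2 = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": prompt}],
)
print(r2.choices[0].message.content)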
Core Techniques with Examples
1. Zero-Shot Prompting
What it is: Asking the AI to perform a task without providing examples, relying solely on its training knowledge.
When to use:
- Simple, straightforward tasks
- When the task is common and well-understood
- Quick one-off requests
Structure:
[Clear instruction] + [Context if needed] + [Format specification]
Basic Example:
def zero_shot_example():
prompt = "Summarize the main causes of World War I in 3 bullet points."
response = client.responses.create(
model=MODEL,
input=prompt
)
print("Zero-shot response:")
print(response.output_text)
return response.output_text
# Run the example
zero_shot_example()
Advanced Business Example:
def business_zero_shot():
prompt = """
You are a business consultant. A small bakery wants to increase revenue by 20% over the next 6 months.
Provide 5 specific, actionable strategies. Format as a numbered list with brief explanations.
Focus on realistic options for a business with 3 employees and limited budget.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.7 # Some creativity, but not too much
)
print("Business consultation (zero-shot):")
print(response.output_text)
return response.output_text
business_zero_shot()
Pro Tips for Zero-Shot:
- Be specific about the output format
- Include relevant constraints
- Specify the audience or use case
- Set the context clearly
2. Few-Shot Prompting
What it is: Providing 1-3 examples to teach the AI the pattern, format, or style you want.
When to use:
- When you need consistent formatting
- For classification tasks
- When the desired style isn't obvious
- For teaching specific patterns
Structure:
[Task description] + [Example 1] + [Example 2] + [Example 3] + [Your actual request]
Classification Example:
def few_shot_classification():
prompt = """
Classify each customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.
Examples:
Feedback: "The service was amazing and the food was delicious!"
Classification: POSITIVE
Feedback: "It was okay, nothing special but not bad either."
Classification: NEUTRAL
Feedback: "Worst experience ever. The staff was rude and the food was cold."
Classification: NEGATIVE
Now classify these:
1. "Great product, fast delivery, will buy again!"
2. "The item arrived damaged and customer service was unhelpful."
3. "Average quality, does what it's supposed to do."
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1 # Low temperature for consistency
)
print("Few-shot classification:")
print(response.output_text)
return response.output_text
few_shot_classification()
Creative Writing Style Example:
def few_shot_writing_style():
prompt = """
Write product descriptions in an enthusiastic, benefit-focused style.
Examples:
Product: Wireless Earbuds
Description: Transform your daily commute into a concert experience! These crystal-clear wireless earbuds deliver studio-quality sound that makes every song feel like a personal performance. Say goodbye to tangled wires and hello to freedom!
Product: Coffee Maker
Description: Wake up to the perfect cup every morning! This smart coffee maker learns your preferences and brews café-quality coffee at exactly the right temperature. Your kitchen will smell like your favorite coffee shop, and your mornings will never be the same!
Now write a description for:
Product: Ergonomic Office Chair
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.8 # Higher creativity for writing
)
print("Few-shot writing style:")
print(response.output_text)
return response.output_text
few_shot_writing_style()
Pro Tips for Few-Shot:
- Use 1-3 examples (more can confuse the model)
- Make sure examples are clear and diverse
- Keep examples concise but complete
- Ensure examples match your desired quality level
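When you classify many items, it can help to assemble the few-shot prompt programmatically instead of retyping the examples each time. A small sketch of that idea; the helper name and example data below are illustrative, not part of any API:

def build_few_shot_prompt(task, examples, new_input):
    """Assemble a few-shot classification prompt from (text, label) example pairs."""
    lines = [task, "", "Examples:"]
    for text, label in examples:
        lines.append(f'Feedback: "{text}"')
        lines.append(f"Classification: {label}")
        lines.append("")
    lines.append(f'Now classify: "{new_input}"')
    return "\n".join(lines)

examples = [
    ("The service was amazing and the food was delicious!", "POSITIVE"),
    ("It was okay, nothing special but not bad either.", "NEUTRAL"),
    ("Worst experience ever. The staff was rude and the food was cold.", "NEGATIVE"),
]
prompt = build_few_shot_prompt(
    "Classify each customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.",
    examples,
    "Great product, fast delivery, will buy again!",
)
response = client.responses.create(model=MODEL, input=prompt, temperature=0.1)
print(response.output_text)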
3. Chain-of-Thought Prompting
What it is: Asking the AI to show its reasoning steps, leading to more accurate and explainable results.
When to use:
- Complex problem-solving
- Math or logical reasoning
- When you need to verify the thinking process
- Multi-step tasks
Structure:
[Problem] + "Think step by step" or "Show your work" + [Specific steps if needed]
Mathematical Reasoning:
def chain_of_thought_math():
prompt = """
Solve this problem step by step:
A company's revenue grew by 25% in Year 1, then decreased by 10% in Year 2.
If they started with $100,000 revenue, what's their final revenue?
Show each calculation step and explain your reasoning.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1 # Low temperature for accuracy
)
print("Chain-of-thought math problem:")
print(response.output_text)
return response.output_text
chain_of_thought_math()
Business Analysis Example:
def chain_of_thought_business():
prompt = """
A restaurant is losing customers. Here's the data:
- Customer visits down 30% over 3 months
- Average order value unchanged
- Two new competitors opened nearby
- Recent negative review about slow service
- Food costs increased 15%
Analyze this step by step:
1. Identify the main problems
2. Determine which issues are most critical
3. Explain the relationship between problems
4. Recommend the top 2 solutions with reasoning
Think through each step carefully.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.3
)
print("Chain-of-thought business analysis:")
print(response.output_text)
return response.output_text
chain_of_thought_business()
Pro Tips for Chain-of-Thought:
- Explicitly ask for step-by-step reasoning
- Break complex problems into smaller parts
- Ask for explanations of each step
- Use lower temperature for logical consistency
4. Generated Knowledge Prompting
What it is: First providing or generating relevant facts, then using those facts to answer the main question.
When to use:
- When you have specific information not in the AI's training
- To reduce hallucinations
- When accuracy is critical
- For domain-specific knowledge
Structure:
[Provide relevant facts/context] + [Main task based on those facts]
Company Policy Example:
def generated_knowledge_policy():
company_policy = """
XYZ Company Remote Work Policy:
- Employees can work remotely up to 3 days per week
- Must be in office for client meetings and team meetings
- Home office stipend: $500 annually for equipment
- Internet reimbursement: $50/month
- Core collaboration hours: 10 AM - 3 PM in company timezone
- Approval required from direct manager
"""
prompt = f"""
Based on this company policy:
{company_policy}
Question: Sarah wants to work from home 4 days per week and asks about getting a new desk chair that costs $400. She also wants to know if she needs to be available at 9 AM for a client call.
Provide a clear response addressing each of Sarah's questions, citing the specific policy points.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.2
)
print("Generated knowledge (policy-based):")
print(response.output_text)
return response.output_text
generated_knowledge_policy()
Technical Documentation Example:
def generated_knowledge_technical():
api_documentation = """
API Endpoint: /api/users
Methods: GET, POST, PUT, DELETE
Authentication: Bearer token required
Rate limit: 100 requests per hour
GET /api/users - Returns list of users (admin only)
POST /api/users - Creates new user
PUT /api/users/{id} - Updates user (user can only update own profile)
DELETE /api/users/{id} - Deletes user (admin only)
Required fields for POST: email, password, firstName, lastName
Optional fields: phone, department
"""
prompt = f"""
Based on this API documentation:
{api_documentation}
A developer asks: "I want to create a new user account and then immediately update their phone number. I'm not an admin. What API calls do I need to make and what should I be aware of?"
Provide a step-by-step answer with the exact API calls needed, any potential issues, and what permissions are required.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1
)
print("Generated knowledge (technical):")
print(response.output_text)
return response.output_text
generated_knowledge_technical()
Pro Tips for Generated Knowledge:
- Clearly separate the knowledge section from the question
- Make sure the knowledge is accurate and complete
- Reference specific parts of the knowledge in your question
- Use this technique to prevent hallucinations
5. Least-to-Most Prompting
What it is: Breaking down complex problems into smaller, ordered sub-problems that build upon each other.
When to use:
- Very complex, multi-step problems
- When the solution path isn't obvious
- For systematic problem-solving
- Educational explanations
Structure:
[Break down the problem] + [Solve each step in order] + [Combine the results]
Project Planning Example:
def least_to_most_project():
prompt = """
Help me plan a company website redesign project. Break this down into steps and solve each step:
Step 1: What are the key phases of a website redesign project? List them in logical order.
Step 2: For the most critical phase you identified, what are the specific tasks and who should be involved?
Step 3: What potential risks could derail the project in that critical phase, and how can they be mitigated?
Step 4: Create a 2-week action plan for getting started, based on your analysis above.
Complete each step before moving to the next, and reference your previous answers.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.4
)
print("Least-to-most project planning:")
print(response.output_text)
return response.output_text
least_to_most_project()
Learning Complex Topic Example:
def least_to_most_learning():
prompt = """
I want to understand machine learning well enough to make business decisions about AI projects. Break this learning path into steps:
Step 1: What are the absolute basics I need to understand first? (Assume I know basic statistics)
Step 2: Based on Step 1, what real-world examples best illustrate these basics for business applications?
Step 3: What are the key business considerations and limitations I should understand about ML projects?
Step 4: Given Steps 1-3, create a 30-day learning plan with specific resources and time allocations.
Work through each step systematically, building on the previous steps.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.5
)
print("Least-to-most learning plan:")
print(response.output_text)
return response.output_text
least_to_most_learning()
Pro Tips for Least-to-Most:
- Clearly number or label each step
- Make sure steps build logically on each other
- Ask the AI to reference previous steps
- Use for complex problems that benefit from systematic breakdown
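The examples above pack every step into a single prompt. Least-to-most can also be run as a chain of separate calls, where each step sees the answers to the previous ones; that makes the build-up explicit at the cost of more API calls. A minimal sketch (the step wording is illustrative):

# Sketch: least-to-most as chained calls, where each step sees the earlier answers
steps = [
    "Step 1: List the key phases of a website redesign project in logical order.",
    "Step 2: For the most critical phase above, list the specific tasks and who should be involved.",
    "Step 3: List the main risks in that critical phase and how they can be mitigated.",
    "Step 4: Based on everything above, create a 2-week action plan for getting started.",
]

context = "We are planning a company website redesign project."
answers = []
for step in steps:
    prompt = "\n\n".join([context] + answers + [step])
    result = client.responses.create(model=MODEL, input=prompt, temperature=0.4)
    answers.append(f"{step}\nAnswer: {result.output_text}")

print(answers[-1])  # the final step's answer, built on all the earlier ones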
6. Self-Refine Prompting
What it is: Having the AI critique and improve its own output through iteration.
When to use:
- When quality is more important than speed
- For writing tasks that benefit from revision
- When you want to explore multiple approaches
- For learning and improvement
Structure:
[Initial request] → [Get first output] → [Ask for critique and improvement] → [Get refined output]
Writing Improvement Example:
def self_refine_writing():
# Step 1: Initial draft
initial_prompt = """
Write a professional email to a client explaining that their project will be delayed by 2 weeks due to unexpected technical challenges. The tone should be apologetic but confident.
"""
initial_response = client.responses.create(
model=MODEL,
input=initial_prompt,
temperature=0.6
)
draft_email = initial_response.output_text
print("Initial draft:")
print(draft_email)
print("\n" + "="*50 + "\n")
# Step 2: Critique and refine
refine_prompt = f"""
Here's a draft email to a client about a project delay:
---
{draft_email}
---
Please:
1. Critique this email for tone, clarity, and professionalism
2. Identify any missing elements that would make it more effective
3. Rewrite the email incorporating your improvements
Focus on: taking responsibility, showing understanding of impact, providing reassurance about quality, and offering specific next steps.
"""
refined_response = client.responses.create(
model=MODEL,
input=refine_prompt,
temperature=0.4
)
print("Critique and refined version:")
print(refined_response.output_text)
return draft_email, refined_response.output_text
self_refine_writing()
Business Strategy Refinement:
def self_refine_strategy():
# Step 1: Initial strategy
strategy_prompt = """
Create a customer retention strategy for a subscription software company experiencing 15% monthly churn. Provide 5 key initiatives.
"""
initial_strategy = client.responses.create(
model=MODEL,
input=strategy_prompt,
temperature=0.7
).output_text
print("Initial strategy:")
print(initial_strategy)
print("\n" + "="*50 + "\n")
# Step 2: Refine based on constraints
refine_prompt = f"""
Here's a customer retention strategy:
---
{initial_strategy}
---
Now refine this strategy considering these constraints:
- Limited budget ($50,000 for 6 months)
- Small team (2 people can work on retention)
- Need to show results within 90 days
- Technical implementation should be minimal
Critique the original strategy and provide an improved version that's more realistic and actionable given these constraints.
"""
refined_strategy = client.responses.create(
model=MODEL,
input=refine_prompt,
temperature=0.3
).output_text
print("Refined strategy:")
print(refined_strategy)
return refined_strategy
self_refine_strategy()
Pro Tips for Self-Refine:
- Give specific criteria for improvement
- Use lower temperature in refinement step
- Can iterate multiple times for best results
- Particularly effective for creative and strategic tasks
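If one critique-and-rewrite pass is not enough, the same pattern can be repeated in a loop. A small sketch of that idea; the helper name and fixed round count are illustrative:

def self_refine_loop(task_prompt, rounds=2):
    """Draft once, then critique-and-rewrite the draft a fixed number of times."""
    draft = client.responses.create(model=MODEL, input=task_prompt, temperature=0.6).output_text
    for _ in range(rounds):
        refine_prompt = f"""Here is a draft:
---
{draft}
---
Critique it briefly for clarity, tone, and completeness, then rewrite it.
Return only the improved version."""
        draft = client.responses.create(model=MODEL, input=refine_prompt, temperature=0.4).output_text
    return draft

final = self_refine_loop("Write a 100-word announcement that our product launch moves to next quarter.")
print(final)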
7. Maieutic Prompting (Socratic Method)
What it is: Forcing the AI to explain and justify its reasoning, similar to Socratic questioning.
When to use:
- When accuracy is critical
- To verify reasoning process
- For educational purposes
- To catch potential errors or bias
Structure:
[Question] + [Request for explanation] + [Follow-up questions about the reasoning]
Decision Verification Example:
def maieutic_decision():
prompt = """
Question: Should a small business with $10,000 in monthly profit invest in a $5,000 marketing campaign or save the money for emergency funds?
Provide your recommendation, then:
1. Explain the key factors you considered in making this decision
2. What assumptions are you making about this business?
3. What additional information would make you change your recommendation?
4. What are the potential risks of your recommended choice?
5. How confident are you in this recommendation on a scale of 1-10 and why?
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.2
)
print("Maieutic decision analysis:")
print(response.output_text)
return response.output_text
maieutic_decision()
Technical Problem Solving:
def maieutic_technical():
prompt = """
A website is loading slowly (8+ seconds). The diagnosis suggests it's a database issue.
Recommended solution: Add database indexing to the most queried tables.
Now justify this recommendation:
1. Why is database indexing the most likely solution for slow loading times?
2. What evidence would confirm that database queries are actually the bottleneck?
3. What could go wrong if we implement indexing incorrectly?
4. What alternative solutions should we consider if indexing doesn't work?
5. How would you measure if the solution is successful?
Be specific and explain your reasoning for each point.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1
)
print("Maieutic technical analysis:")
print(response.output_text)
return response.output_text
maieutic_technical()
Pro Tips for Maieutic:
- Ask "why" and "how do you know" questions
- Request specific evidence or reasoning
- Challenge assumptions explicitly
- Use for high-stakes decisions
- Great for learning and understanding
Advanced Control Methods
Temperature and Creativity Control
Temperature controls randomness in responses:
- 0.0-0.2: Very focused, consistent, deterministic
- 0.3-0.7: Balanced creativity and consistency
- 0.8-1.0: High creativity, more variation
def temperature_comparison():
base_prompt = "Write a tagline for a new coffee shop"
temperatures = [0.1, 0.5, 0.9]
for temp in temperatures:
response = client.responses.create(
model=MODEL,
input=base_prompt,
temperature=temp
)
print(f"Temperature {temp}: {response.output_text}")
print()
temperature_comparison()
Reproducible Outputs with Seed
For consistent results across runs, use the Chat Completions API with a seed:
def reproducible_output():
messages = [
{"role": "user", "content": "List 5 creative names for a pet dog"}
]
# This should produce the same output each time (mostly)
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
temperature=0,
seed=12345 # Fixed seed for reproducibility
)
print("Reproducible output:")
print(response.choices[0].message.content)
print(f"System fingerprint: {response.system_fingerprint}")
return response
reproducible_output()
Structured JSON Output
For when you need consistent, parseable responses:
def structured_json_output():
prompt = """
Analyze this customer feedback: "The product is great but shipping was slow and expensive."
Return ONLY a valid JSON object with this structure:
{
"overall_sentiment": "positive|negative|neutral",
"product_feedback": "string describing product sentiment",
"shipping_feedback": "string describing shipping sentiment",
"key_issues": ["array", "of", "main", "problems"],
"confidence_score": number_between_0_and_1
}
Return only the JSON, no other text.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1
)
print("Structured JSON output:")
print(response.output_text)
# Parse the JSON to verify it's valid
import json
try:
parsed = json.loads(response.output_text)
print("\n✅ Valid JSON structure")
print(f"Sentiment: {parsed['overall_sentiment']}")
print(f"Confidence: {parsed['confidence_score']}")
except json.JSONDecodeError:
print("❌ Invalid JSON structure")
return response.output_text
structured_json_output()
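Prompting for "JSON only" usually works, but the model can still wrap the answer in extra text. If you use the Chat Completions API, you can additionally request JSON mode with the `response_format` parameter, which constrains the output to valid JSON (the prompt still has to mention JSON and describe the fields you want). A hedged sketch:

import json

def structured_json_via_chat():
    # JSON mode on the Chat Completions API; the prompt must still say "JSON"
    # and describe the fields we expect back.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                'Analyze this feedback: "The product is great but shipping was slow and expensive." '
                "Respond as a JSON object with keys: overall_sentiment, product_feedback, "
                "shipping_feedback, key_issues (array), confidence_score (0-1)."
            ),
        }],
        response_format={"type": "json_object"},
        temperature=0.1,
    )
    data = json.loads(response.choices[0].message.content)
    print(data["overall_sentiment"], data["confidence_score"])
    return data

structured_json_via_chat()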
Best Practices and Troubleshooting
Common Problems and Solutions
Problem 1: Output is too generic or vague
Bad Example:
prompt = "Write about marketing"
Good Example:
prompt = """
You are a marketing consultant for B2B SaaS companies.
Write a 300-word guide on email marketing specifically for software startups with under 100 customers.
Focus on practical tactics they can implement immediately with limited budget.
Include 3 specific examples and format as bullet points with brief explanations.
"""
Problem 2: Inconsistent formatting
Solution: Use few-shot prompting
def fix_inconsistent_formatting():
prompt = """
Format product reviews in a consistent structure:
Example:
**Product:** iPhone 14 Pro
**Rating:** ⭐⭐⭐⭐⭐ (5/5)
**Pros:** Excellent camera, fast performance, premium build
**Cons:** Expensive, battery could be better
**Verdict:** Great for power users willing to pay premium
Now format this review: "The Samsung Galaxy is really good, camera is nice and it's fast, but costs too much and battery dies quickly, I'd give it 4 stars, good for most people but maybe not worth the high price"
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.2
)
print(response.output_text)
fix_inconsistent_formatting()
Problem 3: AI "hallucinates" or invents facts
Solution: Use generated knowledge prompting
def prevent_hallucination():
# Provide actual facts first
facts = """
Company: TechStart Inc.
Founded: 2020
Employees: 25
Revenue 2023: $2.1M
Main product: Project management software
Key clients: 150+ small businesses
"""
prompt = f"""
Based ONLY on these verified facts:
{facts}
Write a brief company overview for their website's About page.
Do not add any information not provided in the facts above.
If you need additional information to complete a section, note what's missing.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1
)
print(response.output_text)
prevent_hallucination()
Universal Prompt Template
Use this template for consistently good results:
def universal_prompt_template(
role="helpful assistant",
context="",
task="",
constraints="",
format_request="",
examples="",
tone="professional and clear"
):
template = f"""
You are a {role}.
{f"Context: {context}" if context else ""}
Task: {task}
{f"Constraints: {constraints}" if constraints else ""}
{f"Format: {format_request}" if format_request else ""}
{f"Examples: {examples}" if examples else ""}
Tone: {tone}
"""
return template.strip()
# Example usage:
prompt = universal_prompt_template(
role="customer service expert",
context="A customer is upset about a delayed shipment of their wedding dress, ordered 2 months ago for a wedding next week",
task="Write a response email that addresses their concern, takes responsibility, and offers solutions",
constraints="Keep under 200 words, be empathetic but professional",
format_request="Email format with subject line",
tone="apologetic and solution-focused"
)
print("Generated prompt:")
print(prompt)
print("\n" + "="*50 + "\n")
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.4
)
print("Response:")
print(response.output_text)
Practice Exercises
Exercise 1: Zero-Shot to Few-Shot Improvement
Task: Improve this zero-shot prompt using few-shot technique
def exercise_1():
# Original zero-shot prompt (try this first)
zero_shot = "Classify the urgency of this customer support ticket: 'My account is locked and I can't access my files for tomorrow's presentation'"
# Your improved few-shot version here:
few_shot = """
Classify customer support tickets as HIGH, MEDIUM, or LOW urgency:
Examples:
"Website is down for all users" → HIGH
"Question about billing cycle" → LOW
"Can't access account before important meeting" → HIGH
"Feature request for mobile app" → LOW
Now classify: "My account is locked and I can't access my files for tomorrow's presentation"
"""
# Test both versions
for prompt_type, prompt in [("Zero-shot", zero_shot), ("Few-shot", few_shot)]:
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1
)
print(f"{prompt_type} result:")
print(response.output_text)
print()
exercise_1()
Exercise 2: Add Chain-of-Thought Reasoning
Task: Take this prompt and add chain-of-thought reasoning
def exercise_2():
# Original prompt
basic_prompt = "Should a startup spend $10,000 on Google Ads or hire a content writer for 6 months?"
# Your chain-of-thought version:
cot_prompt = """
A startup needs to decide: spend $10,000 on Google Ads or hire a content writer for 6 months?
Think through this decision step by step:
1. What are the immediate vs long-term benefits of each option?
2. What factors about the startup would influence this decision (stage, audience, current marketing)?
3. What are the risks and potential ROI of each approach?
4. Based on your analysis, what would you recommend and why?
Show your reasoning for each step.
"""
# Test both versions
for prompt_type, prompt in [("Basic", basic_prompt), ("Chain-of-thought", cot_prompt)]:
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.3
)
print(f"{prompt_type} result:")
print(response.output_text)
print("\n" + "="*40 + "\n")
exercise_2()
Exercise 3: Self-Refine a Business Email
Task: Use self-refine to improve a business communication
def exercise_3():
# Step 1: Generate initial email
initial_prompt = """
Write an email to your team announcing that the office will be closed for renovations for 2 weeks,
and everyone will work from home during this period.
"""
initial_email = client.responses.create(
model=MODEL,
input=initial_prompt,
temperature=0.6
).output_text
print("Initial email:")
print(initial_email)
print("\n" + "="*50 + "\n")
# Step 2: Self-refine
refine_prompt = f"""
Here's an email draft about office renovations:
---
{initial_email}
---
Improve this email by:
1. Making it more specific about dates and logistics
2. Addressing potential employee concerns proactively
3. Adding a positive, exciting tone about the improvements
4. Including clear next steps and contact information
5. Ensuring it sounds leadership-appropriate
First, critique the original email, then provide the improved version.
"""
refined_response = client.responses.create(
model=MODEL,
input=refine_prompt,
temperature=0.4
).output_text
print("Self-refined version:")
print(refined_response)
exercise_3()
Exercise 4: Create a Multi-Technique Prompt
Task: Combine multiple techniques for a complex business scenario
def exercise_4_complex_scenario():
prompt = """
You are a business consultant with expertise in retail operations.
CONTEXT (Generated Knowledge):
- Small boutique clothing store
- Monthly revenue: $25,000
- Monthly expenses: $20,000
- 3 employees including owner
- Located in downtown area
- Foot traffic down 40% since pandemic
- Strong Instagram following (5,000 followers)
- Inventory: mix of local and mainstream brands
EXAMPLES of successful strategies (Few-shot):
Similar store A: Added online sales + curbside pickup → 30% revenue increase
Similar store B: Partnered with local influencers → 25% new customer growth
Similar store C: Started styling consultation service → $200/session additional revenue
TASK (Chain-of-thought):
Help this store owner create a recovery plan. Think through this step-by-step:
1. ANALYZE: What are the biggest challenges and opportunities based on the data?
2. PRIORITIZE: Which problems should be addressed first given limited resources?
3. STRATEGIZE: What specific actions would have the highest impact-to-effort ratio?
4. PLAN: Create a 90-day implementation timeline with measurable goals
CONSTRAINTS:
- Budget limit: $3,000 for new initiatives
- Owner can dedicate 10 hours/week to new projects
- Must maintain current service quality
- Solutions should leverage existing Instagram following
FORMAT:
Structure your response with clear headings for each step, specific action items, and success metrics.
Work through each step systematically, showing your reasoning.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.4
)
print("Multi-technique business consultation:")
print(response.output_text)
return response.output_text
exercise_4_complex_scenario()
Advanced Tips for Production Use
Error Handling and Robustness
def robust_api_call(prompt, max_retries=3, fallback_model="gpt-3.5-turbo"):
"""
Robust API call with error handling and retries
"""
import time
for attempt in range(max_retries):
try:
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.3
)
# Validate response
if response.output_text and len(response.output_text.strip()) > 0:
return response.output_text
else:
raise ValueError("Empty response received")
except Exception as e:
print(f"Attempt {attempt + 1} failed: {str(e)}")
if attempt == max_retries - 1:
# Last attempt - try with fallback model
try:
response = client.chat.completions.create(
model=fallback_model,
messages=[{"role": "user", "content": prompt}],
temperature=0.3
)
return response.choices[0].message.content
except Exception as e2:
print(f"Fallback also failed: {str(e2)}")
return f"Error: Unable to process request after {max_retries} attempts"
# Wait before retry
time.sleep(2 ** attempt) # Exponential backoff
return "Error: Max retries exceeded"
# Example usage
result = robust_api_call("Summarize the benefits of renewable energy in 3 points")
print(result)
Prompt Validation and Testing
def validate_prompt_quality(prompt):
"""
Simple prompt quality checker
"""
issues = []
# Check for basic structure
if len(prompt.strip()) < 10:
issues.append("Prompt too short - add more context")
if not any(word in prompt.lower() for word in ['write', 'create', 'analyze', 'explain', 'generate', 'list', 'compare']):
issues.append("No clear action verb - specify what you want the AI to do")
# Check for format specification
if not any(word in prompt.lower() for word in ['format', 'structure', 'list', 'paragraph', 'bullet', 'json']):
issues.append("Consider specifying desired output format")
# Check for constraints
if len(prompt) > 50 and not any(word in prompt.lower() for word in ['limit', 'maximum', 'minimum', 'exactly', 'about', 'approximately']):
issues.append("Consider adding length or scope constraints")
# Check for context
if not any(word in prompt.lower() for word in ['context', 'background', 'assume', 'given', 'for', 'audience']):
issues.append("Consider adding context about audience or background")
if issues:
print("Prompt improvement suggestions:")
for i, issue in enumerate(issues, 1):
print(f"{i}. {issue}")
else:
print("✅ Prompt looks well-structured!")
return len(issues) == 0
# Test the validator
test_prompts = [
"Write about dogs", # Poor prompt
"You are a veterinarian. Write a 200-word guide explaining dog vaccination schedules for new pet owners. Format as bullet points with brief explanations for each vaccination type." # Good prompt
]
for i, prompt in enumerate(test_prompts, 1):
print(f"\nPrompt {i}: {prompt}")
validate_prompt_quality(prompt)
print("-" * 50)
Batch Processing and Efficiency
def batch_process_prompts(prompts_list, temperature=0.3):
"""
Process multiple prompts efficiently
"""
results = []
for i, prompt in enumerate(prompts_list):
try:
print(f"Processing prompt {i+1}/{len(prompts_list)}...")
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=temperature
)
results.append({
'prompt': prompt[:50] + "..." if len(prompt) > 50 else prompt,
'response': response.output_text,
'success': True
})
except Exception as e:
results.append({
'prompt': prompt[:50] + "..." if len(prompt) > 50 else prompt,
'response': f"Error: {str(e)}",
'success': False
})
return results
# Example: Batch process product descriptions
products = ["Wireless Headphones", "Smart Watch", "Coffee Maker"]
prompts = [f"Write a compelling 50-word product description for {product}" for product in products]
batch_results = batch_process_prompts(prompts)
for result in batch_results:
print(f"Prompt: {result['prompt']}")
print(f"Success: {result['success']}")
print(f"Response: {result['response']}")
print("-" * 40)
Performance Optimization Tips
1. Choose the Right Model for Your Task
def model_comparison():
"""
Compare different models for the same task
"""
prompt = "Explain quantum computing to a business executive in 3 sentences"
models = ["gpt-4o-mini", "gpt-3.5-turbo"]
for model in models:
try:
if model == "gpt-4o-mini":
response = client.responses.create(
model=model,
input=prompt,
temperature=0.3
)
result = response.output_text
else:
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": prompt}],
temperature=0.3
)
result = response.choices[0].message.content
print(f"\n{model.upper()}:")
print(result)
print(f"Length: {len(result)} characters")
except Exception as e:
print(f"{model}: Error - {str(e)}")
model_comparison()
2. Token Management
def estimate_tokens(text):
"""
Rough token estimation (actual tokenization is more complex)
"""
# Rough approximation: 1 token ≈ 4 characters for English text
return len(text) // 4
def optimize_prompt_length(prompt, max_tokens=1000):
"""
Check if prompt might be too long and suggest optimizations
"""
estimated_tokens = estimate_tokens(prompt)
print(f"Estimated tokens: {estimated_tokens}")
if estimated_tokens > max_tokens:
print(f"⚠️ Prompt may be too long (>{max_tokens} tokens)")
print("Consider:")
print("- Breaking into smaller chunks")
print("- Removing unnecessary context")
print("- Using more concise language")
return False
else:
print("✅ Prompt length looks good")
return True
# Test with a long prompt
long_prompt = """
You are a senior business analyst with 15 years of experience in retail, e-commerce, and digital transformation. You have worked with Fortune 500 companies and helped them navigate complex market challenges. Your expertise includes data analysis, strategic planning, market research, competitive analysis, and change management.
I need you to analyze the current state of the electric vehicle market, including all major players, market segments, consumer preferences, technological trends, regulatory environment, supply chain considerations, charging infrastructure development, battery technology advancement, and future growth projections.
Please provide a comprehensive analysis that covers market size, growth rates, key competitors, market share distribution, consumer demographics, purchasing patterns, price sensitivity, feature preferences, brand loyalty, regional differences, policy impacts, environmental regulations, incentive programs, and technological disruptions.
Format your response as a detailed report with executive summary, methodology, findings, analysis, conclusions, and recommendations. Include specific data points, statistics, and actionable insights that would be valuable for strategic decision making.
""".strip()
optimize_prompt_length(long_prompt)
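The four-characters-per-token rule is only a rough guide. For an exact count you can use OpenAI's tiktoken library, as in the sketch below (assumes `pip install tiktoken`; `o200k_base` is the encoding used by the gpt-4o family):

# Sketch: exact token counting with tiktoken (pip install tiktoken)
import tiktoken

def count_tokens(text, encoding_name="o200k_base"):
    """Count tokens with the encoding used by gpt-4o-family models."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

print("Exact tokens:", count_tokens(long_prompt))
print("Rough estimate:", estimate_tokens(long_prompt))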
Real-World Application Examples
Customer Service Automation
def customer_service_classifier():
"""
Classify and route customer inquiries automatically
"""
prompt = """
You are a customer service AI that classifies incoming emails and suggests responses.
Categories: BILLING, TECHNICAL, SHIPPING, RETURN, GENERAL
Priority: HIGH, MEDIUM, LOW
For each email:
1. Classify the category and priority
2. Suggest a response approach
3. Flag if human escalation is needed
Email: "I ordered a laptop 3 weeks ago and it still hasn't arrived. My tracking shows it's stuck in customs. I need this for work tomorrow and I'm very frustrated. What can you do to help me?"
Respond in JSON format:
{
"category": "",
"priority": "",
"suggested_approach": "",
"escalate_to_human": true/false,
"reasoning": ""
}
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.1
)
print("Customer service classification:")
print(response.output_text)
customer_service_classifier()
Content Marketing Generator
def content_marketing_generator():
"""
Generate content marketing ideas based on business context
"""
prompt = """
You are a content marketing strategist for B2B SaaS companies.
Business Context:
- Company: Project management software for remote teams
- Target audience: Team leads and project managers at 50-500 person companies
- Pain points: Communication gaps, missed deadlines, unclear priorities
- Competitors: Asana, Monday.com, Trello
Generate 5 blog post ideas that would:
1. Address audience pain points directly
2. Position our software as the solution
3. Be shareable and engaging
4. Rank well for relevant search terms
For each idea, provide:
- Compelling headline
- Brief outline (3-4 main points)
- Target keyword
- Estimated word count
- Call-to-action suggestion
Format as a numbered list with clear sections.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.6
)
print("Content marketing ideas:")
print(response.output_text)
content_marketing_generator()
Financial Analysis Assistant
def financial_analysis_assistant():
"""
Analyze financial data and provide insights
"""
financial_data = """
Q3 2024 Financial Summary:
Revenue: $2.3M (up 15% from Q2)
Gross Profit: $1.4M (61% margin)
Operating Expenses: $1.1M
Net Income: $300K
Cash Flow from Operations: $450K
Customer Acquisition Cost: $150
Average Revenue Per User: $95/month
Churn Rate: 8% monthly
"""
prompt = f"""
You are a CFO analyzing quarterly performance.
Financial Data:
{financial_data}
Provide analysis following this structure:
1. KEY METRICS ASSESSMENT
- Which metrics are strong vs concerning?
- How do margins and growth rates look?
2. CASH FLOW HEALTH
- Is the business generating sustainable cash flow?
- Any working capital concerns?
3. GROWTH SUSTAINABILITY
- Is customer acquisition efficient?
- What does the churn rate suggest?
4. STRATEGIC RECOMMENDATIONS
- Top 3 priorities for next quarter
- Specific actions to improve weakest metrics
5. RISK FACTORS
- What could derail this growth trajectory?
Be specific and actionable in your recommendations.
"""
response = client.responses.create(
model=MODEL,
input=prompt,
temperature=0.2
)
print("Financial analysis:")
print(response.output_text)
financial_analysis_assistant()
Quick Reference Guide
When to Use Each Technique
| Technique | Best For | Avoid When |
| --- | --- | --- |
| Zero-shot | Simple, clear tasks | Complex reasoning needed |
| Few-shot | Pattern learning, formatting | You have no good examples |
| Chain-of-thought | Math, logic, analysis | Simple factual questions |
| Generated knowledge | Domain-specific tasks | Information is widely known |
| Least-to-most | Complex multi-step problems | Simple single-step tasks |
| Self-refine | Quality-critical outputs | Time-sensitive requests |
| Maieutic | High-stakes decisions | Routine, low-risk tasks |
Temperature Guide
- 0.0-0.2: Facts, analysis, JSON output, consistency needed
- 0.3-0.5: Business writing, explanations, balanced creativity
- 0.6-0.8: Marketing copy, creative content, brainstorming
- 0.9-1.0: Highly creative tasks, fiction, artistic content
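If you want every call to pick temperature the same way, these ranges can be captured in a small helper. The mapping below simply mirrors the ranges above; the category names are illustrative:

# Sketch: pick a temperature from the ranges above by task type
TEMPERATURE_BY_TASK = {
    "facts_or_json": 0.1,      # 0.0-0.2: consistency needed
    "business_writing": 0.4,   # 0.3-0.5: balanced
    "marketing_copy": 0.7,     # 0.6-0.8: creative content
    "fiction": 0.9,            # 0.9-1.0: highly creative
}

def ask(prompt, task_type="business_writing"):
    return client.responses.create(
        model=MODEL,
        input=prompt,
        temperature=TEMPERATURE_BY_TASK[task_type],
    ).output_text

print(ask("Write a tagline for a new coffee shop", task_type="marketing_copy"))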
Common Prompt Patterns
# Classification Pattern
classification_pattern = """
Classify [input] into one of: [categories]
Examples:
- "[example_input]" → [category]
- "[example_input]" → [category]
Now classify: "[actual_input]"
"""
# Analysis Pattern
analysis_pattern = """
You are a [role] analyzing [subject].
Context: [background_info]
Analyze this step-by-step:
1. [first_analysis_step]
2. [second_analysis_step]
3. [conclusion_step]
Be specific and show your reasoning.
"""
# Creative Generation Pattern
creative_pattern = """
Create [output_type] for [audience] that:
- [requirement_1]
- [requirement_2]
- [requirement_3]
Style: [tone_and_style]
Length: [specific_length]
Format: [output_format]
Examples of good [output_type]:
[examples]
"""
Troubleshooting Checklist
When your prompts aren't working well, run through these checks:
- ✅ Clarity: Is there a clear action verb, a defined task, and a specified output format?
- ✅ Context: Have you stated the role, audience, and background the AI needs?
- ✅ Quality: Would examples (few-shot), step-by-step reasoning, or explicit constraints help?
- ✅ Technical: Are the model choice, temperature, and prompt length appropriate for the task?
Conclusion
Prompt engineering is both an art and a science. The key to mastery is:
- Start with the basics: Learn to write clear, specific prompts
- Understand the techniques: Know when to use each method
- Practice systematically: Test and refine your approaches
- Build incrementally: Start simple, then add complexity as needed
- Learn from failures: Every bad output teaches you something
Remember: The best prompt engineers don't just know the techniques—they understand why each technique works and when to apply it. Start with simple prompts, experiment with the techniques in this guide, and gradually build your skills through practice.
The goal isn't to memorize every technique, but to develop an intuition for how to communicate effectively with AI systems to get the results you need.
Additional Resources
- OpenAI Documentation: https://platform.openai.com/docs
- OpenAI Cookbook: https://cookbook.openai.com
- Prompt Engineering Guide: https://www.promptingguide.ai
- Best Practices: https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api
Practice with real projects, iterate on your prompts, and don't be afraid to experiment. The more you practice, the more intuitive prompt engineering will become!