Prompt Engineering

What

Crafting inputs to get the best outputs from LLMs. No training required — just better instructions.

Core techniques

Be specific

Bad: “Summarize this”
Good: “Summarize this article in 3 bullet points, focusing on the key findings”

Few-shot examples

Provide examples of the desired input/output format:

Classify the sentiment: "Great product!" → Positive
Classify the sentiment: "Terrible service" → Negative
Classify the sentiment: "It was okay" →
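The few-shot pattern above can be assembled programmatically. A minimal sketch; `build_few_shot_prompt` and the example pairs are illustrative helpers, not part of any real API:

```python
# Hypothetical helper: build a few-shot prompt from labeled examples.
EXAMPLES = [
    ("Great product!", "Positive"),
    ("Terrible service", "Negative"),
]

def build_few_shot_prompt(text: str) -> str:
    """Labeled examples first, then the unlabeled query for the model to complete."""
    lines = [f'Classify the sentiment: "{t}" → {label}' for t, label in EXAMPLES]
    lines.append(f'Classify the sentiment: "{text}" →')
    return "\n".join(lines)

prompt = build_few_shot_prompt("It was okay")
```

The examples double as an implicit format specification: the model learns both the label set and the output shape from them.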

Chain of thought

Ask the model to reason step by step:

Solve this step by step:
Q: If a train travels 120km in 2 hours, what is its speed?
A: Let me think step by step...
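Wrapping a question this way is mechanical. A sketch, with `cot_prompt` as a hypothetical helper:

```python
# Hypothetical helper: wrap a question with a chain-of-thought instruction
# and an answer cue that nudges the model into explicit reasoning.
COT_PREFIX = "Solve this step by step:\n"

def cot_prompt(question: str) -> str:
    return f"{COT_PREFIX}Q: {question}\nA: Let me think step by step..."

p = cot_prompt("If a train travels 120km in 2 hours, what is its speed?")
```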

System prompts

Set the role and constraints:

You are a Python expert. Only provide code that uses the standard library.
Keep explanations under 3 sentences.
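In chat APIs, the system prompt is usually a separate message. A sketch using the widely used role/content message shape; exact field names vary by provider, so treat this as illustrative:

```python
# Common chat-message shape: a "system" message sets role and constraints,
# "user" messages carry the actual request. Field names vary by provider.
def make_messages(system: str, user: str) -> list[dict]:
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = make_messages(
    "You are a Python expert. Only provide code that uses the standard library. "
    "Keep explanations under 3 sentences.",
    "How do I parse a CSV file?",
)
```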

Tree of thought

Explore multiple reasoning branches, score each, and use search (BFS/DFS) to find the best path. Good for planning and strategy problems.
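The branch-and-score loop can be sketched as a beam search. Here `expand` and `score` are hypothetical stubs standing in for LLM calls that propose and rate next reasoning steps:

```python
# Stub: propose candidate next thoughts. A real system would ask the model.
def expand(path):
    last = path[-1] if path else 0
    return [last + 1, last + 2]

# Stub: rate a partial reasoning path (higher is better).
# A real system would ask the model to evaluate the path.
def score(path):
    return sum(path)

def tree_of_thought(depth=3, beam_width=2):
    frontier = [[]]  # each element is a path of thoughts
    for _ in range(depth):
        # Breadth-first expansion of every surviving path...
        candidates = [path + [step] for path in frontier for step in expand(path)]
        # ...then prune to the best-scoring partial paths.
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return frontier[0]

best = tree_of_thought()
```

The same skeleton works with DFS and backtracking; beam search is just the simplest way to bound the number of model calls per level.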

Patterns

Technique          When
Zero-shot          Simple, well-defined tasks
Few-shot           When you need specific format/style
Chain of thought   Reasoning, math, multi-step logic
Role prompting     When domain expertise matters
Tree of thought    Complex planning, strategy, puzzles
Self-consistency   Math, reasoning (sample multiple, majority vote)
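Self-consistency from the table is simple to sketch: sample several answers at nonzero temperature and take a majority vote. `sample_answer` is a hypothetical stub for an LLM call, simulated here as a noisy solver:

```python
import random
from collections import Counter

# Stub for a temperature-sampled LLM call: right ~80% of the time.
def sample_answer(question, rng):
    return "60 km/h" if rng.random() < 0.8 else "240 km/h"

def self_consistency(question, n=15, seed=0):
    """Sample n answers and return the most common one (majority vote)."""
    rng = random.Random(seed)
    answers = [sample_answer(question, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

answer = self_consistency("If a train travels 120km in 2 hours, what is its speed?")
```

Voting filters out reasoning paths that went wrong, at the cost of n model calls per question.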

Reasoning models caveat

With o1/o3/R1-style reasoning models, explicit CoT prompting can hurt performance. These models already reason internally, so adding “think step by step” is counterproductive. Use minimal, concise prompts.

Automated optimization

  • DSPy: define Signatures (input → output specifications), provide examples, and the compiler optimizes the prompt automatically. Manual prompting is becoming “assembly language.”
  • Metaprompt: use a powerful model to write system prompts for cheaper production models.
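The metaprompt pattern reduces to one extra model call. A sketch; `strong_model` is a hypothetical stub standing in for a real API call to a powerful model:

```python
# Stub for the powerful model: in practice this would be an API call.
def strong_model(prompt: str) -> str:
    return "You are a concise support agent. Answer in under two sentences."

def write_system_prompt(task_description: str) -> str:
    """Ask the strong model to author a system prompt for a cheaper model."""
    meta = (
        "Write a production-ready system prompt for a smaller model "
        f"that must do the following task: {task_description}"
    )
    return strong_model(meta)

# The generated prompt then ships as the cheap model's system message.
system_prompt = write_system_prompt("answer customer billing questions")
cheap_messages = [{"role": "system", "content": system_prompt}]
```

You pay for the strong model once at authoring time, then run the cheap model with the resulting prompt in production.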