Single-Prompt Techniques

The building blocks. Each one works inside a single conversation with AI — no special tools, no multi-step setup. Master these first and everything else gets easier.

Mindset Shift

Lead With Your Idea

Bring your draft, not a blank page

Stop asking "What should I write?" and start saying "Here's my draft, make it better." When you lead with your own thinking, AI becomes a collaborator instead of a replacement.

Writing Planning Problem Solving
Process

Iterate

Don't expect perfection on the first try

One prompt rarely gets you where you want to go. Treat working with AI as a conversation—give feedback, refine, repeat. A few rounds of back-and-forth usually get you something great.

Any Task Refinement Quality
Mindset

You Drive

AI is not always right

Don't blindly accept what AI gives you. Push back when something feels wrong. Question it. Reject it. You decide what's good — AI just offers suggestions.

Critical Thinking Judgment Control
Reasoning

Think Step by Step

Better thinking leads to better answers

Ask AI to show its reasoning. When it breaks a problem into steps, it catches errors and produces more accurate results. You can follow along and verify each step.

Problem Solving Accuracy Analysis
Expertise

Give It a Role

Shape answers with the right perspective

Tell AI who to be—a teacher, an editor, a consultant. A specific role focuses the response, drawing on relevant knowledge and the right communication style.

Any Task Expertise Perspective
Teaching

Show by Example

A few examples teach more than instructions

Don't just describe what you want—show it. Give AI 2–3 examples first, and it picks up your style, format, and preferences better than any description could.

Writing Style Formatting Consistency
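In code, showing by example is just prompt assembly: example pairs first, then the real task in the same shape. A minimal sketch (the example texts here are invented for illustration):

```python
def few_shot_prompt(examples, task):
    """Build a prompt that teaches by example: a few input/output pairs, then the real task."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {task}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("Long meeting ran over again.", "Meeting overran - agenda needed."),
    ("Server crashed at 2am, on-call fixed it.", "Overnight outage - resolved by on-call."),
]
prompt = few_shot_prompt(examples, "Quarterly numbers beat forecast by 12%.")
```

The trailing bare "Output:" invites the model to continue in exactly the format the examples established.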
Fundamentals

Be Specific

Vague questions get vague answers

Include the details that matter: who it's for, how long, what tone, what format. The more specific your request, the more useful the result.

Any Task Quality Clarity
Accuracy

Let It Say I Don't Know

Stop AI from guessing when it shouldn't

AI sounds confident even when it's wrong. Give it permission to admit uncertainty, and you'll get answers you can actually trust.

Accuracy Research Trust
Current Info

Ask It to Search

Get fresh information, not outdated guesses

AI's knowledge has a cutoff date. But it can search the web for current information — prices, news, recent events — if you ask it to.

Research Current Events Accuracy
Process

One Thing at a Time

Big prompts get shallow results

Don't cram multiple tasks into one prompt. Break big requests into smaller steps and work through them one at a time.

Writing Planning Quality
Grounding

Give It the Source

Don't let AI guess — give it the text

Paste the actual document, article, or data into your prompt. AI works from facts instead of making things up.

Documents Accuracy Analysis
Context

Recall First

Let AI gather what it knows before answering

Ask AI to recall relevant facts about a topic before answering your question. The gathered context leads to more grounded, accurate responses.

Reasoning Writing No Sources
Reasoning

Zoom Out First

Get the principle before the answer

Ask AI for the general principle that governs your question before asking for the specific answer. Grounding in fundamentals leads to more accurate, deeper responses.

Science Learning Accuracy
Reasoning

Generate Examples First

Let AI recall similar problems

Ask AI to recall similar problems and their solutions before tackling yours. The generated examples prime better reasoning — especially when you don't have examples to provide.

Math Coding Problem Solving
Exploration

Ask for Options

Get choices, not just one answer

Ask AI for multiple approaches with tradeoffs. You see the alternatives, weigh the pros and cons, and make the call.

Decisions Strategy Planning
Visual

Show It

Upload an image when words aren't enough

Some things are hard to describe. Screenshots, diagrams, charts — just show AI what you're looking at.

Screenshots Debugging Design
Learning

Make It Familiar

Connect new ideas to what you already know

Ask AI to explain concepts using your interests — cooking, sports, music. New ideas click when built on familiar ones.

Learning Analogies Understanding
Critique

Challenge Me

AI wants to agree — ask it to push back

AI is trained to be agreeable. Ask it to play devil's advocate and poke holes in your ideas before you commit.

Decisions Feedback Strategy
Process

Plan First

See the steps before you start

For complex tasks, ask AI to outline the plan before executing. You catch problems early and stay in control of the process.

Projects Complex Tasks Control
Structure

Set the Format

Tell AI how to structure the output

Ask for tables, bullet points, or numbered steps. The same information becomes immediately usable when it's structured for your needs.

Comparisons Lists Reports
Audience

Tell It Who It's For

Match the output to the reader

The same topic explained to a beginner looks nothing like the expert version. Tell AI who's reading and it adjusts depth, language, and examples.

Explanations Content Training
Boundaries

Set Constraints

Tell AI where to stop

Don't just say what you want. Say what you don't want and where to stop. Constraints turn sprawling answers into focused ones.

Focus Concise Limits
Motivation

Make It Matter

AI tries harder when stakes are high

Add emotional stakes to your prompt. Research shows AI gives more thorough, more careful responses when you signal that something is important.

Quality Accuracy Effort
Clarity

Ask a Better Question

Let AI improve your question first

Don't just ask your question. Ask AI to suggest a better version first. You'll learn what details matter and get more useful answers.

Questions Clarity Learning
Process

Interview Me

Let AI ask the questions first

Ask AI to interview you before giving advice. You'll clarify your own thinking — and AI will know what you actually need instead of guessing.

Planning Personalization Decisions
Blind Spots

What Am I Missing?

Find the gaps you can't see

Ask AI to find blind spots, assumptions, and risks. Tell it to skip the positives and focus only on what you might be missing.

Planning Decisions Risk
Complexity

Break Down the Question

Split big questions into smaller ones

For complex questions, ask AI to identify the sub-questions first. Better answers come from understanding what you actually need to know.

Decisions Analysis Planning
Verification

Show the Sources

Know which facts to verify

Ask AI to list the key facts it used in its answer. You'll know exactly what to check before trusting the conclusion.

Research Accuracy Trust
Automation

Structure the Output

Make AI responses machine-readable

Ask AI to return answers in formats like JSON or CSV. When output has a predictable structure, other tools can use it automatically.

Integration Workflows Data
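A minimal sketch of consuming structured output, assuming the model wraps the JSON in conversational prose (the reply text below is invented):

```python
import json

REPLY = (
    "Sure! Here is the record:\n"
    '{"name": "Widget", "price": 19.99, "in_stock": true}\n'
    "Let me know if you need more."
)

def extract_json(reply):
    """Pull the first {...} object out of a model reply that mixes in prose."""
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    return json.loads(reply[start : end + 1])

data = extract_json(REPLY)
```

Once parsed, the result is ordinary data that any downstream tool can consume.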
Extraction

Extract What Matters

Pull specific details from messy content

Give AI a long email, a photo, or a wall of text — and ask it to find just the pieces you need. Get the signal, skip the noise.

Documents Images Data
Reasoning

Thread of Thought

Walk through context piece by piece

For long or chaotic inputs, ask AI to process the context systematically — segment by segment — before forming an answer. Nothing gets missed.

Long Documents Accuracy Detail
Reasoning

Contrastive Chain-of-Thought

Show what NOT to do

Provide both correct and incorrect reasoning examples. AI learns what to avoid, not just what to follow — dramatically reducing common mistakes.

Examples Error Prevention Accuracy
Focus

System 2 Attention

Filter out the noise first

Ask AI to identify what in the context is actually relevant, strip the rest, then answer from the cleaned context. Biased or noisy inputs stop polluting the output.

Bias Reduction Focus Clarity
Examples

Complexity-Based Prompting

Rich examples teach thoroughness

When providing examples, choose the most detailed and complex ones. A few rich examples outperform many simple ones — AI mirrors the depth it sees.

Few-Shot Quality Thoroughness

What Happens When You Combine Them?

Each technique above works on its own. But the real power comes from combining them into multi-step workflows — where AI reasons, acts, checks its work, and tries again.

Compositions

Multi-step workflows that chain Level 1 techniques together. These involve loops, pipelines, or orchestration — AI doing multiple things in sequence or checking its own work before finishing.

Pipeline

Chain It

Output becomes input for the next step

Feed the output of one prompt into the next. Build complex results step by step — brainstorm, evaluate, expand, refine. Each step builds on the last.

Sequential Multi-step Pipelines
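The chaining idea can be sketched in a few lines. The `llm` function below is a placeholder that returns canned text, standing in for a real model call:

```python
def llm(prompt):
    # Placeholder for a real model call (e.g. an API client); returns canned
    # text here so the pipeline runs without a network connection.
    return f"[model reply to: {prompt[:40]}...]"

def chain(task, steps):
    """Run prompts in sequence, feeding each output into the next prompt."""
    result = task
    for step in steps:
        result = llm(f"{step}\n\n{result}")
    return result

final = chain(
    "Write a product announcement for our new API.",
    ["Brainstorm three angles for this task:",
     "Pick the strongest angle and draft it:",
     "Tighten the draft to 100 words:"],
)
```

Each step sees only what the previous step produced, which keeps every prompt small and focused.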
Branching

Route It

Classify first, then specialize

Use one prompt to classify the input, then route to specialized handlers. Different inputs get different treatment — each path optimized for its task.

Branching Specialization Classification
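A sketch of classify-then-route, with a keyword classifier standing in for a real LLM classification call (the category names and handlers are invented):

```python
def classify(text):
    # Stand-in for an LLM classification call; a real router would ask the
    # model to label the input with one of the known categories.
    if "refund" in text.lower():
        return "billing"
    if "crash" in text.lower():
        return "bug"
    return "general"

HANDLERS = {
    "billing": lambda t: f"Billing prompt (empathetic, policy-aware): {t}",
    "bug": lambda t: f"Bug-triage prompt (ask for steps to reproduce): {t}",
    "general": lambda t: f"General support prompt: {t}",
}

def route(text):
    """Classify first, then hand off to the specialized handler."""
    return HANDLERS[classify(text)](text)

routed = route("The app crashes when I upload a photo.")
```

Each handler can carry its own role, tone, and format instructions, tuned to its category.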
Iteration

Loop Until Done

Iterate automatically until criteria are met

Keep running and refining until a condition is satisfied. Set quality criteria, let AI evaluate its own work, and loop until it meets the bar.

Iteration Quality Control Self-improvement
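The loop can be sketched with an explicit quality check and a hard iteration cap. The `llm_revise` stub below just trims words, standing in for a real revision call:

```python
def llm_revise(draft):
    # Placeholder revision call: a real loop would ask the model to shorten
    # or improve the draft. Here each pass trims one trailing word.
    return draft.rsplit(" ", 1)[0]

def meets_bar(draft, max_words):
    return len(draft.split()) <= max_words

def loop_until_done(draft, max_words=8, max_rounds=10):
    """Refine until the quality criterion holds, with a hard iteration cap."""
    for _ in range(max_rounds):
        if meets_bar(draft, max_words):
            return draft
        draft = llm_revise(draft)
    return draft  # best effort if the cap is hit

result = loop_until_done("This sentence is much longer than the eight word limit we set")
```

The cap matters: a loop whose criterion is judged by the model itself may never terminate without one.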
Combining

Stack Them

Combine techniques in a single prompt

Layer multiple techniques together — role + audience + examples + format — into one powerful prompt. Get more precise output without multiple calls.

Combining Single Prompt Precision
Verification

Check Your Work

AI often glosses over details

When AI gives you an evaluation or makes a claim, ask it to verify its work. It will often admit it didn't look closely enough the first time.

Code Review Accuracy Verification
Refinement

Critique and Revise

Let AI improve its own writing

Ask AI to critique its output for tone, clarity, and structure, then revise based on its own feedback. A simple loop that turns first drafts into polished work.

Writing Polish Style
Context

Index First

Show the map before the territory

Don't dump everything into context. Send an index first, let AI pick what it needs, then provide just that. Focused context beats overloaded prompts.

Large Projects Focus Accuracy
Code

Let Code Do It

Get a reusable tool, not a one-time answer

Instead of asking AI to calculate something, ask it to write code that does it. The code runs perfectly, handles any scale, and works forever.

Calculations Automation Reusable
Integration

Define Your Tools

Let AI call your functions natively

Use native function calling APIs to let AI request tool execution. Clear tool definitions guide AI to call the right function with the right arguments.

Function Calling APIs Automation
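A sketch of the dispatch side, using the JSON-schema shape common to function-calling APIs. Exact field names vary by provider, so treat this layout, and the `get_weather` tool itself, as illustrative:

```python
# A tool definition in the JSON-schema style used by common function-calling
# APIs (exact field names vary by provider; this shape is illustrative).
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Lisbon'"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def dispatch(tool_call, registry):
    """Execute the function the model asked for, with the arguments it chose."""
    return registry[tool_call["name"]](**tool_call["arguments"])

registry = {"get_weather": lambda city, unit="celsius": f"18 degrees {unit} in {city}"}
result = dispatch({"name": "get_weather", "arguments": {"city": "Lisbon"}}, registry)
```

The `description` fields do real work: they are what guides the model to pick the right tool with the right arguments.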
Integration

Give It Your Toolkit

AI assembles your existing pieces

Tell AI what functions and APIs you have. AI writes new code using your real tools, so the result actually works with your system.

Code APIs Custom Systems
Voting

Self-Consistency

Ask multiple times, take the majority answer

Generate several independent reasoning paths for the same question, then take a majority vote. Like polling a jury instead of asking one person.

Accuracy Reliability Reasoning
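The voting itself is a few lines of Python. The sample answers below are invented; in practice you would collect them from independent model runs at temperature > 0:

```python
from collections import Counter

def majority_vote(answers):
    """Take the most common final answer across independent reasoning runs."""
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

# Imagine five independent samples of the same question:
samples = ["42", "42", "41", "42", "40"]
answer, agreement = majority_vote(samples)
```

The agreement ratio doubles as a confidence signal: low agreement means the question deserves a closer look.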
Agent Loop

ReAct

Think, act, observe, repeat

The foundational agent pattern. AI alternates between reasoning about what to do and actually doing it — searching, calculating, checking — grounding each step in real results.

Agents Tool Use Grounding
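A toy ReAct loop. The `llm` and `search` functions return canned text so the control flow runs standalone; a real implementation swaps in actual model and tool calls:

```python
import re

def llm(transcript):
    # Canned model: real ReAct would generate these lines. It decides to
    # search, observes the result, then answers.
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: search[capital of France]"
    return "Thought: The observation answers it.\nFinal Answer: Paris"

def search(query):
    # Stand-in tool; a real agent would call an actual search API.
    return "France's capital is Paris."

def react(question, max_steps=5):
    """Alternate reasoning and acting until a final answer appears."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        match = re.search(r"Action: search\[(.+?)\]", step)
        if match:
            transcript += f"\nObservation: {search(match.group(1))}"
    return None

answer = react("What is the capital of France?")
```

The transcript is the whole trick: every thought, action, and observation goes back into context, so each step is grounded in what actually happened.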
Retrieval

RAG Patterns

Search your knowledge, then answer

Retrieval-Augmented Generation: search a knowledge base first, then answer grounded in what you found. The pattern behind all AI + documents workflows.

Documents Search Grounding
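A sketch of the retrieve-then-answer pattern, with naive keyword-overlap retrieval standing in for embeddings (the documents and query are invented):

```python
DOCS = [
    "The refund window is 30 days from the date of purchase.",
    "Premium support is available on the Business plan.",
    "Passwords must be at least 12 characters long.",
]

def retrieve(query, docs, k=1):
    """Naive keyword-overlap retrieval; production systems use embeddings."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_rag_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_rag_prompt("How long is the refund window?", DOCS)
```

The "using only this context" instruction is what turns retrieval into grounding: the model answers from your documents, not from memory.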
Planning

Plan-and-Execute

Full plan upfront, then execute each step

Separate planning from execution completely. One call makes the full plan, then separate calls execute each step. Unlike ReAct, which interleaves planning and acting, the plan is fixed upfront.

Complex Tasks Orchestration Control
Decomposition

Self-Ask

AI asks and answers its own sub-questions

AI generates follow-up sub-questions, answers each (optionally via search), then combines intermediate answers into a final response. Multi-hop reasoning made explicit.

Research Multi-hop Reasoning
Decomposition

Least-to-Most

Solve the easiest parts first

Break a hard problem into ordered sub-problems, solve the easiest first, and feed each answer into the next harder one. Each solution provides context for the next.

Complex Problems Step-by-Step Building Up
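Least-to-most as a loop, feeding each answer into the next sub-problem. The `llm` stub returns canned answers for this invented apples example:

```python
def llm(prompt):
    # Canned solver for the demo; a real run would call a model.
    answers = {
        "How many apples per box? 24 apples, 4 boxes": "6",
        "With 6 per box, how many in 3 boxes?": "18",
    }
    return answers[prompt]

def least_to_most(sub_problems):
    """Solve ordered sub-problems, feeding each answer into the next prompt."""
    answer = None
    for template in sub_problems:
        prompt = template.format(prev=answer) if "{prev}" in template else template
        answer = llm(prompt)
    return answer

final = least_to_most([
    "How many apples per box? 24 apples, 4 boxes",
    "With {prev} per box, how many in 3 boxes?",
])
```

The ordering is the point: each easy answer becomes literal text inside the next, harder prompt.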
Reflection

Reflexion

Learn from failure, try again smarter

After getting a result, AI reflects on what went wrong and tries again with that self-critique as additional context. A self-improving loop with memory.

Self-Improvement Agents Memory
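A sketch of the reflect-and-retry loop. The `attempt` stub fakes a model that only fixes the divide-by-zero case after seeing the critique in its context:

```python
def attempt(task, critiques):
    # Stand-in for a model attempt: with no critique it "forgets" the edge
    # case; once the critique is in context it returns a fixed version.
    if critiques:
        return "def div(a, b):\n    return a / b if b else None"
    return "def div(a, b):\n    return a / b"

def check(code):
    # Simple verifier: does the code survive a divide-by-zero probe?
    ns = {}
    exec(code, ns)
    try:
        ns["div"](1, 0)
        return None
    except ZeroDivisionError:
        return "div(1, 0) raised ZeroDivisionError; handle b == 0."

def reflexion(task, max_tries=3):
    """Try, verify, reflect on the failure, and retry with the critique in memory."""
    critiques = []
    for _ in range(max_tries):
        result = attempt(task, critiques)
        error = check(result)
        if error is None:
            return result
        critiques.append(error)
    return result

code = reflexion("Write a safe division function.")
```

The critique list is the "memory": each retry sees every past failure, so the agent doesn't repeat the same mistake.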
Planning

ReWOO

Plan all tool calls upfront, execute in batch

Plan every tool call before executing any. Run them all in one batch, then synthesize. Uses 5x fewer tokens than ReAct by avoiding repeated context.

Efficiency Tool Use Cost Saving
Parallel

LLMCompiler

Run independent tasks in parallel

Analyze task dependencies and run independent tool calls simultaneously. Like a compiler optimizing instruction scheduling — faster results, same accuracy.

Speed Parallel Orchestration
Optimization

APE

AI writes and tests its own prompts

Automatic Prompt Engineer: let AI generate candidate prompts, evaluate them on test cases, and select the best one. Prompt engineering without the guesswork.

Prompt Design Optimization Automation
Delegation

Meta-Prompting

A conductor delegates to expert personas

One AI acts as a conductor, creating specialized expert personas on the fly and delegating tasks to them. The conductor synthesizes their work into a final result.

Expert Roles Delegation Complex Tasks
Decomposition

DecomP

Delegate sub-tasks to specialists

Decompose complex tasks into sub-tasks and delegate each to a specialized handler — different models, code interpreters, or retrieval systems.

Modular Specialization Flexibility
Speed

Skeleton of Thought

Outline first, expand in parallel

Generate a concise skeleton outline, then expand each point simultaneously via parallel API calls. Faster than sequential generation with comparable quality.

Speed Structure Writing
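A sketch of outline-then-parallel-expand, with canned stubs in place of the real API calls (which is where the parallelism actually pays off):

```python
from concurrent.futures import ThreadPoolExecutor

def llm_outline(topic):
    # Canned skeleton; a real call would ask the model for a short outline.
    return ["What caching is", "When to cache", "Common pitfalls"]

def llm_expand(point):
    # Canned expansion; each point would normally be its own API call,
    # which is why the expansions can run concurrently.
    return f"{point}: expanded paragraph."

def skeleton_of_thought(topic):
    """Outline first, then expand every point concurrently."""
    points = llm_outline(topic)
    with ThreadPoolExecutor() as pool:
        paragraphs = list(pool.map(llm_expand, points))
    return "\n\n".join(paragraphs)

essay = skeleton_of_thought("A short guide to caching")
```

`pool.map` preserves the outline's order, so the final document reads sequentially even though the points were generated at the same time.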
Framework

DSPy

Program prompts like you program code

Declare what transformation you need, compose modules, then auto-optimize prompts with a compiler. Treating prompting as a programming problem.

Framework Optimization Systematic
Tool Use

Toolformer / TALM

AI learns when to call tools

Teaching AI to naturally embed tool calls in its generation — knowing when a calculator, search engine, or API would give a better answer than guessing.

Tool Use Self-Taught Integration
Action

Chain-of-Action

Pause reasoning to gather real info

AI generates structured action sequences, pausing its reasoning to seek external information via real actions across different systems and modalities.

Multi-modal External Data Grounding
Code

Program of Thoughts

Express reasoning as executable code

Instead of reasoning in words, AI expresses the logic as Python code. An interpreter executes it perfectly — no arithmetic mistakes, no rounding errors.

Math Precision Computation
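A sketch of the execute-the-reasoning step. The generated code here is hand-written for illustration; a real system would sandbox model output far more carefully than this:

```python
# The model's "reasoning" arrives as code instead of prose; we execute it.
GENERATED = """
principal = 10000
rate = 0.05
years = 3
answer = principal * (1 + rate) ** years
"""

def run_program_of_thought(code):
    """Execute model-generated reasoning code in an isolated namespace.
    (A production system would sandbox this far more carefully.)"""
    ns = {}
    exec(code, ns)
    return ns["answer"]

answer = run_program_of_thought(GENERATED)
```

The model only has to get the logic right; the interpreter guarantees the arithmetic.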
Feedback

Recursive Chain-of-Feedback

Recursive critique until it's right

Recursively break down incorrect reasoning into smaller sub-problems, solve each individually, then reconstruct the corrected solution. Targeted self-correction.

Self-Correction Precision Debugging
Hints

Directional Stimulus

Small hints steer big results

A small model generates targeted hints — keywords, key points — that steer a larger model in the right direction. Focused guidance without retraining.

Guidance Efficiency Steering
Summarization

Chain of Density

Progressively denser summaries

Generate a summary, then iteratively add missing key information while keeping the same length. Each round packs in more — forcing compression and clarity.

Summarization Compression Clarity
Multi-modal

Multimodal Chain-of-Thought

Reason with images and text together

Chain-of-thought reasoning that incorporates images, diagrams, and other visual inputs alongside text. Two stages: generate rationale, then infer the answer.

Images Reasoning Visual
Selection

Active Prompting

Focus examples where AI is most uncertain

Find the questions where the model is most uncertain, then add targeted examples for those specific cases. Focus human effort where it has the highest impact.

Efficiency Examples Targeted
Logic

Maieutic Prompting

Probe beliefs until contradictions surface

Build a tree of explanations and check them for logical consistency. Like the Socratic method — probe from multiple angles until the truth emerges.

Logic Verification Consistency
Verification

Cumulative Reasoning

Propose, verify, accumulate step by step

Three roles work together: one proposes reasoning steps, one verifies each step, one reports when enough verified steps answer the question. A growing proof.

Accuracy Step-by-Step Verification

From Workflows to Systems

Level 2 compositions are powerful workflows. But what happens when you combine multiple workflows into a unified system that perceives, reasons, plans, acts, and learns? That's Level 3.

Systems

Complete AI systems that combine multiple Level 2 compositions into unified architectures. These are purpose-built systems with perception, reasoning, planning, action, and learning — working together.

Architecture

Cognitive Loop

The universal 7-stage agent template

Perceive, Retrieve, Reason, Plan, Act, Verify, Reflect. The master blueprint that orchestrates Level 2 patterns into a complete thinking system.

Blueprint Complete Agent Universal
Routing

Adaptive Pattern Router

Automatically pick the best approach

A meta-controller that classifies incoming tasks and routes each to the optimal composition. Learns over time which patterns work best for which situations.

Intelligent Routing Optimization Adaptive
Collaboration

Multi-Agent Compositions

Specialized agents working together

Multiple AI agents with different roles — researcher, analyst, writer, reviewer — collaborating through debate, review, and division of labor.

Teamwork Specialization Quality
Search

LATS

Tree search over solution paths

Language Agent Tree Search: explore multiple solution paths like a chess engine, using self-evaluation to guide which branches to pursue and which to prune.

Exploration Decision Making Strategy
Autonomous

AutoGPT / BabyAGI

Goal-pursuing autonomous agents

Fully autonomous agents that pursue high-level goals by generating task lists, prioritizing them, and executing with tools and memory — no human in the loop.

Autonomous Goal-Driven Self-Directed
Learning

Voyager

A lifelong learning agent

An agent that explores, learns new skills, and stores them in a growing skill library. Each solved problem becomes a tool for future problems.

Skill Library Growth Exploration
Orchestration

JARVIS / HuggingGPT

One AI orchestrating many specialists

A controller LLM that decomposes requests, selects the best specialized AI model for each sub-task, executes them in order, and synthesizes the results.

Model Selection Orchestration Multi-Model
Memory

Generative Agents

AI with memory, reflection, and plans

Persistent AI agents with memory streams, periodic reflection, and multi-scale planning. They remember, learn from experience, and behave believably over time.

Persistence Memory Believable

The Biggest Picture

What if systems could coordinate with each other, improve themselves, and adapt their own architecture? Level 4 is where AI systems become platforms.

Meta-Architectures

Architectures that coordinate multiple Level 3 systems into adaptive, self-improving, or distributed platforms. These are the highest-level patterns — where AI systems manage other AI systems.

Platform

Cognitive Operating System

An OS for AI cognition

Treats Level 3 systems as applications running on a shared platform with unified memory, tools, and safety services. The operating system for artificial intelligence.

Platform Shared Services Unified
Hierarchy

Hierarchical Agent Architecture

Multi-timescale coordination

A 4-layer stack where higher-level agents set goals for lower-level ones, each operating at different timescales — from strategic planning to reactive execution.

Layers Delegation Timescales
Adaptive

Meta-Learning Agent System

Learning which approaches work best

A system that learns from experience which compositions work best for which tasks, automatically selecting and configuring the optimal approach each time.

Learning Selection Experience
Evolution

Self-Improving Systems

Systems that upgrade themselves

Systems that evaluate their own performance and systematically improve their prompts, skills, and architecture over time — with safety constraints.

Self-Improvement Evolution Safety
Distributed

Federated Agent Network

Agents collaborating without central control

Distributed autonomous agents that collaborate across boundaries, sharing learned knowledge through federated aggregation while maintaining local independence.

Distributed Collaboration Independent
Simulation

World Model Agents

Simulate before acting

Agents that build and maintain internal models of their environment, enabling planning through mental simulation rather than trial-and-error.

Prediction Simulation Planning
Embodied

Embodied Cognitive Architecture

AI meets the physical world

Integrating LLM-based cognition with physical world interaction — perception, reasoning, and action unified in agents that interact with real environments.

Robotics Physical Perception