The Idea
When you give AI examples to learn from, which examples should you pick? Most people choose randomly or grab whatever's handy. But some examples are too easy — AI already knows how to handle them. The real value is in showing AI the kinds of problems that trip it up.
Active Prompting finds those tricky problems automatically. It asks AI to answer each candidate question several times, then looks at how much the answers vary. High disagreement means AI is confused — and confused is exactly where your examples will have the most impact.
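The disagreement check can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the two answer lists below are canned stand-ins for what you'd actually get from asking the model the same question several times.

```python
from collections import Counter

def disagreement(answers):
    """Fraction of samples that differ from the most common answer.
    0.0 = every sample agrees; closer to 1.0 = answers are all over the place."""
    counts = Counter(answers)
    most_common_count = counts.most_common(1)[0][1]
    return 1 - most_common_count / len(answers)

# Mock samples: in practice each list would come from several model calls.
confident = ["4", "4", "4", "4", "4"]       # AI agrees with itself
confused = ["12", "15", "12", "9", "21"]    # answers vary run to run

print(disagreement(confident))  # 0.0
print(disagreement(confused))   # 0.6
```

Any spread measure works here; counting how often samples deviate from the majority answer is just the simplest one to implement.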
Building Blocks
This composition combines:
Show by Example + Self-Consistency. It uses Self-Consistency's multiple-sampling technique to measure uncertainty, then feeds the hardest examples into Show by Example for maximum learning impact.
See It in Action
Imagine you have 50 math questions and can only afford to write detailed solutions for 4 of them to use as examples. Which 4 do you pick?
[Interactive demo: green = consistent answers (AI is confident); red = different answers each time (AI is confused).]
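The selection itself is a sort: score every candidate question by how much its sampled answers disagree, then keep the top few. A sketch with five mock questions standing in for the fifty (the canned answer lists replace real model calls):

```python
from collections import Counter

def disagreement(answers):
    """0.0 = all samples agree; higher = more disagreement."""
    counts = Counter(answers)
    return 1 - counts.most_common(1)[0][1] / len(answers)

# Stand-in for sampling each question several times via the model.
samples = {
    "q1": ["7", "7", "7", "7"],      # consistent -> low value as an example
    "q2": ["3", "8", "3", "5"],      # inconsistent -> good candidate
    "q3": ["12", "12", "12", "12"],
    "q4": ["0", "4", "9", "2"],      # maximally inconsistent
    "q5": ["6", "6", "2", "6"],
}

ranked = sorted(samples, key=lambda q: disagreement(samples[q]), reverse=True)
hardest = ranked[:2]   # with 50 real questions you'd take the top 4
print(hardest)         # ['q4', 'q2']
```

The questions at the top of the ranking are the ones worth spending your solution-writing budget on.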
Why This Works
Not all examples are created equal. Showing AI how to solve "2 + 2" teaches it nothing — it already knows that. Showing AI how to solve a tricky word problem it keeps getting wrong? That's where the learning happens.
Active Prompting is like a teacher who gives a diagnostic quiz first, finds out which topics the students struggle with, and then focuses the lesson on exactly those topics. It's the same amount of teaching effort, but aimed where it matters most.
The Composition
Test AI on many questions to find where it's confused. Write detailed examples for those hard questions. Use them as your few-shot demonstrations. Targeted examples beat random examples every time.
How to Apply This
- Ask AI your question several times (regenerate the response) and note when answers vary
- Questions where AI gives different answers each time are where it needs the most guidance
- Write clear step-by-step solutions for those inconsistent questions
- Include those worked solutions as examples whenever you ask similar questions in the future
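The steps above end in a reusable prompt. A sketch of that last step, assuming you've already hand-written worked solutions for the inconsistent questions (the questions and solutions below are illustrative placeholders):

```python
# Worked solutions written by hand for the questions AI kept getting wrong.
hard_examples = [
    {"question": "If 3 pencils cost $1.20, how much do 7 pencils cost?",
     "solution": "Step 1: one pencil costs $1.20 / 3 = $0.40. "
                 "Step 2: 7 x $0.40 = $2.80. Answer: $2.80."},
    {"question": "A tank fills at 4 L/min and drains at 1 L/min. "
                 "How long to fill 90 L?",
     "solution": "Step 1: net fill rate is 4 - 1 = 3 L/min. "
                 "Step 2: 90 / 3 = 30 minutes. Answer: 30 minutes."},
]

def build_prompt(new_question, examples):
    """Prepend the worked solutions as few-shot demonstrations."""
    parts = [f"Q: {ex['question']}\nA: {ex['solution']}" for ex in examples]
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt("If 5 apples cost $2, how much do 12 apples cost?",
                      hard_examples)
print(prompt)
```

The exact Q/A formatting is a matter of taste; what matters is that the demonstrations are the ones the diagnostic step flagged as hard.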
When to Use This
- You're building a set of examples for a recurring task and want to pick the best ones
- You have limited time to write detailed examples and want maximum impact
- AI performs inconsistently on your task — sometimes great, sometimes wrong
- You want to understand which types of problems actually challenge AI
- You're combining this with other techniques and want the strongest examples possible
When to Skip This
- Simple tasks — if AI is already consistent, there's nothing to diagnose
- One-off questions — this is about building reusable example sets, not answering a single question
- Open-ended tasks — uncertainty measurement works best when answers can be compared directly
How It Relates
Active Prompting combines Show by Example with the sampling idea from Self-Consistency, but uses it for a completely different purpose. Self-Consistency samples multiple answers to find the right one. Active Prompting samples multiple answers to find where AI is weakest, then teaches it there.
It's complementary with Self-Consistency too — you can use Active Prompting to select better examples, then use Self-Consistency at inference time for even more accuracy. The two address different parts of the problem: better examples going in, and better answer selection coming out.