The Idea

When you give AI examples to learn from, which examples should you pick? Most people choose randomly or grab whatever's handy. But some examples are too easy — AI already knows how to handle them. The real value is in showing AI the kinds of problems that trip it up.

Active Prompting finds those tricky problems automatically. It asks AI to answer each candidate question several times, then looks at how much the answers vary. High disagreement means AI is confused — and confused is exactly where your examples will have the most impact.
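As a rough sketch, disagreement can be measured by how many sampled answers differ from the most common one. This is a minimal illustration, not the paper's exact uncertainty metric; the answer strings are made-up examples:

```python
from collections import Counter

def disagreement(answers):
    """Fraction of sampled answers that differ from the most common answer.

    0.0 means all samples agree (the model is confident);
    values near 1.0 mean almost every sample differs (the model is confused).
    """
    most_common_count = Counter(answers).most_common(1)[0][1]
    return 1 - most_common_count / len(answers)

# All five samples agree -> no disagreement.
print(disagreement(["42", "42", "42", "42", "42"]))  # 0.0

# Only two of five samples match -> high disagreement.
print(disagreement(["60 mph", "45 mph", "60 mph", "50 mph", "55 mph"]))  # 0.6
```

Questions with the highest disagreement scores become the candidates for hand-written example solutions.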

Building Blocks

This composition combines:

Show by Example
Self-Consistency

It uses Self-Consistency's multiple-sampling technique to measure uncertainty, then feeds the hardest examples into Show by Example for maximum learning impact.

See It in Action

Imagine you have 50 math questions and can only afford to write detailed solutions for 4 of them to use as examples. Which 4 do you pick?

Step 1: Test AI on each question multiple times
Ask each question 5 times and see how much the answers vary.

Results (samples agreeing with the most common answer, out of 5):
"What is 15 + 27?": 5/5
"Train speed problem": 2/5
"Area of a rectangle": 5/5
"Multi-step word problem": 3/5
"Percentage discount calc": 3/5

A 5/5 score means consistent answers (AI is confident); a low score means different answers each time (AI is confused).

Step 2: Pick the most uncertain questions
The train speed problem and the multi-step word problem have the highest disagreement, so they are selected for annotation. These are where AI needs the most help, and where examples will have the biggest impact.
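Assuming agreement counts like those above are already in hand, the selection step is just a sort and a slice. The question labels and scores below mirror the worked example; the annotation budget of 2 matches the two questions selected there:

```python
# Agreement counts from the diagnostic run: (question, matching answers out of 5).
scores = [
    ("What is 15 + 27?", 5),
    ("Train speed problem", 2),
    ("Area of a rectangle", 5),
    ("Multi-step word problem", 3),
    ("Percentage discount calc", 3),
]

# Lowest agreement = highest uncertainty; annotate those first.
budget = 2  # we can only afford detailed solutions for 2 questions here
selected = sorted(scores, key=lambda qa: qa[1])[:budget]
print([q for q, _ in selected])
# ['Train speed problem', 'Multi-step word problem']
```

Note that Python's sort is stable, so ties (two questions at 3/5) keep their original order; a real pipeline might break ties with a finer-grained uncertainty score.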
Step 3: Write detailed solutions for those questions
You write step-by-step reasoning for each selected question. These become your few-shot examples, teaching AI the exact types of reasoning it struggles with.
Step 4: Use those targeted examples going forward
Now when you ask AI new questions, you include these carefully chosen examples in your prompt. Because they target AI's actual weaknesses, they improve performance far more than random examples would.
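The final step can be sketched as simple prompt assembly. The annotated solutions below are placeholder strings standing in for real hand-written reasoning, and the Q/A prompt layout is just one common convention:

```python
# Hand-written step-by-step solutions for the selected hard questions
# (placeholder text standing in for real worked reasoning).
annotated_examples = [
    ("Train speed problem",
     "Step 1: distance = 120 miles. Step 2: time = 2 hours. "
     "Step 3: speed = 120 / 2 = 60 mph. Answer: 60 mph."),
    ("Multi-step word problem",
     "Step 1: find the subtotal. Step 2: apply each change in order. "
     "Step 3: combine the results into the final answer."),
]

def build_prompt(new_question):
    """Prepend the targeted few-shot examples to a new question."""
    parts = []
    for question, solution in annotated_examples:
        parts.append(f"Q: {question}\nA: {solution}")
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)

print(build_prompt("A car travels 180 miles in 3 hours. What is its speed?"))
```

Every new question now rides along with demonstrations chosen specifically because the model struggled with that kind of reasoning.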

Why This Works

Not all examples are created equal. Showing AI how to solve "2 + 2" teaches it nothing — it already knows that. Showing AI how to solve a tricky word problem it keeps getting wrong? That's where the learning happens.

Active Prompting is like a teacher who gives a diagnostic quiz first, finds out which topics the students struggle with, and then focuses the lesson on exactly those topics. It's the same amount of teaching effort, but aimed where it matters most.

The Composition

Test AI on many questions to find where it's confused. Write detailed examples for those hard questions. Use them as your few-shot demonstrations. Targeted examples beat random examples every time.

How to Apply This

When to Use This
Use it when you have a large pool of candidate questions, only a limited budget for writing detailed example solutions, and enough access to the model to sample each candidate several times.

When to Skip This
Skip it when the task is easy enough that the model rarely disagrees with itself, or when the extra sampling calls cost more than the improvement is worth.

How It Relates

Active Prompting combines Show by Example with the sampling idea from Self-Consistency, but uses it for a completely different purpose. Self-Consistency samples multiple answers to find the right one. Active Prompting samples multiple answers to find where AI is weakest, then teaches it there.

It's complementary with Self-Consistency too — you can use Active Prompting to select better examples, then use Self-Consistency at inference time for even more accuracy. The two address different parts of the problem: better examples going in, and better answer selection coming out.
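A minimal sketch of the inference-time half of that pairing: majority voting over several sampled answers (Self-Consistency), run on top of a prompt built with actively selected examples. Here `ask_model` is a hypothetical stand-in that replays canned answers; a real implementation would call a model API with temperature above zero:

```python
from collections import Counter
from itertools import cycle

# Canned answers standing in for repeated model calls (hypothetical;
# a real implementation would sample the model with temperature > 0).
_canned = cycle(["60 mph", "45 mph", "60 mph", "60 mph", "50 mph"])

def ask_model(prompt):
    return next(_canned)

def self_consistent_answer(prompt, samples=5):
    """Self-Consistency at inference time: sample several answers
    and return the majority vote."""
    answers = [ask_model(prompt) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("A train covers 120 miles in 2 hours. Speed?"))
# 60 mph
```

Active Prompting improves what goes into the prompt; Self-Consistency improves how the answer is read out. Used together, each covers the other's blind spot.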