The Idea
Socrates was famous for a teaching method called "maieutics" — drawing out the truth by asking probing questions until contradictions surfaced. This technique does the same thing with AI.
Instead of just asking "Is this true?", you ask AI to explain why it could be true and why it could be false. Then you probe those explanations the same way. Eventually, the explanations that hold up under scrutiny point to the right answer, while the ones that contradict themselves fall apart.
Building Blocks
This composition combines:
- Think Step by Step
- Check Your Work

It uses step-by-step reasoning to build explanations, then checks them against each other for logical consistency. Contradictions reveal which side is wrong.
See It in Action
Question: "Can fish fly?"
Arguing TRUE: Flying fish leap out of the water and glide long distances, which looks a lot like flight.
But also: Gliding is not the same as powered flight. They launch from the water and coast — they can't gain altitude or sustain flight.
Why This Works
AI can generate convincing-sounding explanations for almost anything — even wrong answers. If you just ask "Is X true?", a confident but wrong explanation might fool you. But when you force AI to argue both sides and then probe each argument, wrong explanations tend to contradict themselves.
The key insight is that AI's explanations are noisy but not random. There's real knowledge buried in there. By exploring a tree of explanations and checking which ones hold up logically, you extract the consistent truth from the noise.
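The idea of "extracting the consistent truth from the noise" can be made concrete. Here is a minimal, self-contained sketch: give each proposition the model's (noisy) confidence, encode the logical relations between propositions, and pick the truth assignment that best agrees with the beliefs without breaking any relation. All proposition names, confidence numbers, and the penalty weight below are illustrative, not outputs of a real model.

```python
from itertools import product

# Illustrative beliefs: the model's confidence that each proposition is true.
beliefs = {
    "fish_fly": 0.4,           # the model waffles on the claim itself
    "flying_fish_glide": 0.9,  # strong belief: flying fish do glide
    "gliding_is_flight": 0.2,  # weak belief: gliding counts as flight
}

# Logical relations: (p, q) means "if p is true, q must be true".
implications = [
    # "fish can fly" would require gliding to count as flight
    ("fish_fly", "gliding_is_flight"),
]

def consistency_score(assignment):
    """Reward agreement with the beliefs; heavily penalize broken implications."""
    score = sum(b if assignment[p] else 1 - b for p, b in beliefs.items())
    for p, q in implications:
        if assignment[p] and not assignment[q]:
            score -= 10  # contradiction: this assignment is logically broken
    return score

# Brute-force search over all truth assignments (fine for a handful of nodes).
props = list(beliefs)
best = max(
    (dict(zip(props, values)) for values in product([True, False], repeat=len(props))),
    key=consistency_score,
)
print(best["fish_fly"])  # → False: the consistent assignment rejects the claim
```

Even though the model was unsure about "fish_fly" in isolation, the assignment that stays consistent with its stronger beliefs rules the claim out. The published technique solves this selection step with a weighted MAX-SAT solver; the brute-force search above is the same idea at toy scale.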
The Composition
For any claim, make AI argue both true and false. Probe each argument recursively. The side whose explanations contradict themselves is wrong. The side that stays consistent is right.
How to Apply This
- Ask: "Explain why [claim] could be TRUE" and then "Explain why [claim] could be FALSE"
- For each explanation, ask: "Is this explanation itself true? Why or why not?"
- Keep probing until you find explanations that AI consistently agrees or disagrees with
- Look for contradictions — when an argument undermines itself, that side is likely wrong
- The answer whose supporting arguments stay internally consistent is the most trustworthy
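The steps above can be sketched as a small probing loop. The `ask_model` function below is a stand-in for a real LLM call, stubbed with canned responses so the control flow can actually run; every prompt string and response is illustrative only.

```python
def ask_model(prompt):
    """Stub for an LLM call, using canned answers for the worked example."""
    canned = {
        "Explain why 'fish can fly' could be TRUE":
            "Flying fish glide above the water, which is a kind of flight.",
        "Explain why 'fish can fly' could be FALSE":
            "Gliding is not powered flight, so fish cannot truly fly.",
        "Is it true that 'Flying fish glide above the water, which is a kind of flight'?":
            "No: gliding is unpowered, so it is not a kind of flight.",
        "Is it true that 'Gliding is not powered flight, so fish cannot truly fly'?":
            "Yes: flight requires generating lift to sustain or gain altitude.",
    }
    return canned[prompt]

def probe(claim):
    """Ask for both sides of a claim, then probe each explanation once."""
    tree = {}
    for side in ("TRUE", "FALSE"):
        explanation = ask_model(f"Explain why '{claim}' could be {side}")
        verdict = ask_model(f"Is it true that '{explanation.rstrip('.')}'?")
        # A branch is consistent if the model stands by its own explanation.
        tree[side] = {"explanation": explanation,
                      "holds_up": verdict.startswith("Yes")}
    return tree

tree = probe("fish can fly")
answer = "FALSE" if tree["FALSE"]["holds_up"] and not tree["TRUE"]["holds_up"] else "TRUE"
print(answer)  # → FALSE: only the FALSE side survives probing
```

A fuller implementation would recurse on each explanation (probing the probes) to grow a deeper tree before judging consistency; the single-level loop here shows the shape of the procedure.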
When to Use This
- True/false or yes/no questions where AI gives you a confident but possibly wrong answer
- Commonsense reasoning where the answer seems obvious but might not be
- Claims you want to fact-check by making AI argue against its own position
- When a simple "think step by step" answer feels too shallow
- When logical consistency matters more than speed
When to Skip This
- Open-ended questions — "What's the best vacation spot?" doesn't have a true/false structure to probe
- Simple factual lookups — if AI consistently gets it right, probing is overkill
- Speed-sensitive situations — the recursive questioning takes many rounds of conversation
How It Relates
This is a more rigorous cousin of Check Your Work. Where that technique asks AI to review its own answer once, Maieutic Prompting systematically forces AI to argue both sides and finds the truth through contradiction. It's also related to Self-Consistency — both techniques use multiple AI responses to find the right answer, but Self-Consistency uses simple voting while Maieutic Prompting uses logical consistency checking.
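For contrast, Self-Consistency's aggregation step really is just a majority vote over sampled answers. The sampled answers below are illustrative stand-ins for repeated model calls; Maieutic Prompting replaces this vote with the logical-consistency check described above.

```python
from collections import Counter

# Hypothetical samples from asking the same question five times.
sampled_answers = ["no", "yes", "no", "no", "yes"]

# Self-Consistency: take the most common answer, ignoring the reasoning.
majority, count = Counter(sampled_answers).most_common(1)[0]
print(majority)  # prints "no": the vote never examines WHY each answer was given
```

The vote is fast and often effective, but it counts answers without testing them; maieutic probing instead asks whether the reasoning behind each answer can survive its own contradictions.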