The Idea

Socrates was famous for a teaching method called "maieutics" — drawing out the truth by asking probing questions until contradictions surfaced. This technique does the same thing with AI.

Instead of just asking "Is this true?", you ask AI to explain why it could be true and why it could be false. Then you probe those explanations the same way. Eventually, the explanations that hold up under scrutiny point to the right answer, while the ones that contradict themselves fall apart.
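The first move can be sketched in a few lines. This is a minimal illustration, not a fixed API: the `two_sided_prompts` helper and the exact prompt wording are assumptions, and each prompt would be passed to whatever model call you normally use.

```python
# Turn one claim into two opposing prompts, one per polarity.
# The wording is illustrative; tune it to your model.

def two_sided_prompts(claim: str) -> dict:
    """Build one 'argue this side' prompt per polarity for a claim."""
    return {
        polarity: f'Claim: "{claim}". Explain why this claim could be {polarity}.'
        for polarity in ("TRUE", "FALSE")
    }
```

Each returned prompt is then probed the same way, which is what the example below walks through.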

Building Blocks

This composition combines:

- Think Step by Step
- Check Your Work

It uses step-by-step reasoning to build explanations, then checks them against each other for logical consistency. Contradictions reveal which side is wrong.

See It in Action

Question: "Can fish fly?"

Step 1: Argue both sides

If TRUE: "Some fish can fly because flying fish have wing-like pectoral fins that let them glide above the water surface."

If FALSE: "Fish cannot truly fly because they lack powered flight — they are aquatic animals adapted for swimming, not flying."

Now probe each explanation deeper.

Step 2: Question each explanation

Probing the TRUE side: "Flying fish have wing-like fins" — is this true or false, and why?

The AI explores further:

TRUE: Yes, the Exocoetidae family has enlarged pectoral fins used for gliding.

But also: Gliding is not the same as powered flight. They launch from the water and coast — they can't gain altitude or sustain flight.

Probing the FALSE side: "They're adapted for swimming, not flying" — is this true or false, and why?

The AI explores further:

TRUE: Fish breathe through gills and have body plans optimized for water. Even "flying" fish spend 95%+ of their time underwater.

Step 3: Check for contradictions and find what's consistent

Contradiction found: The TRUE side's own explanation undermined itself — "Gliding is not powered flight" contradicts "fish can fly." The FALSE side's explanations all held up under questioning.

Conclusion: Answer: FALSE. Fish cannot truly fly. Flying fish glide, but gliding isn't flying — and the TRUE explanation admitted this when probed.

Why This Works

AI can generate convincing-sounding explanations for almost anything — even wrong answers. If you just ask "Is X true?", a confident but wrong explanation might fool you. But when you force AI to argue both sides and then probe each argument, wrong explanations tend to contradict themselves.

The key insight is that AI's explanations are noisy but not random. There's real knowledge buried in there. By exploring a tree of explanations and checking which ones hold up logically, you extract the consistent truth from the noise.

The Composition

For any claim, make AI argue both true and false. Probe each argument recursively. The side whose explanations contradict themselves is wrong. The side that stays consistent is right.
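The whole loop can be sketched as a short recursive function. Everything here is a toy illustration under stated assumptions: `ask` stands in for a real model call and is assumed to return a (still-supports, follow-up-explanation) pair, the prompt strings are invented, and the canned responses simply replay the fish example so the logic can run end to end.

```python
def probe(ask, statement, depth):
    """Recursively probe a statement; True if it survives questioning."""
    supports, follow_up = ask(f'Does this still support the answer? "{statement}"')
    if not supports:            # the probe refuted its own side
        return False
    if depth == 0 or not follow_up:
        return True
    return probe(ask, follow_up, depth - 1)

def maieutic(ask, claim, depth=2):
    """Argue both sides of a claim, probe each, return the surviving side."""
    survives = {}
    for polarity in ("TRUE", "FALSE"):
        _, argument = ask(f'Explain why "{claim}" could be {polarity}.')
        survives[polarity] = probe(ask, argument, depth)
    if survives["TRUE"] != survives["FALSE"]:
        return "TRUE" if survives["TRUE"] else "FALSE"
    return "UNDECIDED"          # both survived, or both collapsed

# Canned responses standing in for a real model, replaying the fish example.
CANNED = {
    'Explain why "fish can fly" could be TRUE.':
        (True, "flying fish have wing-like pectoral fins"),
    'Explain why "fish can fly" could be FALSE.':
        (True, "fish are adapted for swimming, not powered flight"),
    'Does this still support the answer? "flying fish have wing-like pectoral fins"':
        (False, "gliding is not powered flight"),
    'Does this still support the answer? "fish are adapted for swimming, not powered flight"':
        (True, "fish breathe through gills and spend their lives in water"),
    'Does this still support the answer? "fish breathe through gills and spend their lives in water"':
        (True, ""),
}

def fake_ask(prompt):
    return CANNED[prompt]
```

With these canned probes, `maieutic(fake_ask, "fish can fly")` returns `"FALSE"`: the TRUE branch collapses at its first probe while the FALSE branch survives to the depth limit.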

How It Relates

This is a more rigorous cousin of Check Your Work. Where that technique asks AI to review its own answer once, Maieutic Prompting systematically forces AI to argue both sides and finds the truth through contradiction. It's also related to Self-Consistency — both techniques use multiple AI responses to find the right answer, but Self-Consistency uses simple voting while Maieutic Prompting uses logical consistency checking.
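The contrast with Self-Consistency can be made concrete: voting only counts final answers and never inspects the reasoning behind them. A minimal sketch (the function name is illustrative):

```python
from collections import Counter

def self_consistency_vote(answers):
    """Self-Consistency: sample many answers, keep the most common one."""
    return Counter(answers).most_common(1)[0][0]
```

A vote like `self_consistency_vote(["FALSE", "FALSE", "TRUE"])` picks "FALSE" by majority alone, whereas Maieutic Prompting would reach the same answer by showing that the TRUE side's explanations contradict themselves.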