See the Difference
AI reviews code and finds a "bug." But when asked to verify, it realizes it was wrong.
AI: I found an issue on line 12. The loop doesn't handle empty arrays, which could cause an error when the function is called with no data.
Sounds confident. But is it actually right?
AI: I found an issue on line 12. The loop doesn't handle empty arrays...
You: Are you sure? Check again.
AI: You're right to ask me to double-check. Looking more carefully at line 12, I see there's already a guard clause that returns early if the array is empty. My initial assessment was incorrect — this is not actually a bug.
AI catches its own mistake when pushed to look closer.
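To make the exchange concrete, here is a minimal sketch of the kind of function under review. The names and numbers are invented for illustration; the point is that the flagged loop looks unguarded at first glance, but an earlier guard clause already returns for empty input.

```python
# Hypothetical code under review (illustrative names, not a real codebase).
def average_scores(scores):
    # Guard clause: return early if the array is empty, so the
    # loop below never runs on empty input.
    if not scores:
        return 0.0
    total = 0
    for s in scores:  # the "line 12" loop the AI flagged — actually safe
        total += s
    return total / len(scores)
```

A skim stops at the loop and flags it; a second, careful pass finds the guard clause a few lines up.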
Why This Works
AI tends to skim. Looking deeply takes more effort, so first answers often miss details or make assumptions. But when you ask AI to go back and verify its claims, it looks more carefully — and frequently corrects itself.
This works for code reviews, story critiques, factual claims, analysis — anywhere AI might be glossing over details instead of truly examining them.
Follow-up Prompts That Work
- "Look at this more closely and verify your claims."
- "Are you sure? Check again."
- "Go back to the source and confirm this is accurate."
- "Re-examine your answer. Is everything you said actually correct?"
The Technique
When AI gives you an evaluation or makes a claim, ask it to verify. It will often admit it didn't look closely enough the first time.
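If you call a model through a chat API, you can bake this into your workflow by always queuing a verification turn after the first evaluation. Below is a hedged sketch: the message format mirrors common chat APIs, no real API call is made, and `with_verification` is an invented helper name.

```python
# Sketch: append a verification follow-up after the model's first answer.
# The dict-based message format is an assumption modeled on common chat APIs.

VERIFY_PROMPT = (
    "Re-examine your answer. Is everything you said actually correct? "
    "Go back to the source and verify each claim."
)

def with_verification(messages, first_answer):
    """Return a new history ending with a verification follow-up turn."""
    return messages + [
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": VERIFY_PROMPT},
    ]

history = [{"role": "user", "content": "Review this function for bugs."}]
history = with_verification(history, "I found an issue on line 12...")
# history now ends with the verification prompt, ready for a second model call
```

The second call then forces the model to re-read its own claim before you act on it.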
When to Use This
- Code reviews where AI flags potential bugs
- Feedback on writing or creative work
- Factual claims you want to trust
- Analysis of documents or data
- Any evaluation where accuracy matters