The Idea
When people solve problems, they don't just think in their heads — they look things up, try things, and adjust based on what they find. ReAct gives AI the same ability: it alternates between reasoning (thinking about what to do next) and acting (using tools like search or calculators), then reads the results before deciding its next move.
This is the pattern behind virtually every AI agent you've encountered. ChatGPT browsing the web, Copilot running code, assistants calling APIs — they're all running some version of this think-act-observe loop. ReAct is the foundation that most modern agent systems build on.
Building Blocks
This composition combines:
Think Step by Step and Ask It to Search. ReAct weaves chain-of-thought reasoning together with tool use in an iterative loop — each reasoning step can trigger an action, and each observation feeds back into the next reasoning step.
The Loop
Each cycle through the loop adds real information. The AI thinks about what it still needs, takes one action to get it, reads the result, and then decides: do I have enough, or do I need another round?
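The loop above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the model is a scripted stub standing in for a real LLM, and the tool is a hard-coded lookup table standing in for real search. All names here (`react`, `make_scripted_model`, `search`) are invented for this sketch.

```python
import re

def make_scripted_model(turns):
    """Returns a fake 'LLM' that replays a fixed list of responses."""
    it = iter(turns)
    return lambda prompt: next(it)

def search(query):
    # Toy tool: a hard-coded lookup table standing in for real search.
    facts = {
        "eastern sector of the Colorado orogeny":
            "It extends into the High Plains.",
        "High Plains elevation":
            "The High Plains rise from around 1,800 to 7,000 ft.",
    }
    return facts.get(query, "No result.")

def react(question, model, tools, max_turns=5):
    transcript = f"Question: {question}"
    for _ in range(max_turns):
        step = model(transcript)      # Think: model emits a Thought plus
        transcript += "\n" + step     # either an Action or a final Answer.
        answer = re.search(r"Answer: (.*)", step)
        if answer:
            return answer.group(1)    # Enough information: stop and answer.
        action = re.search(r"Action: (\w+)\[(.*)\]", step)
        if action:
            name, arg = action.groups()
            obs = tools[name](arg)                 # Act: run one tool.
            transcript += f"\nObservation: {obs}"  # Observe: feed it back.
    return None

model = make_scripted_model([
    "Thought: I need the area first.\n"
    "Action: search[eastern sector of the Colorado orogeny]",
    "Thought: Now I need its elevation.\n"
    "Action: search[High Plains elevation]",
    "Thought: I have enough.\nAnswer: around 1,800 to 7,000 ft",
])
print(react("What is the elevation range?", model, {"search": search}))
# → around 1,800 to 7,000 ft
```

Note that the growing `transcript` is resent to the model on every cycle — which is exactly why the loop adds cost as it runs, a point that matters in the trade-offs below.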
See It in Action
Question: "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?"
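This question is the worked example from the original ReAct paper (Yao et al.), and the trace below is a condensed adaptation of it — an illustrative sketch of the think-act-observe format, not verbatim model output:

```
Thought: I need to find the area the eastern sector of the Colorado
  orogeny extends into, then find that area's elevation range.
Action: Search[Colorado orogeny, eastern sector]
Observation: The eastern sector extends into the High Plains.

Thought: Now I need the elevation range of the High Plains.
Action: Search[High Plains (United States)]
Observation: From east to west, the High Plains rise in elevation
  from around 1,800 to 7,000 ft.

Thought: I have what I need.
Answer: around 1,800 to 7,000 ft
```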
Each answer was grounded in a real search result — not guessed from memory.
The Hallucination Problem It Solves
Without ReAct (pure reasoning)
A high hallucination rate on factual questions. The AI reasons confidently but makes up facts it doesn't actually know.
With ReAct (reasoning + tools)
A far lower hallucination rate on the same questions. By actually looking things up, the AI grounds its answers in real information.
That's roughly a 10x reduction in made-up facts — simply by letting AI check its work against real sources.
Why This Works
Pure reasoning means AI has to rely entirely on what it memorized during training. Some of that is outdated, some is wrong, and some was never learned at all. ReAct lets AI admit what it doesn't know and go find out, just like a person would.
The thinking-out-loud step is equally important. Without it, AI might search randomly or call the wrong tool. The explicit reasoning trace forces it to articulate why it needs each piece of information, making its decisions more purposeful and its mistakes easier to spot.
The Composition
Think about what you need. Use a tool to get it. Read the result. Repeat until you have enough. Then answer. The simplest agent loop — and the foundation of nearly every AI agent built today.
When to Use This
- Tasks that require external information — search, APIs, databases, calculators
- Dynamic, unpredictable workflows where the next step depends on what you find
- Multi-hop questions where you need to chain several lookups together
- When interpretability matters — the explicit thought traces make debugging easy
When to Skip This
- No tools needed — if the AI can answer from its own knowledge, plain chain-of-thought reasoning is faster and cheaper
- Cost-sensitive tasks — each loop cycle resends the full conversation; ReWOO can use 5x fewer tokens for predictable tasks
- Speed-critical tasks — sequential tool calls are slow; LLMCompiler runs independent calls in parallel
- Predictable multi-step tasks — if you already know the full plan, Plan-and-Execute is more efficient
How It Relates
ReAct is the default agent pattern — the baseline that most other agent approaches improve upon. Plan-and-Execute adds explicit upfront planning. ReWOO optimizes cost by eliminating mid-loop AI calls. LLMCompiler optimizes speed by parallelizing independent steps.
It also serves as a foundation for more advanced systems: Reflexion adds self-critique after failure, LATS explores multiple reasoning paths in parallel, and autonomous agents like AutoGPT extend the loop with goal-setting and memory. If you understand ReAct, you understand the core of modern AI agents.