The Idea

Regular chain-of-thought reasoning has a dangerous blind spot: when the model doesn't know something, it often keeps reasoning anyway and fills the gap with a confident-sounding guess. Chain-of-Action adds an explicit checkpoint before each sub-answer: "Do I have the information I need, or am I about to make something up?"

When the answer is "I don't know this," the model pauses its reasoning and takes an action — searching the web, querying a knowledge base, or analyzing data — to get the real information. Only then does it continue. This "missing flag" check is what prevents the confident hallucination that plagues pure reasoning approaches.
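The checkpoint-then-act loop can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: in a real system, `missing_flag` would be the model's own self-check and `take_action` would call a search API, knowledge base, or analysis tool.

```python
def missing_flag(sub_question, known_facts):
    """Stand-in for the model's self-check: do we actually have this?"""
    return sub_question not in known_facts

def take_action(sub_question):
    """Stand-in for WEB_QUERY / knowledge-base / data-analysis calls."""
    return f"retrieved answer for: {sub_question}"

def chain_of_action(sub_questions, known_facts):
    """Answer each sub-question, pausing to act only when info is missing."""
    answers = {}
    for q in sub_questions:
        if missing_flag(q, known_facts):      # explicit checkpoint
            known_facts[q] = take_action(q)   # pause reasoning, gather info
        answers[q] = known_facts[q]           # answer from real information
    return answers
```

The key detail is that `take_action` runs only behind the missing-flag check: sub-questions the model can already answer pass straight through.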

Building Blocks

This composition builds on:

Think Step by Step
Let Code Do It

Chain-of-Action extends step-by-step reasoning with executable actions — the model can pause at any point to gather real information before continuing its chain of thought.

Three Types of Action

🌐 Web Querying
Search for real-time information — current events, prices, weather, breaking news. Anything that changes over time.

📚 Knowledge Encoding
Tap into domain-specific knowledge bases — medical references, legal databases, technical documentation. Expert-level detail.

📊 Data Analyzing
Work with numbers and tables — financial reports, statistics, datasets. Compute rather than guess at quantities.
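One simple way to wire up the three action types is a dispatch table keyed by action name. The handler bodies below are illustrative stubs, not a real API; a production version would call a search engine, a retrieval index, and a computation tool respectively.

```python
# Hypothetical dispatcher: maps each Chain-of-Action action type
# to a handler. Handlers here are stubs that tag their output.
ACTIONS = {
    "WEB_QUERY":        lambda q: f"[web] results for {q!r}",
    "KNOWLEDGE_ENCODE": lambda q: f"[kb] entries for {q!r}",
    "DATA_ANALYZE":     lambda q: f"[data] computed {q!r}",
}

def run_action(action, payload):
    """Route an emitted action string to the matching handler."""
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return ACTIONS[action](payload)
```

Keeping the actions in one table makes it easy to add new information sources without touching the reasoning loop itself.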

See It in Action

Question: "What was Apple's revenue in Q4 2024 and how does it compare to Samsung?"

Step 1. Sub-question: Apple's Q4 revenue
Missing flag check: Do I know Apple's Q4 2024 revenue? MISSING — need current data
Action: WEB_QUERY "Apple Q4 2024 revenue earnings report"
→ Apple reported $94.9 billion revenue in Q4 2024.

Step 2. Sub-question: Samsung's Q4 revenue
Missing flag check: Do I know Samsung's Q4 2024 revenue? MISSING — need current data
Action: WEB_QUERY "Samsung Q4 2024 revenue earnings"
→ Samsung reported $52.3 billion revenue in Q4 2024.

Step 3. Sub-question: How do they compare?
Missing flag check: Can I compare from retrieved data? HAVE IT — can compute
Answer (no action needed): Apple's Q4 2024 revenue of $94.9 billion exceeded Samsung's $52.3 billion by approximately 81%. Apple earned nearly double Samsung's revenue in this quarter.

Notice: actions fire only when information is actually missing. The comparison step needed no external lookup.
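The final comparison is plain arithmetic on the two retrieved figures, which is why no further action was needed:

```python
# Figures retrieved by the two WEB_QUERY actions (billions USD).
apple, samsung = 94.9, 52.3

pct_excess = (apple - samsung) / samsung * 100
ratio = apple / samsung

print(f"Apple exceeded Samsung by {pct_excess:.0f}%")  # → Apple exceeded Samsung by 81%
print(f"Revenue ratio: {ratio:.2f}x")                  # → Revenue ratio: 1.81x
```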

The Missing Flag Difference

Chain-of-Thought (no flag)

"Apple's Q4 2024 revenue was approximately $89 billion..."

Sounds confident, but it's a guess. The model didn't check whether it actually knew this — and the number is wrong.

Chain-of-Action (with flag)

"Do I know Apple's Q4 2024 revenue? No — let me look it up." Searches, finds $94.9B.

The explicit check caught the knowledge gap before it became a hallucination.

Why This Works

The core innovation is making the "do I know this?" check explicit. Without it, AI models tend to fill gaps with plausible-sounding information — they don't naturally distinguish between "I know this" and "I'm generating something that sounds right."

By forcing the model to raise a missing flag before each sub-answer, Chain-of-Action turns a risky blind spot into a deliberate decision point. The model either confirms it has the information or pauses to go get it. No more confident guessing.

The Composition

Before answering each sub-question, explicitly check: "Do I know this or am I guessing?" If guessing, pause and take action to get the real information. Only then continue reasoning.
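One way to put this composition into practice is a prompt preamble that instructs the model to run the check itself. The wording below is illustrative, not a canonical Chain-of-Action prompt:

```python
# Hypothetical prompt preamble encoding the composition above.
COA_PROMPT = """\
Break the question into sub-questions. Before answering each one,
ask yourself: "Do I know this, or am I guessing?" If you are
guessing, emit an action such as WEB_QUERY("...") and wait for the
result before continuing your reasoning."""

def build_prompt(question):
    """Prepend the Chain-of-Action instructions to a user question."""
    return f"{COA_PROMPT}\n\nQuestion: {question}"
```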


How It Relates

Chain-of-Action sits between pure chain-of-thought reasoning and full agent loops like ReAct. ReAct takes an action at every step; Chain-of-Action only acts when it detects missing information. This makes it more efficient when the AI already knows most of what it needs, but ensures it stops to look things up when it doesn't.

It shares the grounding philosophy with RAG Patterns (always check external sources) but is more selective — it only retrieves when the missing flag fires, rather than retrieving for every question. Think of it as on-demand retrieval guided by the model's own uncertainty.
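The contrast with always-on retrieval can be made concrete. In this sketch the `knows`, `retrieve`, and `generate` callables are hypothetical placeholders for a self-check, a retriever, and a model call:

```python
def rag_answer(question, retrieve, generate):
    """RAG-style: retrieve context for every question."""
    context = retrieve(question)          # always hits the index
    return generate(question, context)

def coa_answer(question, knows, retrieve, generate):
    """Chain-of-Action-style: retrieve only when the missing flag fires."""
    context = None
    if not knows(question):               # missing flag check
        context = retrieve(question)      # on-demand retrieval
    return generate(question, context)
```

When the model already knows the answer, `coa_answer` skips the retrieval call entirely, which is where the efficiency gain over always-retrieve pipelines comes from.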