The Idea
When a problem is too hard to solve in one shot, don't just break it into pieces — break it into pieces ordered from easiest to hardest, then solve them in that order. Each solved piece gives AI the context it needs to tackle the next, harder piece.
This is how good teaching works: you don't start with calculus. You start with addition, build up to multiplication, then algebra, then calculus. Each skill builds on the one before it. Least-to-Most does the same thing with AI's reasoning.
Building Blocks
This composition extends:
Break Down the Question and Think Step by Step. It takes question decomposition and adds a critical twist: solve the sub-problems in order from easiest to hardest, with each answer feeding into the next.
See It in Action
Problem: "I run 3 miles Monday, double that Tuesday, add 2 miles Wednesday, and triple Wednesday's distance Thursday. How far did I run total?"
1. How far on Monday? (given directly)
2. How far on Tuesday? (depends on Monday)
3. How far on Wednesday? (depends on Tuesday)
4. How far on Thursday? (depends on Wednesday)
5. What's the total? (depends on all of the above)
Each sub-problem is trivial on its own. The power is in the ordering — solving easy parts first gives AI the building blocks it needs for the harder parts.
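Once the sub-problems are ordered, the chain above is plain arithmetic, with each line using only earlier answers. A minimal sketch (no model involved) of solving them in sequence:

```python
# Solve the sub-problems in order; each step uses the previous answer.
monday = 3                       # 1. given directly
tuesday = 2 * monday             # 2. "double that" -> 6
wednesday = tuesday + 2          # 3. "add 2 miles" -> 8
thursday = 3 * wednesday         # 4. "triple Wednesday's" -> 24
total = monday + tuesday + wednesday + thursday  # 5. -> 41

print(total)  # 41
```

Reverse the order and step 4 is unanswerable until steps 1-3 are done; that dependency is exactly what the easiest-first ordering exploits.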
Why This Works
Regular step-by-step reasoning fails when the problem is harder than any example AI has seen. It can solve simple cases but doesn't generalize to complex ones. Least-to-Most fixes this by ensuring AI never faces anything harder than a simple sub-problem.
The key insight: AI that can handle 2-step problems can't necessarily handle 10-step problems. But if you break a 10-step problem into ten 1-step problems and solve them in the right order, each step is trivially easy — and AI nails every one.
The Composition
Two stages: first decompose the problem into sub-problems ordered by difficulty. Then solve from easiest to hardest, giving each step the benefit of all previous answers.
How to Apply This
- "Before solving this, break it into sub-problems ordered from simplest to most complex"
- Solve the first (easiest) sub-problem
- Include that answer when asking the next sub-problem
- Keep going, building context each time, until you reach the original question
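Concretely, "building context each time" means prepending every solved question/answer pair to the next sub-question. A sketch of that prompt construction (the Q/A template is an assumption, not a required format):

```python
def build_prompt(solved: list[tuple[str, str]], next_question: str) -> str:
    """Prepend every solved (question, answer) pair to the next question."""
    lines = []
    for q, a in solved:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {next_question}")
    lines.append("A:")
    return "\n".join(lines)

# After the first two sub-problems, the third prompt carries both answers:
solved = [
    ("How far did I run on Monday?", "3 miles"),
    ("How far did I run on Tuesday?", "Double Monday's 3 miles = 6 miles"),
]
print(build_prompt(solved, "How far did I run on Wednesday?"))
```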
When to Use This
- Problems with a natural easy-to-hard progression, where earlier parts feed into later parts
- When AI handles simple versions of the problem but fails on complex ones
- Multi-step calculations where each step depends on the one before
- Tasks that require building up skills or facts incrementally
When to Skip This
- Simple questions — decomposition overhead isn't worth it for easy problems
- Problems without clear ordering — if there's no "easier" and "harder," this technique doesn't help
- Dynamic questions — if you need to adapt based on what you find, use Self-Ask instead
How It Relates
Least-to-Most is the structured cousin of Self-Ask. Both decompose problems, but Self-Ask generates sub-questions adaptively (each one based on what was learned), while Least-to-Most plans the whole decomposition upfront and executes in a fixed easy-to-hard order.
It extends Break Down the Question from a single-prompt suggestion into a two-stage process with ordering and context accumulation. And it complements Chain It — where Chain It is a general-purpose multi-step pattern, Least-to-Most specifically orders steps by increasing difficulty.