The Problem It Solves
Traditional AI interactions are black boxes. You send a prompt and wait. Maybe 10 seconds, maybe 30. A loading spinner. Then a wall of text appears all at once.
If the agent called tools, made decisions, or went down the wrong path along the way, you never saw any of it. No visibility, no control, no way to course-correct.
And on the engineering side, every agent framework (LangChain, CrewAI, AutoGen) has historically required its own custom WebSocket implementation to show real-time progress.
Send and pray. You're a passenger, not a pilot.
With AG-UI, tokens stream in as they're generated. You see tool calls happen in real time. Progress updates tell you exactly what the agent is doing right now.
If the agent needs approval before taking an action—like deleting a file or sending an email—it pauses and asks. You're watching and guiding, not just waiting.
One protocol works with every agent framework. Build the frontend once, connect any backend.
Real-time visibility. You're in control.
How It Works at Runtime
- Frontend opens a connection. Your app connects to the agent's endpoint—typically via Server-Sent Events (SSE), though AG-UI is transport-agnostic and also supports WebSockets and HTTP Binary. This connection is the pipe through which all events will flow.
- Run begins. A RUN_STARTED event arrives with a run ID. Your UI can now show an "Agent is working…" indicator—the user immediately knows something is happening.
- Text streams in token by token. TEXT_MESSAGE_CONTENT events deliver the agent's words in small chunks (deltas). Users see the response forming in real time, not all at once.
- Agent calls a tool. TOOL_CALL_START and TOOL_CALL_ARGS events show which tool is being invoked and with what parameters. Your UI can display "Searching database…" with the actual query visible.
- Tool result arrives. The tool's response flows back through the event stream. Your UI updates to show what the agent found—full transparency into every step.
- Agent needs approval. For sensitive actions (sending an email, deleting a file, making a payment), the agent pauses and requests confirmation. The user sees exactly what the agent wants to do and clicks Approve or Deny.
- Run completes. A RUN_FINISHED event signals the end. Your UI shows the final state—the user saw the whole journey, not just the destination.
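The lifecycle above can be sketched as a small reducer that turns incoming events into a UI status. The event shapes here are simplified assumptions based on the AG-UI event names, not the official SDK types:

```typescript
// Minimal UI model driven by AG-UI-style lifecycle events.
// Event shapes are simplified for illustration.
type AgentEvent =
  | { type: "RUN_STARTED"; runId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "TOOL_CALL_START"; toolCallId: string; toolCallName: string }
  | { type: "RUN_FINISHED"; runId: string };

interface UiState {
  status: "idle" | "working" | "done";
  activity: string; // what to show in the status line
}

function reduceEvent(state: UiState, event: AgentEvent): UiState {
  switch (event.type) {
    case "RUN_STARTED":
      return { status: "working", activity: "Agent is working…" };
    case "TEXT_MESSAGE_CONTENT":
      return { ...state, activity: "Responding…" };
    case "TOOL_CALL_START":
      return { ...state, activity: `Calling ${event.toolCallName}…` };
    case "RUN_FINISHED":
      return { status: "done", activity: "Run complete" };
    default:
      return state;
  }
}

// Feed a short event sequence through the reducer.
const events: AgentEvent[] = [
  { type: "RUN_STARTED", runId: "run-42" },
  { type: "TOOL_CALL_START", toolCallId: "tc-1", toolCallName: "search_database" },
  { type: "RUN_FINISHED", runId: "run-42" },
];
let ui: UiState = { status: "idle", activity: "" };
for (const e of events) ui = reduceEvent(ui, e);
console.log(ui.status, "-", ui.activity);
```

A real frontend would subscribe this reducer to the event stream and re-render on every state change; the point is that each event type maps to one concrete UI update.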
Where AG-UI Shows Up
AG-UI transforms any agent interaction from an opaque process into a transparent, controllable experience. Here's where it makes the biggest difference.
AI Coding Assistants
Watch your AI assistant think through a problem, see it search files and read documentation in real time, and approve or reject code changes before they're applied.
Customer Service Dashboards
Support agents see the AI looking up customer data, formulating responses, and calling tools—in real time. They can intervene before the AI sends a response.
Data Analysis Workflows
Watch the agent query databases, process results, and build visualizations step by step. If it's going in the wrong direction, redirect it before it wastes time.
Document Review
See the agent work through a contract or report section by section. It highlights issues, suggests changes, and waits for your approval on sensitive edits.
Multi-Agent Orchestration
When multiple agents work together (via A2A), AG-UI shows you the whole picture: which agent is active, what it's doing, and how the workflow is progressing.
Approval Workflows
Human-in-the-loop by design. The agent pauses at decision points, shows you what it wants to do, and only proceeds after you approve. Critical for high-stakes actions.
Why It Matters
Think of the difference between receiving a letter and watching someone type a message to you. AG-UI turns agents from letter-senders into live collaborators.
AG-UI defines 16+ structured event types—text deltas, tool calls, state changes, lifecycle events, activity updates. Each one gives your frontend specific, actionable information about what the agent is doing right now.
Human-in-the-loop is built into the protocol. Agents can pause for approval before executing actions, ensuring your users stay in control of what the agent does on their behalf.
It works across frameworks—Microsoft Agent Framework, Google ADK, AWS Strands, LangGraph, CrewAI, and 10+ more all support it. Developed by CopilotKit with backing from Oracle, Microsoft, Google, and AWS.
Key Concepts in Plain Language
- Event Streaming: Instead of waiting for a complete response, AG-UI sends a stream of small events as the agent works. Your frontend updates in real time, like watching someone type.
- Lifecycle Events: RunStarted, StepStarted, StepFinished, RunFinished—these events tell your UI when the agent begins, progresses through steps, and completes its work.
- Text Deltas: The agent's text response arrives in small chunks (deltas), so users see words appearing as they're generated—no waiting for the full answer.
- Tool Call Events: When the agent calls a tool (like searching a database), your UI gets events showing which tool, what arguments, and the result. Full transparency.
- State Sync: AG-UI keeps the frontend and agent in sync using snapshots (full state) and deltas (changes only). If something gets out of sync, a snapshot resets everything.
- Human-in-the-Loop: The agent can pause and ask for approval before taking actions. Your UI shows what the agent wants to do and waits for user confirmation.
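The snapshot-and-delta idea can be illustrated with a tiny sketch. The event shapes below are simplified assumptions (AG-UI's state deltas are JSON Patch-style operations; only top-level "replace"/"add" paths are handled here):

```typescript
// Sketch of AG-UI-style state sync: a snapshot replaces the whole
// state, while a delta applies JSON Patch-like operations.
type StateEvent =
  | { type: "STATE_SNAPSHOT"; snapshot: Record<string, unknown> }
  | {
      type: "STATE_DELTA";
      delta: { op: "replace" | "add"; path: string; value: unknown }[];
    };

function applyStateEvent(
  state: Record<string, unknown>,
  event: StateEvent
): Record<string, unknown> {
  if (event.type === "STATE_SNAPSHOT") {
    // Snapshot: authoritative full state, discards anything out of sync.
    return { ...event.snapshot };
  }
  // Delta: apply each op to a copy. Only top-level paths like
  // "/progress" are handled, to keep the sketch short.
  const next = { ...state };
  for (const op of event.delta) {
    next[op.path.replace(/^\//, "")] = op.value;
  }
  return next;
}

let state: Record<string, unknown> = {};
state = applyStateEvent(state, {
  type: "STATE_SNAPSHOT",
  snapshot: { step: "searching", progress: 0.2 },
});
state = applyStateEvent(state, {
  type: "STATE_DELTA",
  delta: [{ op: "replace", path: "/progress", value: 0.8 }],
});
console.log(state); // { step: "searching", progress: 0.8 }
```

Deltas keep traffic small during normal operation; a snapshot is the recovery mechanism when client and agent disagree.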
The Standard
AG-UI transforms agents from background processes into interactive collaborators. Structured events replace raw text streams, giving you visibility and control over every step of the agent's work.
What It Looks Like
When an agent works through AG-UI, your frontend receives a stream of structured events. Here's what a simple interaction looks like:
```
{ "type": "RUN_STARTED", "runId": "run-42", "threadId": "thread-7" }
{ "type": "TEXT_MESSAGE_START", "messageId": "msg-1" }
{ "type": "TEXT_MESSAGE_CONTENT", "messageId": "msg-1", "delta": "Let me look that up" }
{ "type": "TEXT_MESSAGE_CONTENT", "messageId": "msg-1", "delta": " for you..." }
{ "type": "TEXT_MESSAGE_END", "messageId": "msg-1" }
{ "type": "TOOL_CALL_START", "toolCallId": "tc-1", "toolCallName": "search_database" }
{ "type": "TOOL_CALL_ARGS", "toolCallId": "tc-1", "delta": "{\"query\": \"monthly sales\"}" }
{ "type": "TOOL_CALL_END", "toolCallId": "tc-1" }
{ "type": "TEXT_MESSAGE_START", "messageId": "msg-2" }
{ "type": "TEXT_MESSAGE_CONTENT", "messageId": "msg-2", "delta": "Here are the results..." }
{ "type": "TEXT_MESSAGE_END", "messageId": "msg-2" }
{ "type": "RUN_FINISHED", "runId": "run-42" }
```
Each event tells your UI exactly what to show: text streaming in, tool calls happening, the run completing. Your frontend reacts to each event type with the appropriate visual update.
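For instance, the text deltas in that stream can be folded into complete messages keyed by messageId. This is an illustrative reducer with simplified event shapes, not the official SDK:

```typescript
// Accumulate TEXT_MESSAGE_CONTENT deltas into complete messages,
// keyed by messageId. Event shapes are simplified for illustration.
interface TextEvent {
  type: string;
  messageId?: string;
  delta?: string;
}

function collectMessages(events: TextEvent[]): Map<string, string> {
  const messages = new Map<string, string>();
  for (const e of events) {
    if (e.type === "TEXT_MESSAGE_START" && e.messageId) {
      messages.set(e.messageId, "");
    } else if (e.type === "TEXT_MESSAGE_CONTENT" && e.messageId && e.delta) {
      messages.set(e.messageId, (messages.get(e.messageId) ?? "") + e.delta);
    }
    // TEXT_MESSAGE_END would trigger a final render in a real UI.
  }
  return messages;
}

const stream: TextEvent[] = [
  { type: "TEXT_MESSAGE_START", messageId: "msg-1" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "msg-1", delta: "Let me look that up" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "msg-1", delta: " for you..." },
  { type: "TEXT_MESSAGE_END", messageId: "msg-1" },
];
console.log(collectMessages(stream).get("msg-1"));
// "Let me look that up for you..."
```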
How to Apply This
- Use AG-UI when your users need to see agent work in progress, not just final results
- Think about which agent actions should require human approval—anything that sends data externally, modifies files, or costs money
- Start with basic text streaming events to show real-time responses, then layer on tool call visualization and state management
- Check if your agent framework already supports AG-UI—15+ frameworks have built-in integration
- Pair AG-UI with A2UI for the full picture: AG-UI handles the transport and events, A2UI handles what UI components to display
- Use Server-Sent Events (SSE) as your primary transport—it's the simplest to set up and works in all modern browsers
- For complex workflows, use lifecycle events to show users a progress timeline of what the agent has done and what's next
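Since SSE is the suggested transport, here is a minimal sketch of parsing SSE frames into events. In a browser you would normally use EventSource (or the AG-UI client SDK) instead of parsing by hand; this hand-rolled parser just shows the wire format, and the payload shape is an assumption:

```typescript
// Parse a chunk of Server-Sent Events text into JSON event objects.
// Each SSE frame is one or more "data:" lines followed by a blank line.
function parseSse(chunk: string): { type: string }[] {
  const events: { type: string }[] = [];
  for (const frame of chunk.split("\n\n")) {
    const data = frame
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trim())
      .join("\n");
    if (data) events.push(JSON.parse(data));
  }
  return events;
}

const wire =
  'data: {"type":"RUN_STARTED","runId":"run-42"}\n\n' +
  'data: {"type":"RUN_FINISHED","runId":"run-42"}\n\n';
console.log(parseSse(wire).map((e) => e.type));
// ["RUN_STARTED", "RUN_FINISHED"]
```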
What to Watch Out For
Honest Limitations
- AG-UI launched in May 2025 and some features (reasoning events, interrupt/branching) are still in draft status. The core events are stable, but expect the edges to evolve.
- Event-based programming has a learning curve. If your team is used to simple request/response patterns, AG-UI's streaming model takes adjustment.
- Not all agent frameworks are supported yet—OpenAI Agent SDK and AWS Bedrock Agents integrations are still in development.
- For simple, single-response agents (ask a question, get an answer), AG-UI is overkill. A basic API call works fine for that.
- State synchronization requires careful design. If your agent's state is complex, the snapshot-delta pattern needs thoughtful implementation.
Get Started
- AG-UI Documentation — Official protocol spec, event types, and integration guides
- AG-UI GitHub Repository — Source code, SDK packages, and examples
- CopilotKit AG-UI — Primary implementation with React integration
- AG-UI Dojo — Interactive demo to see AG-UI events in action