The Problem It Solves
AI agents are getting good at individual tasks. But real work often requires specialists working together—and agents built with different frameworks can't talk to each other without expensive custom glue code. Google created A2A in April 2025 to fix this.
Each agent is an island. Your LangChain agent can't talk to a CrewAI agent or an AutoGen agent. Need them to collaborate? Build custom integration code for every pair.
IBM's VP Armand Ruiz called this "costly duct tape" between disparate systems. Every new agent you add multiplies the glue code you need to write and maintain.
Every agent pair needs its own custom integration. It doesn't scale.
Agents publish "Agent Cards"—like business cards that describe what they can do, how to reach them, and what authentication they need. Any agent can discover, contact, and delegate tasks to any other agent.
Structured task handoffs, real-time status updates, and secure communication—all through one standard protocol that every agent speaks.
One standard. Every agent speaks the same language.
How It Works at Runtime
- Agents publish Agent Cards. Each agent hosts a JSON file at a well-known URL (e.g., /.well-known/agent-card.json) describing its skills, endpoint, and auth requirements.
- Client agent gets a complex task it can't handle alone. ("Plan a corporate retreat" or "Onboard this new employee.")
- Discovery. The client fetches Agent Cards from a registry or known URLs, reading each agent's capabilities to find the right specialists.
- Task delegation. The client creates a Task object and sends it via HTTP POST to the specialist's endpoint. The specialist can accept, reject, or ask for more information.
- Real-time status. The specialist streams progress updates back to the client—working, input-required, completed—so the client always knows where things stand.
- Artifacts returned. When finished, the specialist attaches structured deliverables (documents, data, recommendations) to the completed task.
- Client combines results. The orchestrating agent merges artifacts from multiple specialists into one unified response for the user.
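The discovery-and-delegation steps above can be sketched in a few lines of Python. This is an illustrative sketch, not the official A2A SDK: the card dictionaries, field names, and helper functions (`select_specialist`, `build_task`) are assumptions for the example, and the actual HTTP POST to the specialist's endpoint is omitted.

```python
import uuid

def select_specialist(agent_cards, required_skill):
    """Pick the first agent whose card advertises the needed skill."""
    for card in agent_cards:
        skills = {s["name"] for s in card.get("skills", [])}
        if required_skill in skills:
            return card
    return None

def build_task(skill, params):
    """Construct a task payload the client would POST to the specialist."""
    return {
        "id": str(uuid.uuid4()),
        "skill": skill,
        "input": params,
        "status": "submitted",
    }

# Hypothetical cards, as if fetched from each agent's /.well-known/ URL
cards = [
    {"name": "Scheduler", "url": "https://sched.example.com/a2a",
     "skills": [{"name": "calendar_booking"}]},
    {"name": "Researcher", "url": "https://research.example.com/a2a",
     "skills": [{"name": "literature_search"}]},
]

specialist = select_specialist(cards, "literature_search")
task = build_task("literature_search", {"query": "multi-agent protocols"})
print(specialist["name"], task["status"])  # Researcher submitted
```

In a real deployment the client would POST this payload to `specialist["url"]` and then listen for streamed status updates.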
Where A2A Shows Up
A2A shines when you need multiple specialized agents coordinating on complex workflows. Here's where it makes the biggest difference.
Customer Support Escalation
A front-line support agent handles basic questions. When it hits something complex, it hands off to a specialist agent—billing, technical, or account management—with full context preserved.
Hiring Workflows
Google demonstrated a hiring pipeline where a primary agent delegates to sourcing, scheduling, and background-check agents. Each specialist does its part, reports back, and the primary orchestrates the whole process.
Research Teams
A research coordinator agent breaks a complex question into parts, delegating literature search to one agent, data analysis to another, and synthesis to a third. Each specialist contributes its expertise.
Enterprise Operations
ServiceNow uses A2A in their AI Agent Control Tower, letting agents from different departments—IT, HR, finance—collaborate on cross-functional requests without manual handoffs.
DevOps Pipelines
A deployment agent discovers issues, delegates diagnosis to a monitoring agent, gets fix recommendations from an analysis agent, and coordinates the rollback—all through A2A.
Supply Chain Coordination
Inventory agents, logistics agents, and procurement agents from different vendors coordinate through A2A to optimize ordering, shipping, and stock levels across the supply chain.
Why It Matters
Multi-agent systems deliver 45% faster problem resolution and 60% higher accuracy on complex tasks—but only if agents can coordinate without custom plumbing for every pair.
A2A is backed by over 100 companies including Google, Microsoft, AWS, IBM, Salesforce, and SAP. In September 2025, IBM merged its competing Agent Communication Protocol (ACP) into A2A, consolidating the industry around one standard. Early adopters like PayPal are already using it in production.
Think of it like international trade agreements for AI. Without A2A, every pair of agents needs a bilateral deal. With A2A, everyone follows one set of rules and commerce flows freely.
A2A complements MCP perfectly—MCP handles how agents talk to tools (vertical), while A2A handles how agents talk to each other (horizontal). Together, they form a complete stack for agentic AI.
Key Concepts in Plain Language
- Agent Card: A JSON file that describes what an agent can do, how to reach it, and what security it requires. Think of it as a business card that other agents can read to decide whether to work with you.
- Task Lifecycle: Every job follows a clear path: Working, Input Required, Auth Required, Completed, Failed, Canceled, or Rejected. Both agents always know where things stand.
- Discovery: Agents find each other through well-known URLs (like a phone book), registries (like a directory), or direct endpoints. No one needs to hard-code connections.
- Delegation: One agent sends a task to another, which can accept, reject, or ask for more information. The requesting agent gets real-time status updates via streaming.
- Artifacts: When an agent finishes a task, it returns deliverables—documents, data, reports—in a structured format that other agents can use directly.
- Framework Agnostic: A LangChain agent, a CrewAI agent, and a Google ADK agent can all participate in the same workflow. The framework is hidden behind the A2A protocol.
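The task lifecycle above can be made concrete as a small state machine. The transition table here is an assumption inferred from the states listed in this section, not copied from the normative A2A specification, so treat it as a sketch:

```python
# Which states each non-terminal state may move to (illustrative, assumed).
TRANSITIONS = {
    "submitted": {"working", "rejected"},
    "working": {"input-required", "auth-required",
                "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
    "auth-required": {"working", "canceled"},
}

# States from which a task can no longer move.
TERMINAL = {"completed", "failed", "canceled", "rejected"}

def advance(state, new_state):
    """Move a task to a new state, enforcing the transition table."""
    if state in TERMINAL:
        raise ValueError(f"task already terminal: {state}")
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "submitted"
for step in ("working", "input-required", "working", "completed"):
    state = advance(state, step)
print(state)  # completed
```

Both sides of a delegation can validate updates against a table like this, so a buggy specialist can never report an impossible state sequence.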
The Standard
A2A lets agents from any vendor discover each other and collaborate on tasks without exposing their internal architecture. You get the best agent for each job, working together through a universal protocol.
What It Looks Like
Every A2A agent publishes an Agent Card—a JSON file that tells other agents what it can do and how to reach it:
```json
{
  "name": "Research Assistant",
  "description": "Searches academic databases and summarizes papers",
  "url": "https://agent.example.com/a2a",
  "version": "1.0.0",
  "authentication": {
    "schemes": ["OAuth2"],
    "credentials": "Bearer token required"
  },
  "skills": [
    {
      "name": "literature_search",
      "description": "Search and summarize academic papers",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": { "type": "string" }
        }
      }
    }
  ],
  "capabilities": {
    "streaming": true,
    "pushNotifications": true
  }
}
```
Any agent that discovers this card can send it a task. The research agent accepts, works on it, and streams back results—all through the standard A2A protocol.
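A client consuming a card like this one might decide whether to delegate with a check along these lines. The `can_delegate` helper is my own illustrative name, not part of any A2A SDK:

```python
import json

# The Agent Card shown above, trimmed to the fields the check needs.
card = json.loads("""
{
  "name": "Research Assistant",
  "url": "https://agent.example.com/a2a",
  "authentication": {"schemes": ["OAuth2"]},
  "skills": [{"name": "literature_search"}],
  "capabilities": {"streaming": true}
}
""")

def can_delegate(card, skill, supported_auth):
    """True if the card advertises the skill and an auth scheme we support."""
    has_skill = any(s["name"] == skill for s in card["skills"])
    auth_ok = bool(set(card["authentication"]["schemes"]) & supported_auth)
    return has_skill and auth_ok

print(can_delegate(card, "literature_search", {"OAuth2"}))  # True
```

The same check run with an unsupported auth scheme, or a skill the card does not list, returns False and the client moves on to the next card.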
How to Apply This
- Think about which tasks benefit from specialized agents working together rather than one agent doing everything
- Look for frameworks that support A2A natively—LangChain, Google ADK, and BeeAI all have built-in support
- Start simple: two agents handing off a task. Once that works, scale to more complex multi-agent orchestration
- Design your Agent Cards carefully—clear capability descriptions help other agents decide when to delegate to yours
- Use A2A and MCP together: A2A for agent-to-agent coordination, MCP for each agent's tool access
- For enterprise use, evaluate the OAuth 2.0 security model and Linux Foundation governance for compliance
- Consider starting with internal agents before opening up to external ones—trust boundaries matter
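The "start simple: two agents handing off a task" advice can be prototyped in-process before any HTTP is involved. This sketch fakes the streaming updates with a Python generator; the agent functions and message shapes are assumptions for illustration only:

```python
def specialist_agent(task):
    """Specialist streams status updates, then the final artifact."""
    yield {"status": "working"}
    summary = f"summary of: {task['input']['query']}"
    yield {"status": "completed",
           "artifact": {"type": "text", "content": summary}}

def client_agent(query):
    """Client delegates one task and collects the artifact when done."""
    task = {"skill": "literature_search", "input": {"query": query}}
    for update in specialist_agent(task):
        if update["status"] == "completed":
            return update["artifact"]["content"]
    return None

print(client_agent("agent interoperability"))  # summary of: agent interoperability
```

Once this handoff pattern works locally, the generator can be swapped for a real A2A endpoint that streams the same status updates over the network.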
What to Watch Out For
Honest Limitations
- A2A is at v0.3.0 with a draft v1.0 in progress—expect continued evolution as the specification matures. Great for learning and prototyping, but plan for updates in production.
- OpenAI has notably not joined the A2A consortium, which could fragment the multi-agent ecosystem.
- Debugging distributed agent workflows is genuinely hard. Traditional debugging approaches struggle with non-deterministic, multi-agent interactions.
- Agent-to-agent chains introduce unpredictable latency—each hop adds variable delay depending on task complexity.
- For simple tool-calling scenarios, A2A is overkill. If a single agent with MCP tools can do the job, it probably should.
Get Started
- A2A Protocol Specification — Official spec with full protocol details and examples
- A2A GitHub Repository — Source code, samples, and framework integrations
- Google Developer Blog: A2A — Origin story and architecture overview from Google
- IBM: What is A2A Protocol? — Enterprise perspective and practical guide