The Problem It Solves

AI agents are getting good at individual tasks. But real work often requires specialists working together—and agents built with different frameworks can't talk to each other without expensive custom glue code. Google released the Agent2Agent (A2A) protocol in April 2025 to fix this.

Without A2A

Each agent is an island. Your LangChain agent can't talk to a CrewAI agent or an AutoGen agent. Need them to collaborate? Build custom integration code for every pair.

IBM's VP Armand Ruiz called this "costly duct tape" between disparate systems. Every new agent you add multiplies the glue code you need to write and maintain: connecting n agents pairwise means up to n(n−1)/2 custom integrations.

Every agent pair needs its own custom integration. It doesn't scale.

With A2A

Agents publish "Agent Cards"—like business cards that describe what they can do, how to reach them, and what authentication they need. Any agent can discover, contact, and delegate tasks to any other agent.

Structured task handoffs, real-time status updates, and secure communication—all through one standard protocol that every agent speaks.

One standard. Every agent speaks the same language.

How It Works at Runtime

  1. Agents publish Agent Cards. Each agent hosts a JSON file at a well-known URL (e.g., /.well-known/agent-card.json) describing its skills, endpoint, and auth requirements.
  2. Client agent gets a complex task it can't handle alone. ("Plan a corporate retreat" or "Onboard this new employee.")
  3. Discovery. The client fetches Agent Cards from a registry or known URLs, reading each agent's capabilities to find the right specialists.
  4. Task delegation. The client creates a Task object and sends it via HTTP POST to the specialist's endpoint. The specialist can accept, reject, or ask for more information.
  5. Real-time status. The specialist streams progress updates back to the client—working, input-required, completed—so the client always knows where things stand.
  6. Artifacts returned. When finished, the specialist attaches structured deliverables (documents, data, recommendations) to the completed task.
  7. Client combines results. The orchestrating agent merges artifacts from multiple specialists into one unified response for the user.
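The loop above can be sketched in a few lines of Python. This is an illustrative simulation, not a real A2A client: the agent cards, the `discover`/`delegate` helpers, and the hiring subtasks are stand-ins for what would actually be JSON traffic over HTTP.

```python
# Illustrative sketch of the A2A runtime loop with in-memory stand-ins.
# Real A2A traffic is JSON over HTTP; here discovery and delegation are
# simulated so the control flow of steps 1-7 is easy to follow.

# Steps 1 and 3: agent cards, as a client might fetch them from known URLs.
AGENT_CARDS = [
    {"name": "Sourcing Agent", "url": "https://sourcing.example.com/a2a",
     "skills": [{"name": "candidate_sourcing"}]},
    {"name": "Scheduler Agent", "url": "https://scheduler.example.com/a2a",
     "skills": [{"name": "interview_scheduling"}]},
]

def discover(skill: str) -> dict:
    """Return the first agent whose card advertises the needed skill."""
    for card in AGENT_CARDS:
        if any(s["name"] == skill for s in card["skills"]):
            return card
    raise LookupError(f"no agent offers skill {skill!r}")

# Steps 4-6: delegate a task, collect streamed states and final artifacts.
def delegate(card: dict, task: str) -> dict:
    updates = ["submitted", "working", "completed"]  # streamed task states
    artifact = {"agent": card["name"], "result": f"done: {task}"}
    return {"status": updates[-1], "updates": updates, "artifacts": [artifact]}

# Step 7: the orchestrating agent merges artifacts from each specialist.
def orchestrate(subtasks: dict) -> list:
    results = []
    for skill, task in subtasks.items():
        card = discover(skill)          # find the right specialist
        outcome = delegate(card, task)  # hand off and wait for completion
        results.extend(outcome["artifacts"])
    return results

merged = orchestrate({
    "candidate_sourcing": "find 5 backend candidates",
    "interview_scheduling": "book screens for next week",
})
print([a["agent"] for a in merged])
```

The key structural point survives the simplification: the orchestrator never needs framework-specific code per specialist—it only needs the card's skills and endpoint.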

Where A2A Shows Up

A2A shines when you need multiple specialized agents coordinating on complex workflows. Here's where it makes the biggest difference.

Customer Support Escalation

A front-line support agent handles basic questions. When it hits something complex, it hands off to a specialist agent—billing, technical, or account management—with full context preserved.

"The support agent recognized a billing issue and seamlessly handed it to the billing specialist agent."

Hiring Workflows

Google demonstrated a hiring pipeline where a primary agent delegates to sourcing, scheduling, and background-check agents. Each specialist does its part, reports back, and the primary orchestrates the whole process.

"Three agents handled candidate sourcing, interview scheduling, and background checks—coordinated by one."

Research Teams

A research coordinator agent breaks a complex question into parts, delegating literature search to one agent, data analysis to another, and synthesis to a third. Each specialist contributes its expertise.

"The coordinator split the research across specialists who each reported back with findings."

Enterprise Operations

ServiceNow uses A2A in its AI Agent Control Tower, letting agents from different departments—IT, HR, finance—collaborate on cross-functional requests without manual handoffs.

"An employee onboarding request triggers IT, HR, and facilities agents working in parallel."

DevOps Pipelines

A deployment agent discovers issues, delegates diagnosis to a monitoring agent, gets fix recommendations from an analysis agent, and coordinates the rollback—all through A2A.

"The deployment agent detected the failure and orchestrated diagnosis and rollback automatically."

Supply Chain Coordination

Inventory agents, logistics agents, and procurement agents from different vendors coordinate through A2A to optimize ordering, shipping, and stock levels across the supply chain.

"Agents from three different vendors coordinated a restocking workflow without custom integration."

Why It Matters

Multi-agent systems deliver 45% faster problem resolution and 60% higher accuracy on complex tasks—but only if agents can coordinate without custom plumbing for every pair.

A2A is backed by over 100 companies including Google, Microsoft, AWS, IBM, Salesforce, and SAP. In September 2025, IBM merged its competing Agent Communication Protocol (ACP) into A2A, consolidating the industry around one standard. Early adopters like PayPal are already using it in production.

Think of it like international trade agreements for AI. Without A2A, every pair of agents needs a bilateral deal. With A2A, everyone follows one set of rules and commerce flows freely.

A2A complements the Model Context Protocol (MCP) perfectly—MCP handles how an agent talks to tools and data sources (vertical), while A2A handles how agents talk to each other (horizontal). Together, they form a complete stack for agentic AI.

Key Concepts in Plain Language

The Standard

A2A lets agents from any vendor discover each other and collaborate on tasks without exposing their internal architecture. You get the best agent for each job, working together through a universal protocol.

What It Looks Like

Every A2A agent publishes an Agent Card—a JSON file that tells other agents what it can do and how to reach it:

{
  "name": "Research Assistant",
  "description": "Searches academic databases and summarizes papers",
  "url": "https://agent.example.com/a2a",
  "version": "1.0.0",
  "authentication": {
    "schemes": ["OAuth2"],
    "credentials": "Bearer token required"
  },
  "skills": [
    {
      "name": "literature_search",
      "description": "Search and summarize academic papers",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": { "type": "string" }
        }
      }
    }
  ],
  "capabilities": {
    "streaming": true,
    "pushNotifications": true
  }
}

Any agent that discovers this card can send it a task. The research agent accepts, works on it, and streams back results—all through the standard A2A protocol.
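A minimal client interaction can be sketched as well. A2A requests are JSON-RPC 2.0 over HTTP; the function below only builds the request body (no network call), and the exact field names should be checked against the current spec rather than taken as normative.

```python
import json
import uuid

def build_send_request(text: str) -> dict:
    """Build a JSON-RPC 2.0 request asking an A2A agent to start on a task.
    The method name and message shape follow A2A's "message/send" binding;
    treat the field details as illustrative. A real client would POST this
    body to the "url" from the Agent Card it discovered at
    /.well-known/agent-card.json, authenticating as the card requires."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),        # request id, echoed in the response
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

req = build_send_request("Summarize recent papers on agent protocols")
print(json.dumps(req, indent=2))
```

The response would carry the created task's id and state, which the client then follows via streaming updates or polling until artifacts arrive.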

How to Apply This

What to Watch Out For

Honest Limitations

  • A2A is at v0.3.0 with a draft v1.0 in progress—expect continued evolution as the specification matures. Great for learning and prototyping, but plan for updates in production.
  • OpenAI has notably not joined the A2A consortium, which could fragment the multi-agent ecosystem.
  • Debugging distributed agent workflows is genuinely hard. Traditional debugging approaches struggle with non-deterministic, multi-agent interactions.
  • Agent-to-agent chains introduce unpredictable latency—each hop adds variable delay depending on task complexity.
  • For simple tool-calling scenarios, A2A is overkill. If a single agent with MCP tools can do the job, it probably should.

Get Started