A new contribution proposes decentralized identity verification for A2A agents — addressing how autonomous systems can verify each other's identity before delegating sensitive tasks.
Google's Agent-to-Agent Protocol defines how AI agents communicate and collaborate, but it stops short of specifying how agents should verify each other's identity and establish trust before delegating work.
PR #1455 addresses this gap by integrating Agent-Mesh, a cryptographic trust layer built on CMVK (a decentralized identity framework for agents).
Before an agent delegates a task to another agent, it performs a trust handshake:
```python
from agentmesh.a2a import TrustHandshake, TrustedAgentCard

# Load peer's agent card (includes cryptographic identity)
peer_card = TrustedAgentCard.from_a2a_json(peer_card_json)

# Verify before delegating a task
handshake = TrustHandshake(my_identity)
result = await handshake.verify_peer(peer_card)

if result.trusted:
    # Safe to delegate: peer's identity is verified
    task = await a2a_client.create_task(peer_card.url, task_spec)
```
The verification can check multiple factors, including the peer's CMVK DID, its accumulated trust score, and any attestations attached to its Agent Card.
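To make the decision concrete, here is a minimal sketch of how such factors might combine into a single trust decision. The field names, threshold, and `is_trusted` helper are illustrative assumptions, not the Agent-Mesh API:

```python
# Hypothetical policy check; names and thresholds are assumptions,
# not part of the Agent-Mesh proposal.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    signature_valid: bool   # card is signed by the key behind the DID
    trust_score: float      # accumulated reputation, 0.0 to 1.0
    attestation_count: int  # third-party attestations on the card

def is_trusted(result: VerificationResult, min_score: float = 0.8) -> bool:
    # All factors must pass: valid signature, sufficient reputation,
    # and at least one independent attestation.
    return (
        result.signature_valid
        and result.trust_score >= min_score
        and result.attestation_count > 0
    )

print(is_trusted(VerificationResult(True, 0.95, 3)))  # True
print(is_trusted(VerificationResult(True, 0.50, 3)))  # False
```

A real deployment would presumably make the threshold and required factors configurable per task sensitivity.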
**The enterprise problem:** Organizations deploying A2A agents need to control which external agents can interact with their systems. Without identity verification, any agent can claim to be anything. The proposal targets several concrete attack classes:

- **Agent impersonation:** A malicious agent claims to be a trusted service provider. Without identity verification, the victim agent has no way to detect the impersonation.
- **Capability inflation:** An agent advertises capabilities it doesn't have, hoping to receive tasks (and data) it shouldn't access.
- **Delegation chain attacks:** Agent A claims to be acting on behalf of trusted Agent B, but the delegation is forged.
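The delegation chain attack in particular comes down to verifying a signature: Agent B must sign its delegation to Agent A, and the receiver must check that signature before honoring the claim. The sketch below uses HMAC over a shared secret as a stand-in for CMVK's signatures (which would presumably be asymmetric); all names are illustrative:

```python
# Illustrative defense against forged delegation chains. HMAC with a
# shared secret is a stand-in for real asymmetric signatures.
import hashlib
import hmac

def sign_delegation(delegator_key: bytes, delegate_did: str) -> str:
    """Delegator (Agent B) signs a token naming its delegate (Agent A)."""
    return hmac.new(delegator_key, delegate_did.encode(), hashlib.sha256).hexdigest()

def verify_delegation(delegator_key: bytes, delegate_did: str, token: str) -> bool:
    """Receiver checks that the delegation claim was actually signed by B."""
    expected = sign_delegation(delegator_key, delegate_did)
    return hmac.compare_digest(expected, token)

key_b = b"agent-b-secret"
token = sign_delegation(key_b, "did:cmvk:agent-a")
print(verify_delegation(key_b, "did:cmvk:agent-a", token))  # True
print(verify_delegation(key_b, "did:cmvk:mallory", token))  # False: forged claim
```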
The CMVK identity model doesn't require a central certificate authority. Agents generate their own cryptographic identities and can accumulate trust through attestations from peers and a track record of successful interactions.
This mirrors how human trust works — reputation built through interactions, not central decree.
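To make self-generated identity concrete, here is a minimal sketch of an agent minting its own identifier with no authority involved. CMVK's actual key format and DID derivation are not specified in the proposal, so the hash-based encoding below is purely an assumption:

```python
# Illustrative only: CMVK presumably uses asymmetric keypairs; hashing
# random key material is a stand-in to show self-sovereign identity.
import hashlib
import secrets

def generate_identity() -> dict:
    """Generate a self-sovereign agent identity (no certificate authority)."""
    # Private key material stays with the agent; only a public
    # fingerprint is ever shared in the Agent Card.
    private_key = secrets.token_bytes(32)
    fingerprint = hashlib.sha256(private_key).hexdigest()
    return {
        "did": f"did:cmvk:{fingerprint[:16]}",
        "private_key": private_key,
    }

identity = generate_identity()
print(identity["did"])  # e.g. did:cmvk:3f9a...
```

The key property is that any agent can mint an identity unilaterally; trust in that identity is then earned through attestations and interaction history rather than granted up front.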
The proposal stores trust metadata in an `_agentmesh` extension field within Agent Cards. Standard A2A implementations can ignore these fields entirely, preserving full interoperability.
```json
{
  "name": "Shopping Assistant",
  "url": "https://shop.example.com/a2a",
  "capabilities": ["search", "purchase"],
  "_agentmesh": {
    "cmvk_did": "did:cmvk:abc123...",
    "trust_score": 0.95,
    "attestations": [...]
  }
}
```
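The interoperability claim can be demonstrated directly: a standard A2A reader simply never looks at the extension field, while a trust-aware reader opts in if it is present. The snippet below uses the card shape from the example above; the reading logic itself is illustrative:

```python
# Sketch of the opt-in extension pattern; field names match the example
# card above, but the reading logic is illustrative, not a real client.
import json

card_json = """{
  "name": "Shopping Assistant",
  "url": "https://shop.example.com/a2a",
  "capabilities": ["search", "purchase"],
  "_agentmesh": {"cmvk_did": "did:cmvk:abc123", "trust_score": 0.95}
}"""

card = json.loads(card_json)

# A plain A2A client reads only the standard fields...
print(card["name"], card["url"])

# ...while a trust-aware client consumes the extension if present.
mesh = card.get("_agentmesh", {})
print(mesh.get("cmvk_did"))  # did:cmvk:abc123
```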
The proposal is a contribution to the A2A ecosystem, not a core spec change. Several questions remain:
**Contribution vs. spec:** This is filed under `contrib/`, meaning it's community-contributed infrastructure rather than official A2A. Adoption will depend on whether the core team blesses the approach or alternatives emerge.
Enterprise adoption of agent-to-agent communication requires answers to identity and trust questions. Whether through Agent-Mesh or an alternative, the A2A ecosystem will need verifiable agent identity, honest capability claims, and trustworthy delegation. This PR is a concrete proposal for all three. Even if the implementation details change, the problems it addresses are real and will need solutions before A2A sees serious enterprise deployment.
As AI agents become autonomous participants in digital ecosystems — making purchases, handling data, performing actions on behalf of users — identity and trust become critical infrastructure. The Agent-Mesh Trust Layer proposal is an early but substantive contribution to this problem space.
Watch the PR discussion for feedback from the A2A core team and community.