Theorem T-084

Agent Communication Protocol: Decentralized Coordination

Beyond Client-Server

Current AI agent architectures rely on centralized coordination. Agents communicate through cloud APIs, creating bottlenecks and external dependencies. The Agent Communication Protocol (ACP) takes a different approach: agents run as local processes communicating through shared memory, coordinated by the Trinity runtime.

THE ACP INSIGHT

Agents don't need to talk to a server to talk to each other. They need a shared memory buffer and a common language. ACP provides both—local-first coordination with cryptographic provenance.

The Protocol Stack

ACP operates in layers, each handling different aspects of agent coordination:

L4

Agent Layer

Trae (ByteDance), Claude Code, opencode—each agent implements the ACP client interface

L3

Message Layer

Structured communication format with intent, payload, and provenance hash

L2

Coordination Layer

Trinity router directs messages between agents based on theater availability

L1

Transport Layer

Shared memory buffers via /dev/shm—zero-copy, zero-network, zero-latency

L0

Genesis Layer

Hardware-bound identity ensures agents cannot be cloned or replayed on different machines
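The L1 transport can be sketched in a few lines of Python. The standard library's `multiprocessing.shared_memory` module is backed by `/dev/shm` on Linux, so it illustrates the zero-network property directly; the segment name and the 4-byte length-prefix framing below are illustrative assumptions, not the real ACP wire format.

```python
# Sketch of the L1 transport: a message written into a shared memory
# segment (backed by /dev/shm on Linux) and read back with no network
# hop. Segment name and framing are illustrative assumptions only.
import struct
from multiprocessing import shared_memory

SEGMENT = "acp_demo_buffer"  # hypothetical segment name
SIZE = 4096

def write_message(shm: shared_memory.SharedMemory, payload: bytes) -> None:
    """Frame the payload with a 4-byte length prefix and copy it in."""
    shm.buf[:4] = struct.pack("<I", len(payload))
    shm.buf[4:4 + len(payload)] = payload

def read_message(shm: shared_memory.SharedMemory) -> bytes:
    """Read the length prefix, then slice out the payload."""
    (length,) = struct.unpack("<I", bytes(shm.buf[:4]))
    return bytes(shm.buf[4:4 + length])

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=SEGMENT, create=True, size=SIZE)
    try:
        write_message(shm, b'{"intent": "ping"}')
        # A second process would attach with SharedMemory(name=SEGMENT)
        # and see the same bytes -- nothing ever crosses a socket.
        print(read_message(shm))
    finally:
        shm.close()
        shm.unlink()
```

A peer process attaches to the same named segment and reads the buffer in place, which is what makes the path zero-copy.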

Integrated Agents

ACP is not theoretical—it is implemented. Multiple agent systems already communicate through the protocol:

Trae Agent

ByteDance development environment agent. Baked directly into the Trinity runtime for code generation and analysis.

ByteDance Integration

Claude Code Agent

Anthropic's Claude optimized for local execution. Runs on dGPU with provenance tracking on every suggestion.

Anthropic Integration

opencode CLI Agent

Command-line interface agent for headless operation. Same capabilities, terminal access.

OpenCode Integration

Custom Agents

User-defined agents through ACP SDK. Build your own and integrate into the Trinity ecosystem.

Extensible Architecture
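The ACP SDK's actual interface is not public, so the following is a purely hypothetical sketch of what a user-defined agent could look like: a class that registers handlers per intent and dispatches incoming messages to them. The names `AcpAgent`, `on_intent`, and `handle` are invented for illustration.

```python
# Hypothetical sketch of a custom ACP agent. Class and method names
# are invented; the real SDK interface is part of the protected core.
from typing import Callable, Dict

class AcpAgent:
    """Minimal agent: routes incoming messages by their intent field."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._handlers: Dict[str, Callable[[dict], dict]] = {}

    def on_intent(self, intent: str):
        """Decorator registering a handler for one intent."""
        def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
            self._handlers[intent] = fn
            return fn
        return register

    def handle(self, message: dict) -> dict:
        handler = self._handlers.get(message["intent"])
        if handler is None:
            return {"intent": "error", "payload": "unknown intent"}
        return handler(message["payload"])

agent = AcpAgent("lint-bot")

@agent.on_intent("lint")
def lint(payload: dict) -> dict:
    # Trivial "analysis": flag lines longer than 80 characters.
    long_lines = [i for i, line in enumerate(payload["source"].splitlines())
                  if len(line) > 80]
    return {"intent": "lint.result", "payload": {"long_lines": long_lines}}
```

In this sketch the Trinity router would call `handle` with each message addressed to the agent; unknown intents return an error reply rather than raising.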

Message Structure

Every ACP message carries provenance: a cryptographic hash binding the payload to the agent that produced it.
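The exact message format is not disclosed, but one plausible shape, consistent with the L3 description above (intent, payload, provenance hash), can be sketched as follows. The field names and the SHA-256-over-canonical-JSON scheme are assumptions for illustration.

```python
# One plausible shape for an ACP message: intent, payload, and a
# provenance hash chaining the message to its sender and predecessor.
# Field names and hashing scheme are assumptions, not the real format.
import hashlib
import json
from dataclasses import dataclass, field

def provenance_hash(sender: str, intent: str, payload: dict, parent: str) -> str:
    """Hash a canonical JSON encoding so any party can re-verify it."""
    canonical = json.dumps(
        {"sender": sender, "intent": intent, "payload": payload, "parent": parent},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

@dataclass
class AcpMessage:
    sender: str          # registered agent name
    intent: str          # e.g. "code.review.request"
    payload: dict        # intent-specific body
    parent: str = ""     # provenance hash of the message being answered
    provenance: str = field(default="", init=False)

    def __post_init__(self) -> None:
        self.provenance = provenance_hash(
            self.sender, self.intent, self.payload, self.parent)

    def verify(self) -> bool:
        """Recompute the hash; tampering with any field breaks it."""
        return self.provenance == provenance_hash(
            self.sender, self.intent, self.payload, self.parent)
```

Because the `parent` field carries the hash of the message being answered, a conversation forms a verifiable chain: altering any earlier message invalidates every hash downstream.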

Decentralized by Design

ACP has no central coordinator. Agents discover each other through shared memory registration. Messages route through the Trinity theater system based on real-time availability. If one agent fails, others continue. If one theater overheats, work migrates. The system is resilient because it has no single point of failure.
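Coordinator-free failover of this kind can be sketched with a heartbeat table: agents self-register, refresh a timestamp while healthy, and the router simply skips entries that stop heartbeating. The registry here stands in for the shared-memory registration table; the names and the timeout threshold are invented for illustration.

```python
# Sketch of availability-based routing with no central coordinator:
# agents self-register in a shared table and the router skips entries
# that stop heartbeating. Names and thresholds are illustrative.
import time
from typing import Dict, List, Optional

HEARTBEAT_TIMEOUT = 2.0  # seconds; illustrative threshold

class Registry:
    """Stands in for the shared-memory registration table."""

    def __init__(self) -> None:
        self._last_seen: Dict[str, float] = {}

    def register(self, agent: str) -> None:
        self._last_seen[agent] = time.monotonic()

    def heartbeat(self, agent: str) -> None:
        self._last_seen[agent] = time.monotonic()

    def alive(self, agent: str) -> bool:
        seen = self._last_seen.get(agent)
        return seen is not None and time.monotonic() - seen < HEARTBEAT_TIMEOUT

def route(registry: Registry, candidates: List[str]) -> Optional[str]:
    """Pick the first live candidate; failed agents are skipped."""
    for agent in candidates:
        if registry.alive(agent):
            return agent
    return None  # no live agent: caller retries or queues the work
```

Because liveness is read from the shared table at routing time rather than assumed, a crashed agent drops out of the rotation automatically and work flows to the remaining candidates.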

The exact message format, the discovery protocol, the failure recovery mechanisms—those remain within the protected core. We present the architecture. The implementation is the secret sauce.