<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Combinatorics on Corebaseit — POS · EMV · Payments · AI</title><link>https://corebaseit.com/tags/combinatorics/</link><description>Recent content in Combinatorics on Corebaseit — POS · EMV · Payments · AI</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>contact@corebaseit.com (Vincent Bevia)</managingEditor><webMaster>contact@corebaseit.com (Vincent Bevia)</webMaster><lastBuildDate>Fri, 24 Apr 2026 10:00:00 +0100</lastBuildDate><atom:link href="https://corebaseit.com/tags/combinatorics/index.xml" rel="self" type="application/rss+xml"/><item><title>The Hidden Mathematics of Multi-Agent AI: Why Agent Communication Does Not Scale Linearly</title><link>https://corebaseit.com/corebaseit_posts_in_review/series/agents/multi-agent-communication-topology_part4/</link><pubDate>Fri, 24 Apr 2026 10:00:00 +0100</pubDate><author>contact@corebaseit.com (Vincent Bevia)</author><guid>https://corebaseit.com/corebaseit_posts_in_review/series/agents/multi-agent-communication-topology_part4/</guid><description>&lt;p>&lt;em>This is Part IV of my series on multi-agent AI architecture. &lt;a class="link" href="https://corebaseit.com/posts_in_review/super-agents-multi-agent-communication/" >Part I&lt;/a> covered centralized orchestration. &lt;a class="link" href="https://corebaseit.com/posts_in_review/swarm-intelligence-opposite-architectural-bet/" >Part II&lt;/a> covered swarm intelligence. &lt;a class="link" href="https://corebaseit.com/posts_in_review/multi-agent-systems-scale-vertically/" >Part III&lt;/a> covered the horizontal scaling problem. This post looks at the communication topology underneath all of them.&lt;/em>&lt;/p>
&lt;p>Multi-agent AI systems are often described as networks of specialized agents working toward a shared goal.&lt;/p>
&lt;p>One agent plans. Another searches. Another writes code. Another validates. Another interacts with tools. Another supervises the run.&lt;/p>
&lt;p>At small scale, that sounds clean.&lt;/p>
&lt;p>The problem appears when every agent can talk directly to every other agent. Underneath the architecture sits an old counting problem:&lt;/p>
&lt;p>How many communication channels exist in a fully connected group?&lt;/p>
$$
C_c = \frac{n(n-1)}{2}
$$&lt;p>Where:&lt;/p>
&lt;ul>
&lt;li>$C_c$ is the number of communication channels&lt;/li>
&lt;li>$n$ is the number of agents, systems, teams, or participants&lt;/li>
&lt;/ul>
&lt;!-- IMAGE: communication_channel.png -->
&lt;p style="text-align: center;">
&lt;img src="https://corebaseit.com/diagrams/communication_channel.png" alt="Communication topology for a fully connected multi-agent system" style="max-width: 900px; width: 100%;" />
&lt;/p>
&lt;p>That formula counts unique pairwise connections among $n$ participants. It shows up in graph theory, network design, team coordination, and distributed systems. It also applies to agentic AI.&lt;/p>
&lt;p>If every agent can talk to every other agent, communication complexity grows quadratically with the number of agents, not linearly.&lt;/p>
&lt;h2 id="the-counting-problem">The counting problem
&lt;/h2>&lt;p>The same formula can be written as:&lt;/p>
$$
C_c = \frac{n^2 - n}{2}
$$&lt;p>For large values of $n$, the dominant term is $n^2$, so the growth rate is:&lt;/p>
$$
O(n^2)
$$&lt;p>That matters because each new agent adds a path to every agent already in the system, not a single new path overall.&lt;/p>
&lt;p>Adding the $n$th agent creates:&lt;/p>
$$
n - 1
$$&lt;p>new possible channels.&lt;/p>
&lt;p>The marginal coordination cost rises with the size of the system.&lt;/p>
&lt;p>Here is the simple count:&lt;/p>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Number of agents&lt;/th>
&lt;th>Direct communication channels&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>1&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>3&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>6&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>5&lt;/td>
&lt;td>10&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>10&lt;/td>
&lt;td>45&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>20&lt;/td>
&lt;td>190&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>50&lt;/td>
&lt;td>1,225&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>100&lt;/td>
&lt;td>4,950&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>1,000&lt;/td>
&lt;td>499,500&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
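&lt;p>&lt;em>A minimal sketch in Python that reproduces the table directly from the formula (the function name is mine, purely illustrative):&lt;/em>&lt;/p>

```python
def channels(n: int) -> int:
    """Unique pairwise channels in a fully connected group of n agents."""
    return n * (n - 1) // 2

for n in (2, 3, 4, 5, 10, 20, 50, 100, 1000):
    print(f"{n:>5} agents: {channels(n):>9,} channels")
```

&lt;p>&lt;em>Equivalently, &lt;code>math.comb(n, 2)&lt;/code> from the standard library gives the same count, since channels are just unordered pairs.&lt;/em>&lt;/p>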
&lt;p>This is the same reason communication breaks down in large teams, microservice meshes, and distributed control systems. The AI version is not exempt from the math.&lt;/p>
&lt;h2 id="why-this-matters-for-agentic-ai">Why this matters for agentic AI
&lt;/h2>&lt;p>In a multi-agent system, a communication channel is rarely just a network path or API call.&lt;/p>
&lt;p>It can also be:&lt;/p>
&lt;ul>
&lt;li>a message route&lt;/li>
&lt;li>a context-sharing path&lt;/li>
&lt;li>a delegation edge&lt;/li>
&lt;li>a tool invocation dependency&lt;/li>
&lt;li>a state synchronization path&lt;/li>
&lt;li>a trust boundary&lt;/li>
&lt;li>an audit path&lt;/li>
&lt;li>a retry path&lt;/li>
&lt;li>an authorization relationship&lt;/li>
&lt;/ul>
&lt;p>Once agents are allowed to exchange context, trigger actions, or call tools on each other&amp;rsquo;s behalf, the number of possible interactions becomes an operational problem rather than a purely graph-theoretic one.&lt;/p>
&lt;p>At that point the real questions are architectural:&lt;/p>
&lt;ul>
&lt;li>Who is allowed to talk to whom?&lt;/li>
&lt;li>Which agent owns the source of truth?&lt;/li>
&lt;li>Which agent can make decisions?&lt;/li>
&lt;li>Which agent can call external tools?&lt;/li>
&lt;li>Which messages are trusted?&lt;/li>
&lt;li>Which state is canonical?&lt;/li>
&lt;li>Which actions are reversible?&lt;/li>
&lt;li>Which outputs are audited?&lt;/li>
&lt;/ul>
&lt;p>That is where system design starts to matter more than model capability.&lt;/p>
&lt;h2 id="a-super-agent-changes-the-topology">A super agent changes the topology
&lt;/h2>&lt;p>One way to cut this complexity is to introduce a super agent: an orchestrator, supervisor, coordinator, or controller.&lt;/p>
&lt;p>Instead of allowing every worker to talk directly to every other worker, the communication pattern becomes structured. Worker agents hand results upward, receive assignments downward, and interact through a coordinating layer.&lt;/p>
&lt;p>In a fully connected system:&lt;/p>
$$
C_c = \frac{n(n-1)}{2}
$$&lt;p>In a simple hub-and-spoke system, where each worker holds a single channel to the coordinator, the number of direct communication paths is closer to:&lt;/p>
$$
C_c \approx n
$$&lt;p>The first grows as:&lt;/p>
$$
O(n^2)
$$&lt;p>The second grows as:&lt;/p>
$$
O(n)
$$&lt;p>That is the gap between a topology that becomes dense very quickly and one that remains tractable as the system grows.&lt;/p>
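&lt;p>&lt;em>The gap is easy to see numerically. A small comparison sketch (function names are mine, not from any framework):&lt;/em>&lt;/p>

```python
def mesh_channels(n: int) -> int:
    """Fully connected: every agent pairs with every other agent."""
    return n * (n - 1) // 2

def star_channels(n: int) -> int:
    """Hub-and-spoke: each of n workers holds one channel to the coordinator."""
    return n

for n in (10, 100, 1000):
    print(f"{n:>5} agents: mesh={mesh_channels(n):>9,}  star={star_channels(n):>5,}")
```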
&lt;p>This is why orchestration matters. A super agent is a way to control coordination cost.&lt;/p>
&lt;h2 id="the-trade-off">The trade-off
&lt;/h2>&lt;p>The orchestrator pattern is not free.&lt;/p>
&lt;p>A super agent can become a bottleneck, a single point of failure, a latency amplifier, and the place where trust, policy, and context all concentrate. If the coordinator is wrong, overloaded, or poorly instrumented, the whole system pays for it.&lt;/p>
&lt;p>So the goal is not to put one boss agent above everything. The goal is to choose the right topology for the problem.&lt;/p>
&lt;p>Sometimes the right design is hierarchical. Sometimes it is event-driven. Sometimes it uses shared state. Sometimes strict role-based delegation is enough. And sometimes a small fully connected group is perfectly reasonable.&lt;/p>
&lt;p>The mistake is assuming that more agent-to-agent communication automatically means more intelligence. In practice it often means more coordination cost, more failure modes, and more ambiguity about authority.&lt;/p>
&lt;h2 id="the-architectural-lesson">The architectural lesson
&lt;/h2>&lt;p>Agentic AI is a systems problem as much as a model problem.&lt;/p>
&lt;p>The shape of the agent graph matters as much as the capability of any individual agent.&lt;/p>
&lt;p>A group of strong agents with poor topology can produce fragile, inconsistent, expensive, and hard-to-audit behavior. A group of simpler agents with clear boundaries, structured coordination, and well-defined authority can be much more reliable.&lt;/p>
&lt;p>Software engineers have seen this before:&lt;/p>
&lt;ul>
&lt;li>microservices&lt;/li>
&lt;li>distributed systems&lt;/li>
&lt;li>event-driven architectures&lt;/li>
&lt;li>organizational design&lt;/li>
&lt;li>API ecosystems&lt;/li>
&lt;li>large engineering teams&lt;/li>
&lt;/ul>
&lt;p>The topology matters. Authority boundaries matter. Observability matters. Failure handling matters.&lt;/p>
&lt;p>The same is true for AI agents.&lt;/p>
&lt;h2 id="three-points-worth-keeping-in-view">Three points worth keeping in view
&lt;/h2>&lt;h3 id="agent-count-is-not-system-capability">Agent count is not system capability
&lt;/h3>&lt;p>Adding more agents does not automatically make a system more intelligent. It may increase specialization, but it also increases coordination cost. Past a certain point, communication overhead becomes the dominant limit rather than model quality.&lt;/p>
&lt;h3 id="fully-connected-agent-networks-grow-quadratically">Fully connected agent networks grow quadratically
&lt;/h3>&lt;p>If every agent can talk to every other agent, the number of possible communication channels is:&lt;/p>
$$
\frac{n(n-1)}{2}
$$&lt;p>Doubling the agent count roughly quadruples the number of possible direct interactions. That is why unrestricted agent-to-agent communication becomes hard to reason about.&lt;/p>
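&lt;p>&lt;em>The &amp;ldquo;roughly quadruples&amp;rdquo; claim can be checked directly; the ratio sits just above 4 and tends to 4 as $n$ grows:&lt;/em>&lt;/p>

```python
def channels(n: int) -> int:
    """Unique pairwise channels among n fully connected agents."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    ratio = channels(2 * n) / channels(n)
    print(f"n={n}: doubling the agents multiplies channels by {ratio:.3f}")
```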
&lt;h3 id="scalable-agent-systems-need-structure">Scalable agent systems need structure
&lt;/h3>&lt;p>The next generation of agentic AI systems will not come from simply wiring together more agents. It will come from better architecture:&lt;/p>
&lt;ul>
&lt;li>orchestrators&lt;/li>
&lt;li>supervisors&lt;/li>
&lt;li>message buses&lt;/li>
&lt;li>shared memory&lt;/li>
&lt;li>role-based permissions&lt;/li>
&lt;li>tool boundaries&lt;/li>
&lt;li>audit trails&lt;/li>
&lt;li>state ownership&lt;/li>
&lt;li>validation layers&lt;/li>
&lt;li>recovery mechanisms&lt;/li>
&lt;/ul>
&lt;p>The intelligence of a multi-agent system depends on the agents, but it also depends on the structure connecting them.&lt;/p>
&lt;h2 id="closing-thought">Closing thought
&lt;/h2>&lt;p>A multi-agent system is a communication system. Communication systems have mathematics, and the math does not stay polite as the node count rises.&lt;/p>
$$
C_c = \frac{n(n-1)}{2}
$$&lt;p>is just a counting formula, but it is enough to expose the design risk.&lt;/p>
&lt;p>Fully connected multi-agent systems do not scale gracefully. When every agent can talk to every other agent, the number of possible interactions grows much faster than the number of agents.&lt;/p>
&lt;p>That affects latency, cost, auditability, debugging, safety, context management, security boundaries, tool permissions, state consistency, and recovery.&lt;/p>
&lt;p>Once those channels can trigger tools, change state, or make business decisions, they become control paths.&lt;/p>
&lt;p>Control paths need governance.&lt;/p></description></item></channel></rss>