TacoMAS co-evolves LLM agent topology and skills during inference, gaining 13.3% over baselines
Researchers introduce TacoMAS, a multi-agent system that jointly adapts communication topology and agent capabilities at test time, achieving a 13.3% average improvement over the strongest baseline.

TacoMAS, a test-time co-evolution framework for LLM-based multi-agent systems, jointly adapts both communication topology and individual agent capabilities during inference. The framework treats multi-agent inference as an online graph adaptation problem, where nodes represent agents with role-specific expertise and edges define how they communicate. Unlike prior work that freezes topology at inference time or adapts only one dimension, TacoMAS updates both axes on different time scales: a fast capability loop refines agent expertise using trajectory-level feedback, while a slower meta-LLM-driven topology loop performs structural changes including edge edits, agent additions, and agent removals.
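The graph view described above can be made concrete with a minimal sketch. This is an illustrative data structure, not the paper's implementation: the class and method names (`Agent`, `AgentGraph`, `edit_edge`) are hypothetical, chosen to mirror the structural operations the summary mentions (edge edits, agent additions, agent removals).

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Node in the multi-agent graph: a role plus evolving expertise
    (the fast capability loop would refine `expertise` from feedback)."""
    name: str
    role: str
    expertise: str = ""

@dataclass
class AgentGraph:
    """Directed communication topology over agents (hypothetical sketch)."""
    agents: dict = field(default_factory=dict)   # name -> Agent
    edges: set = field(default_factory=set)      # (sender, receiver) pairs

    def add_agent(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def remove_agent(self, name: str) -> None:
        # Drop the node and any edges touching it, keeping the graph consistent.
        self.agents.pop(name, None)
        self.edges = {(s, r) for (s, r) in self.edges if name not in (s, r)}

    def edit_edge(self, sender: str, receiver: str, add: bool = True) -> None:
        # The slow topology loop would apply edits like this one.
        if add:
            self.edges.add((sender, receiver))
        else:
            self.edges.discard((sender, receiver))

g = AgentGraph()
g.add_agent(Agent("planner", "decompose the task"))
g.add_agent(Agent("coder", "write code"))
g.edit_edge("planner", "coder")
g.remove_agent("coder")
print(sorted(g.agents), sorted(g.edges))  # → ['planner'] []
```

Keeping edges keyed by agent name makes structural edits cheap, which matters when a meta-LLM proposes them online during inference.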
The dual-loop design drives the system toward a task-conditioned stable equilibrium. The fast loop handles emerging subtasks by rapidly updating agent capabilities, while the slow loop preserves coordination stability by evolving the communication structure more gradually. The researchers provide both empirical and theoretical evidence that effective test-time evolution requires this joint adaptation rather than optimizing topology or capability in isolation.
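The two-timescale scheduling can be sketched as follows. This is a hedged illustration of the dual-loop idea only; the function name `co_evolve` and the fixed `topology_period` schedule are assumptions, not the paper's actual algorithm, and the real slow loop is driven by a meta-LLM rather than a fixed period.

```python
def co_evolve(steps: int = 6, topology_period: int = 3) -> list:
    """Illustrative two-timescale schedule: the fast capability loop fires
    every step, the slow topology loop only every `topology_period` steps."""
    events = []
    for t in range(1, steps + 1):
        # Fast loop: refine agent expertise from trajectory-level feedback.
        events.append(("capability", t))
        if t % topology_period == 0:
            # Slow loop: structural edit (edge change, agent add/remove).
            events.append(("topology", t))
    return events

print(co_evolve())  # capability fires 6 times; topology fires at t=3 and t=6
```

Separating the two frequencies captures the stability argument: frequent capability updates track emerging subtasks, while infrequent structural edits avoid thrashing the coordination pattern.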
Tested on four benchmarks against nearly 20 multi-agent baselines, TacoMAS achieved an average improvement of 13.3% over the strongest baseline. Code is available on GitHub at chenxu2-gif/TacoMAS-MultiAgent.