AgentArk: Turning Multi-Agent Debate into Single-Agent Capabilities via Hierarchical Distillation
AgentArk distills multi-agent debate into a single LLM via three hierarchical distillation strategies, shifting computation from inference to training, which cuts inference cost while preserving reasoning quality.
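The abstract does not specify the paper's distillation objectives, but the core building block of any such pipeline is a standard knowledge-distillation loss: the student is trained to match the teacher's (here, the debate ensemble's) token distributions. The sketch below shows this generic objective in PyTorch; the function name `distillation_loss` and the temperature value are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    token distributions -- the standard knowledge-distillation objective.
    This is a generic sketch, not AgentArk's specific loss."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy example: 4 token positions over a 10-token vocabulary.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
loss = distillation_loss(student, teacher)
```

In a debate-distillation setting, `teacher_logits` would come from the aggregated multi-agent debate transcript, so the expensive multi-model computation is paid once at training time rather than on every inference call.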