Explainable Cognitive Multi-Agent AI for Joint Intention Modeling in Complex Task Planning
Abstract
This study addresses the problems of unstable cooperative behavior, insufficient cognitive structure, and limited interpretability that multi-agent systems exhibit under complex task conditions, and proposes an explainable cognitive planning framework. The framework constructs multi-level cognitive representations that map environmental states, historical interactions, and local observations into internal cognitive embeddings, and builds a joint intention representation to capture cross-agent cooperation and task dependencies. A consistency-alignment mechanism is introduced at the planning layer so that high-level cognitive goals impose structured constraints across agents and yield more coordinated low-level action strategies. To enhance system transparency, an interpretability module analyzes causal chains and shows how cognitive factors contribute to decision generation, forming a complete interpretable path from cognitive modeling through policy planning to behavior execution. Experimental results show that the method outperforms existing approaches in task success rate, long-term return, coordination efficiency, and explanation fidelity, and that it remains stable and robust under varying data scales, environmental disturbances, and task-distribution shifts. The study verifies the essential role of cognitive structure and interpretability in multi-agent cooperation and provides a unified perspective for integrating cognition and planning in intelligent systems operating in complex environments.
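The pipeline the abstract describes can be sketched in miniature: each agent encodes its state, history, and local observation into a cognitive embedding, the embeddings are pooled into a joint intention, and a consistency-alignment penalty measures how far each agent deviates from that shared intention. All function names, feature shapes, and the mean-pooling/squared-distance choices below are illustrative assumptions, not the paper's implementation.

```python
def cognitive_embedding(state, history, observation):
    """Toy encoder (assumed): concatenate the three feature groups and scale them."""
    features = state + history + observation  # list concatenation
    return [0.5 * x for x in features]

def joint_intention(embeddings):
    """Pool per-agent embeddings into one shared intention (mean pooling, assumed)."""
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]

def alignment_penalty(embeddings, intention):
    """Consistency term (assumed): squared distance of each agent from the intention."""
    return sum(
        (e[i] - intention[i]) ** 2
        for e in embeddings
        for i in range(len(intention))
    )

# Two agents, each with 2-dim state, history, and observation features.
agents = [
    ([1.0, 0.0], [0.2, 0.2], [0.0, 1.0]),
    ([0.0, 1.0], [0.2, 0.2], [1.0, 0.0]),
]
embeddings = [cognitive_embedding(s, h, o) for s, h, o in agents]
intention = joint_intention(embeddings)
penalty = alignment_penalty(embeddings, intention)
```

In a full system the penalty would act as a training-time constraint so that high-level cognitive goals shape low-level action policies; here it simply quantifies cross-agent disagreement.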