
Structured Knowledge Integration and Memory Modeling in Large Language Systems

Abstract

This study addresses the limitations of current large language models in long-term dependency retention and structured knowledge modeling. A fine-tuning algorithm that integrates memory networks with perception graph mechanisms is proposed to improve performance on multi-hop reasoning and complex semantic understanding tasks. The method introduces a dynamic, readable and writable external memory module that stores and retrieves historical semantic information, alleviating forgetting in long-text processing. In parallel, a perception graph is constructed to represent multi-dimensional relations among entities, and a graph neural network encodes its structure, enabling deep integration between structured knowledge and the language model's semantic space. Experiments are conducted on the HotpotQA dataset, covering samples of varying reasoning difficulty. Results show that the enhanced model outperforms the baseline in F1 score, semantic consistency, and reasoning stability, confirming the effectiveness of the proposed fusion mechanism on complex language tasks. Further comparative experiments examine how different graph neural network architectures affect graph encoding performance, highlighting the critical role of architecture choice in the fusion mechanism. This study provides a technical approach to enhancing knowledge retention and multi-level semantic modeling in large language models.
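
The abstract describes the fusion mechanism only at a high level. For concreteness, the following is a minimal PyTorch sketch of how its three stated ingredients might fit together: an external memory with attention-based reads and gradient-free gated writes, a graph neural network over entity relations, and a gated residual fusion with the language model's hidden states. All names (ExternalMemory, SimpleGCN, MemoryGraphFusion, num_slots) and design details are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExternalMemory(nn.Module):
    """Slot-based memory bank: attention read, gradient-free gated write."""

    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_slots, dim))

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim) -> scaled dot-product attention over slots
        scores = query @ self.memory.t() / self.memory.size(-1) ** 0.5
        return scores.softmax(dim=-1) @ self.memory  # (batch, dim)

    @torch.no_grad()
    def write(self, value: torch.Tensor) -> None:
        # Write each item into its most similar slot via a moving average.
        slot = (value @ self.memory.t()).argmax(dim=-1)
        self.memory[slot] = 0.9 * self.memory[slot] + 0.1 * value


class SimpleGCN(nn.Module):
    """One mean-aggregation graph convolution over the perception graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (n, n) adjacency with self-loops; row-normalise, then project
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj / deg) @ nodes))


class MemoryGraphFusion(nn.Module):
    """Gated fusion of LM hidden states, memory readout, and graph summary."""

    def __init__(self, dim: int, num_slots: int = 64):
        super().__init__()
        self.memory = ExternalMemory(num_slots, dim)
        self.gcn = SimpleGCN(dim)
        self.gate = nn.Linear(3 * dim, dim)
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, hidden, nodes, adj):
        # hidden: (batch, dim) pooled representation from the language model
        mem = self.memory.read(hidden)
        graph = self.gcn(nodes, adj).mean(dim=0, keepdim=True).expand_as(hidden)
        cat = torch.cat([hidden, mem, graph], dim=-1)
        fused = hidden + torch.sigmoid(self.gate(cat)) * torch.tanh(self.proj(cat))
        self.memory.write(hidden.detach())  # update memory outside autograd
        return fused


# Toy usage: 4 pooled LM states, a 10-entity perception graph
dim = 768
fusion = MemoryGraphFusion(dim)
hidden = torch.randn(4, dim)
nodes = torch.randn(10, dim)                                # entity embeddings
adj = ((torch.rand(10, 10) > 0.7).float() + torch.eye(10)).clamp(max=1.0)
out = fusion(hidden, nodes, adj)                            # (4, dim)
```

The gated residual update is one common wiring for such modules: it leaves the base model's representation intact when neither the memory readout nor the graph summary is informative. The paper's own formulation may differ.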
