Long Text Classification with Large Language Models via Dynamic Memory and Compression Mechanisms
Abstract
Long text classification poses significant challenges due to semantic redundancy, input length limitations, and the difficulty of capturing global dependencies. This paper proposes a novel framework that integrates dynamic memory and compression mechanisms into large language models to address these issues. The approach introduces a dynamic memory unit that selectively stores and updates essential information during training, while a low-rank compression module reduces redundancy and computational overhead without sacrificing semantic integrity. To verify the effectiveness of the proposed method, we conduct comparative evaluations together with hyperparameter, environment, and data sensitivity analyses. The results show that the model outperforms existing baselines, demonstrating its adaptability and robustness under varying conditions. In particular, the framework balances local and global semantic representations in long texts, achieving both efficiency and accuracy. Analysis of influencing factors such as learning rate, hidden dimension size, sample scale, and noise ratio provides systematic insight into the model's internal behavior. These findings confirm that the proposed design not only improves classification performance but also enhances stability on large-scale and complex text data, thereby offering a reliable solution for long text classification tasks.
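The abstract names two components, a dynamic memory unit and a low-rank compression module, without specifying their exact form. The sketch below is a minimal, hypothetical PyTorch rendering of how such components might be wired together for long-text classification; the class names, the gated slot-update rule, the attention-based read, the slot count, and the rank are all illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of the two mechanisms named in the abstract; all names,
# dimensions, and update rules are assumptions for illustration only.
import torch
import torch.nn as nn


class DynamicMemoryUnit(nn.Module):
    """Fixed-size memory bank with a gated write step (assumed update rule)."""

    def __init__(self, hidden_dim: int, num_slots: int = 32):
        super().__init__()
        self.memory = nn.Parameter(torch.zeros(num_slots, hidden_dim))
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)

    def forward(self, chunk_repr: torch.Tensor) -> torch.Tensor:
        # chunk_repr: (batch, seq_len, hidden_dim) encoding of one text chunk
        batch = chunk_repr.size(0)
        mem = self.memory.unsqueeze(0).expand(batch, -1, -1)
        # Read: memory slots attend over the current chunk.
        read, _ = self.attn(mem, chunk_repr, chunk_repr)
        # Write: a sigmoid gate decides how much of the read result updates each slot.
        g = torch.sigmoid(self.gate(torch.cat([mem, read], dim=-1)))
        return g * read + (1 - g) * mem  # updated memory state


class LowRankCompression(nn.Module):
    """Down-projects hidden states to rank r and back, discarding redundant
    directions (assumed form of the compression module)."""

    def __init__(self, hidden_dim: int, rank: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, hidden_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


if __name__ == "__main__":
    hidden = 768
    chunk = torch.randn(2, 128, hidden)                 # one encoded chunk of a long document
    memory = DynamicMemoryUnit(hidden)(chunk)           # (2, 32, 768) updated memory
    compressed = LowRankCompression(hidden)(memory)     # same shape, reduced-rank content
    logits = nn.Linear(hidden, 5)(compressed.mean(dim=1))  # pooled memory -> class logits
    print(logits.shape)                                 # torch.Size([2, 5])
```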