Structural Regularization and Bias Mitigation in Low-Rank Fine-Tuning of LLMs
Abstract
This paper proposes an efficient fine-tuning algorithm that integrates low-rank structures with a bias-aware mechanism to address structural redundancy and semantic bias in large language models. The method freezes the pretrained model parameters and injects trainable low-rank matrices, combining them with semantic bias embeddings and a structural alignment regularizer to identify and suppress potential bias in the representation space. A multi-dimensional loss function constrains the impact of bias, maintains generation consistency, and enhances the structural stability of shared multitask representations. The experimental design covers diverse test scenarios, including task perturbation, noise injection, and variations in sampling frequency, systematically evaluating semantic stability, bias detection, and generalization performance. Results show that the proposed method significantly improves bias perception and output fairness while maintaining parameter efficiency, outperforming existing low-rank fine-tuning approaches across multiple metrics. This study establishes a unified optimization pathway for task adaptation and bias control from both structural and semantic perspectives, enhancing the stability and adaptability of large language models in complex environments.
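The two core ingredients the abstract names, a frozen weight augmented by a trainable low-rank update and a regularizer that penalizes alignment with bias directions, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the matrix names (`W0`, `A`, `B`), the zero initialization of `B`, the choice of a squared-projection penalty, and the `bias_dirs` directions are all illustrative assumptions standing in for the paper's semantic bias embeddings and structural alignment regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 2  # rank r << d, the low-rank bottleneck

W0 = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection (zero init)

def adapted_forward(x):
    # Effective weight is W0 + B @ A; during fine-tuning only A and B
    # would receive gradients, so the pretrained W0 stays untouched.
    return x @ (W0 + B @ A).T

def bias_alignment_penalty(h, bias_dirs, lam=0.1):
    # Illustrative stand-in for the structural alignment regularizer:
    # penalize the squared projection of hidden states h onto assumed
    # (unit-norm) bias directions in the representation space.
    proj = h @ bias_dirs.T                   # shape (batch, n_bias_dirs)
    return lam * float(np.mean(proj ** 2))

x = rng.normal(size=(4, d_in))
h = adapted_forward(x)

bias_dirs = rng.normal(size=(1, d_out))
bias_dirs /= np.linalg.norm(bias_dirs)       # normalize the bias direction

loss_reg = bias_alignment_penalty(h, bias_dirs)
print(loss_reg >= 0.0)
```

Because `B` is zero-initialized, the adapted layer reproduces the frozen model exactly at the start of fine-tuning; the bias penalty would be added to the task loss as one term of the paper's multi-dimensional objective.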