Task-Aware Differential Privacy and Modular Structural Perturbation for Secure Fine-Tuning of Large Language Models

Abstract

This paper addresses the risk of privacy leakage during the fine-tuning of large language models in sensitive scenarios by proposing a differential privacy mechanism that integrates task-aware perturbation with modular structural injection. The mechanism consists of two components: Task-aware Differentially Private Fine-tuning (TDPF) and Modular Privacy-aware Injection (MPI). TDPF dynamically adjusts the intensity of gradient perturbation based on semantic sensitivity scoring, guiding the model to adaptively optimize its update path under differential privacy constraints. MPI injects structured noise into key substructures of the model and uses modulation factors to precisely control the perturbation intensity of each module, enhancing semantic consistency while maintaining structural stability. Systematic experiments evaluate the proposed method across multiple dimensions, including privacy-budget sensitivity, injection frequency, and modulation strength. The results show that the method significantly improves multi-task adaptability and the integrity of semantic representations while remaining efficient in its use of the privacy budget. It effectively alleviates the performance-structure conflict inherent in traditional differential privacy strategies, demonstrating advantages in structural friendliness, controllable performance, and robust privacy protection.
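The abstract does not give the authors' formulas, but the interaction of the two components can be sketched informally. In the sketch below, every name, the noise schedule, and the per-module modulation factors are illustrative assumptions, not the paper's implementation: each module's gradient is clipped as in standard DP-SGD, then Gaussian noise is added whose scale grows with a semantic sensitivity score (TDPF-style) and is rescaled by a module-specific modulation factor (MPI-style).

```python
import numpy as np

def dp_perturb(grad, sensitivity_score, modulation, clip_norm=1.0,
               base_sigma=1.0, rng=None):
    """Illustrative sketch (not the paper's code): clip a gradient, then add
    Gaussian noise scaled by a semantic sensitivity score (TDPF-style) and a
    per-module modulation factor (MPI-style)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Standard DP-SGD norm clipping.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Hypothetical schedule: noise grows with sensitivity, rescaled per module.
    sigma = base_sigma * clip_norm * sensitivity_score * modulation
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

# Assumed per-module modulation: perturb attention more gently than the head.
module_grads = {"attention": np.ones(4) * 10.0, "lm_head": np.ones(4) * 2.0}
modulation = {"attention": 0.5, "lm_head": 1.0}
noisy = {name: dp_perturb(g, sensitivity_score=0.8, modulation=modulation[name])
         for name, g in module_grads.items()}
```

The key design point this sketch tries to capture is that a single global noise multiplier is replaced by two data-dependent factors, so that sensitive inputs and fragile substructures can be treated differently under one privacy budget.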
