
LLM Retrieval-Augmented Generation with Compositional Prompts and Confidence Calibration

Abstract

This study investigates the reliability of Retrieval-Augmented Generation (RAG) in complex, dynamic knowledge settings and proposes a compositional retrieval-prompting framework with gated knowledge injection and layered confidence calibration. The method first performs semantic parsing and prompt decomposition, transforming complex queries into structured expressions of sub-intents and logical operators that provide a clear planning path for the retrieval stage. During knowledge injection, gating and filtering mechanisms suppress noisy fragments and strengthen the relevance and controllability of the evidence, so that retrieval results align more closely with the needs of the generation model. During generation, the model applies multi-granularity evidence fusion to refine answers and, within a layered confidence calibration framework, quantifies uncertainty on both the retrieval and generation sides, ensuring traceable and consistent outputs. Systematic experiments on hyperparameter sensitivity, environmental constraints, and data transfer show that the framework is robust and stable across scenarios, significantly improving answer accuracy, factual consistency, and attribution under complex task conditions, and offering an effective solution for knowledge-intensive generation tasks.
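The gating and layered-calibration ideas in the abstract can be sketched minimally as follows. This is an illustrative reading, not the paper's implementation: the passage structure, the relevance threshold, and the weighting parameter `alpha` are all assumptions introduced for the example.

```python
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    relevance: float  # retriever similarity score in [0, 1] (assumed scale)


def gate_passages(passages, threshold=0.5):
    """Gated knowledge injection: keep only passages whose retrieval
    relevance clears the threshold, suppressing noisy fragments."""
    return [p for p in passages if p.relevance >= threshold]


def layered_confidence(retrieval_scores, generation_score, alpha=0.6):
    """Layered calibration sketch: combine retrieval-side evidence
    quality (mean relevance of gated passages) with the generator's
    own confidence via a simple weighted average."""
    if not retrieval_scores:
        return 0.0
    retrieval_conf = sum(retrieval_scores) / len(retrieval_scores)
    return alpha * retrieval_conf + (1 - alpha) * generation_score


# Toy example: one noisy fragment is filtered out before fusion.
passages = [
    Passage("relevant evidence", 0.9),
    Passage("noisy fragment", 0.2),
    Passage("supporting detail", 0.7),
]
kept = gate_passages(passages, threshold=0.5)
conf = layered_confidence([p.relevance for p in kept], generation_score=0.8)
```

In a full system, the retrieval-side score would come from the retriever's similarity model and the generation-side score from token-level likelihoods or a learned calibrator; the linear fusion here only illustrates how the two layers are combined into one traceable confidence value.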
