Explainable AI for Credit Risk Scoring on Loan Platforms

Abstract
This study proposes an explainable machine learning framework for credit risk assessment on U.S. peer-to-peer lending platforms. By combining XGBoost with SHAP (SHapley Additive exPlanations), the model delivers high predictive accuracy while providing transparent, individualized explanations that align with regulatory requirements under the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA). Using real LendingClub data, we demonstrate that SHAP identifies key risk factors such as loan grade, interest rate, and debt-to-income ratio, and provides local, case-level insight into decision rationales. Extensive experiments show that the proposed model outperforms traditional baselines in both classification performance and explanation fidelity. A fairness evaluation reveals subgroup-level variation in feature importance, underscoring the need for regular bias audits. The findings demonstrate the feasibility of deploying interpretable and compliant AI systems in consumer lending, offering actionable insights for regulators, developers, and credit analysts.
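
To make the described pipeline concrete, the following minimal Python sketch illustrates the general XGBoost + SHAP pattern the abstract refers to: train a gradient-boosted classifier and decompose a single prediction into per-feature SHAP contributions. This is not the authors' code; the feature names (loan_grade, int_rate, dti, annual_inc), the synthetic data, and all hyperparameters are illustrative assumptions, while the xgboost and shap calls are standard library APIs.

    # Minimal sketch (assumed setup, not the study's implementation).
    import numpy as np
    import pandas as pd
    import xgboost as xgb
    import shap

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical LendingClub-style features; the real study uses platform data.
    X = pd.DataFrame({
        "loan_grade": rng.integers(1, 8, n),     # grade A=1 ... G=7
        "int_rate": rng.uniform(5.0, 30.0, n),   # interest rate (%)
        "dti": rng.uniform(0.0, 40.0, n),        # debt-to-income ratio
        "annual_inc": rng.uniform(2e4, 2e5, n),  # annual income ($)
    })
    # Synthetic default label loosely tied to rate and DTI, for illustration only.
    y = (0.03 * X["int_rate"] + 0.02 * X["dti"]
         + rng.normal(0, 0.5, n) > 1.2).astype(int)

    model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    model.fit(X, y)

    # TreeExplainer returns per-feature SHAP contributions for each applicant,
    # i.e., an additive decomposition of the model's log-odds output.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Local explanation for one applicant: which features pushed predicted
    # risk up or down, and by how much.
    i = 0
    for name, contrib in sorted(zip(X.columns, shap_values[i]),
                                key=lambda t: abs(t[1]), reverse=True):
        print(f"{name:>12s}: {contrib:+.3f}")

In this style of output, positive contributions raise the predicted default risk and negative ones lower it, which is the kind of individualized, auditable rationale the abstract ties to ECOA/FCRA adverse-action explanation requirements.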