Explainable artificial intelligence
Explainable artificial intelligence (XAI) refers to methods and practices that make the behavior and outputs of AI systems understandable to humans. It encompasses both inherently interpretable models and post‑hoc explanation techniques for opaque ("black‑box") models, and is closely linked to trust, accountability, safety, and regulatory compliance. Public agencies and standards bodies have issued principles and requirements for explainability in high‑stakes applications such as credit, healthcare, and public services.
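As an illustration of a post‑hoc explanation technique, the following is a minimal sketch of permutation feature importance, one common model‑agnostic method: a feature is deemed important if shuffling its values degrades the model's predictions. The toy data, the linear "model", and all helper names here are illustrative, not part of any particular XAI library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0 and not on feature 1.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# A fitted "black-box" stand-in: least-squares linear regression.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Post-hoc, model-agnostic importance: the average increase in
    prediction error when one feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = mse(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target link
            deltas.append(mse(y, predict(Xp)) - baseline)
        importances.append(float(np.mean(deltas)))
    return importances

imp = permutation_importance(predict, X, y)
# imp[0] (the informative feature) is much larger than imp[1].
```

Because the method only queries the model through its predictions, it applies equally to linear models, tree ensembles, or neural networks, which is what makes it "post‑hoc": the explanation is computed after training, without inspecting the model's internals.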