Explainable AI (XAI) has evolved rapidly from a research-focused endeavor to a critical requirement for deploying machine learning (ML) in regulated industries. As financial institutions and healthcare providers increasingly rely on ML models to drive decisions, questions around fairness, accountability, and transparency have taken on new urgency. In 2020, mounting regulatory scrutiny and growing awareness of algorithmic bias are compelling organizations to move explainability from an academic curiosity to a production-grade necessity.

The Growing Importance of XAI in Regulated Sectors

Recent high-profile regulatory proposals, such as the EU's draft AI regulations released for consultation in early 2020, along with guidance from U.S. agencies including the FDA and CFPB, emphasize the importance of explainability for AI systems that affect human wellbeing or financial access. Organizations must ensure that model decisions can be interpreted not only by data scientists but also by regulators, auditors, and impacted stakeholders.

In the finance sector, XAI supports compliance with fair lending laws such as the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act. For example, when an AI model denies a loan, financial institutions are generally required to give the applicant the specific reasons for that adverse action. Similarly, in healthcare, black-box models must often be translated into actionable, understandable outputs for medical professionals and patients alike.

From Research Labs to Real-World Pipelines

Much of the early work in XAI emerged from academic settings and focused on explaining deep learning models through model-agnostic techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). As of mid-2020, these tools are becoming standard components of enterprise ML pipelines.
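To make that concrete, here is a minimal sketch of how SHAP might slot into such a pipeline. The synthetic data, feature names, and choice of an XGBoost classifier are hypothetical stand-ins for illustration, not a prescribed setup.

```python
# Minimal sketch: explaining a gradient-boosted tabular model with SHAP.
# The synthetic data and XGBoost model below are hypothetical stand-ins.
import pandas as pd
import shap
import xgboost
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgboost.XGBClassifier().fit(X_train, y_train)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features most influence predictions across the test set.
shap.summary_plot(shap_values, X_test)

# Local view: per-feature contributions to a single prediction.
shap.force_plot(explainer.expected_value, shap_values[0, :], X_test.iloc[0, :],
                matplotlib=True)
```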

A growing ecosystem of open-source libraries, including IBM's AI Explainability 360 toolkit, Microsoft's InterpretML, and Google's What-If Tool, is designed to bring explainability into real-world production systems. These tools help practitioners understand how inputs influence outputs, identify the features driving predictions, and assess fairness across demographic groups.
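As one example of what these libraries look like in practice, the sketch below uses InterpretML's glassbox route: an Explainable Boosting Machine (EBM) that is interpretable by construction. It reuses the hypothetical train/test split from the SHAP sketch above and is illustrative only, not an endorsement of any particular tool.

```python
# Sketch of InterpretML's glassbox approach: an Explainable Boosting Machine.
# Assumes the hypothetical X_train/X_test split from the SHAP sketch above.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: per-feature shape functions and overall importances.
show(ebm.explain_global())

# Local explanation: per-feature contributions for a handful of predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```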

Challenges to Operationalizing Explainability

Despite the momentum, several challenges remain in operationalizing XAI:

Trade-off with Model Performance: Simpler, more interpretable models such as decision trees often underperform deep learning models on complex tasks. Balancing accuracy with transparency requires careful engineering and stakeholder collaboration.

Audience-Appropriate Explanations: Different audiences—regulators, developers, end-users—require different levels of detail. Building adaptive explanations that cater to these personas is an open challenge.

Standardization and Benchmarks: There’s a lack of industry-wide benchmarks or standards for evaluating the quality of explanations. This impedes consistent regulatory interpretation and comparison.

Case Studies from Mid-2020

Credit Risk Modeling: Capital One and JPMorgan Chase are among the financial institutions investing in explainable ML frameworks. By mid-2020, JPMorgan had begun using SHAP values in production credit risk models to satisfy internal compliance audits and provide traceability (an illustrative sketch of this kind of workflow follows these case studies).

COVID-19 Triage Tools: Hospitals experimenting with AI triage tools during the pandemic faced immediate pressure to ensure model outputs could be explained to physicians. Research from Mount Sinai Hospital (NYC) and others highlighted the need for transparency in AI-assisted diagnostics.

AI Regulation Initiatives: The U.S. Department of Health and Human Services’ Office for Civil Rights began evaluating AI transparency in clinical decision support tools, signaling a shift in expectations around explainability in patient-facing technologies.
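As promised above, here is an illustration only (not a description of any bank's actual pipeline) of how per-applicant SHAP values could be mapped to candidate reasons behind a denial, the kind of traceability adverse-action reporting calls for. The helper name, the assumption that a higher model score means "deny", and the choice of three reasons are all assumptions.

```python
# Illustrative only: mapping one applicant's SHAP values to candidate
# "reason codes" for an adverse-action notice. Assumes the `explainer`
# from the earlier SHAP sketch; the sign convention (higher score = deny)
# and the helper name are hypothetical.
import numpy as np
import pandas as pd

def top_denial_reasons(explainer, applicant: pd.Series, n_reasons: int = 3):
    """Return the features pushing this applicant most toward denial."""
    contributions = explainer.shap_values(applicant.to_frame().T)[0]
    # Largest positive contributions push the score toward "deny".
    order = np.argsort(contributions)[::-1][:n_reasons]
    return [(applicant.index[i], float(contributions[i])) for i in order]

# e.g. top_denial_reasons(explainer, X_test.iloc[0])
```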

Looking Ahead

To navigate evolving regulatory expectations and build public trust, XAI will need to be integrated not just at the model level, but throughout the ML lifecycle—from dataset curation to post-deployment monitoring. This integration demands new tooling, training, and cross-functional collaboration between compliance, engineering, and executive leadership.

For organizations in finance and health, the mandate is clear: if AI is to be a decision-maker, it must also be an explainer.

Next in the Series

July 2020: AutoML Matures: Democratizing Model Development Without Sacrificing Control
