As artificial intelligence (AI) systems become more deeply embedded in industries such as finance, healthcare, insurance, and criminal justice, the call for explainability grows louder. These regulated sectors are not only risk-averse; they are legally obligated to understand and justify decisions, especially when those decisions directly affect people’s lives. October 2019 marked a pivotal moment in this dialogue, as industry leaders, regulators, and developers grappled with a critical question: how do we build AI systems that are both powerful and explainable?

The Imperative for Explainability

At the core of regulated industries lies a principle of accountability. If a bank denies a loan, a hospital recommends a treatment, or a government flags a citizen for review, someone must be able to explain why. Traditional software followed explicit, traceable rules, but modern AI models, particularly deep learning systems, often operate as “black boxes” whose internal reasoning is inscrutable even to their creators.

The stakes are high. Consider the widely reported case of a healthcare algorithm that exhibited racial bias in predicting which patients needed extra care, or the growing concern over AI used in pretrial risk assessments, where lack of transparency has led to legal challenges. Explainability isn’t just a best practice; it’s a regulatory and ethical necessity.

The Regulatory Landscape

In 2019, regulators began sharpening their focus on AI transparency:

European Union: The General Data Protection Regulation (GDPR) restricts solely automated decision-making and requires “meaningful information about the logic involved,” a provision widely described as a “right to explanation” that pushes organizations to account for algorithmic decisions.

United States: The Federal Trade Commission (FTC) and sector-specific regulators such as the FDA and OCC have issued guidance on fairness and accountability in automated systems.

Industry Frameworks: Groups like IEEE and ISO are working on global standards for algorithmic transparency and risk management.

These developments signaled to enterprises that proactive investment in explainable AI (XAI) was no longer optional; binding expectations were on the way.

Approaches to Explainability

While the term “explainable AI” covers a broad set of techniques, they generally fall into two camps:

Intrinsic Interpretability: Some models are inherently easier to understand. Linear regressions, decision trees, and rule-based systems offer transparency by design. The tradeoff? They often underperform compared to more complex models.

Post-Hoc Explainability: For complex models like neural networks, tools like LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) offer insights into predictions. These techniques generate approximations of a model’s behavior that humans can interpret.
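
To make both camps concrete, here is a minimal sketch, assuming scikit-learn and the shap package are available: it prints the rules of a shallow decision tree (interpretable by design) and then uses SHAP’s TreeExplainer to attribute one prediction of a gradient-boosted model to its input features. The dataset and model choices are illustrative only, not a recommendation.

```python
# Minimal sketch: intrinsic interpretability vs. post-hoc explanation.
# Assumes scikit-learn and the shap package are installed; the dataset
# and models are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
import shap

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Camp 1: a shallow decision tree whose decision rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))

# Camp 2: a higher-capacity model explained after the fact with SHAP.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_test)  # per-feature contributions

# Rank the features that contributed most to the first test prediction.
contributions = sorted(
    zip(data.feature_names, shap_values[0]),
    key=lambda t: abs(t[1]), reverse=True)
for name, value in contributions[:5]:
    print(f"{name}: {value:+.3f}")
```

In a regulated setting, per-feature attributions like these can then be translated into plain-language reason codes before they reach a customer or an auditor.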

Other promising methods emerging in 2019 included counterfactual explanations, attention mechanisms in NLP, and model visualization tools. While none offer perfect transparency, they represent crucial steps forward.
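
Counterfactual explanations answer the question an affected person usually asks: what would have needed to be different for the decision to change? The sketch below is a deliberately naive, brute-force version, assuming a fitted scikit-learn-style classifier over numeric features; the function name, the synthetic data, and the one-feature-at-a-time search are illustrative simplifications.

```python
# Naive counterfactual search: change one feature at a time and report
# the smallest single-feature change that flips the model's prediction.
# Illustrative only; real counterfactual methods also enforce plausibility.
import numpy as np
from sklearn.linear_model import LogisticRegression

def simple_counterfactual(model, x, feature_grids):
    """x: 1-D array for one instance; feature_grids maps a feature index
    to the candidate values to try for that feature."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None  # (size of change, feature index, new value)
    for i, grid in feature_grids.items():
        for v in grid:
            candidate = x.copy()
            candidate[i] = v
            if model.predict(candidate.reshape(1, -1))[0] != original:
                change = abs(v - x[i])
                if best is None or change < best[0]:
                    best = (change, i, v)
    return best  # None if no single-feature change flips the decision

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)
grids = {0: np.linspace(-3, 3, 61), 1: np.linspace(-3, 3, 61)}
print(simple_counterfactual(model, X[0], grids))
```

The result reads as a statement of the form “the prediction would have flipped if this feature had been v instead of its current value,” which is often more actionable for a customer than a global feature-importance chart.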

Challenges in Implementation

Despite growing interest, many enterprises faced practical barriers to XAI:

Technical Limitations: High-performing models (e.g., ensembles, deep neural networks) owe much of their accuracy to complexity that resists direct inspection; simplifying them enough to explain usually means giving up predictive power.

Lack of Standards: In 2019, organizations were still grappling with fragmented definitions and inconsistent metrics for explainability.

Organizational Silos: Data scientists, legal teams, compliance officers, and business units often spoke different languages, making it hard to implement cohesive AI governance.

Tool Immaturity: Open-source libraries for XAI were just beginning to mature, and most were designed with data scientists in mind—not auditors or executives.

Best Practices for Building Explainable AI

To meet regulatory expectations and build trust with users, organizations in 2019 began embracing several best practices:

Model Choice Matters: Start with interpretable models when possible, especially in high-stakes use cases.

Document Everything: Maintain thorough model documentation including assumptions, training data provenance, and known biases.

Build with the End User in Mind: Consider who needs to understand the model (e.g., customers, regulators, internal reviewers) and tailor explanations accordingly.

Audit Early and Often: Regularly assess models for fairness, bias, and performance drift (a minimal sketch of two such checks follows this list).

Cross-Functional Collaboration: Create governance teams that bridge technical, legal, and ethical perspectives.
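
As a sketch of what “audit early and often” can look like in code, the two functions below compute a demographic parity gap (a simple group-fairness check) and a population stability index (a common score-drift check). The function names, the rough 0.2 drift rule of thumb, and the synthetic data are assumptions for illustration, not regulatory guidance.

```python
# Two routine audit checks, assuming binary predictions, a binary
# protected attribute, and model scores from a reference period and a
# current period. Illustrative only.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def population_stability_index(reference, current, bins=10):
    """PSI between a reference score distribution and a current one;
    values above roughly 0.2 are commonly read as meaningful drift."""
    # Bin edges come from the reference period; current scores outside
    # that range are ignored in this simplified version.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
print(demographic_parity_difference(rng.integers(0, 2, 1000),
                                    rng.integers(0, 2, 1000)))
print(population_stability_index(rng.normal(size=1000),
                                 rng.normal(0.3, 1.0, 1000)))
```

Checks like these only help if they run on a schedule and their results reach the cross-functional governance team described above.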

Looking Ahead

By late 2019, the writing was on the wall: AI explainability would be a foundational requirement for enterprise adoption. The trend lines pointed toward increased regulation, growing consumer awareness, and new technical innovations. While full transparency remained elusive—particularly for deep learning models—the urgency to build trustworthy AI was clearly gaining momentum.

Organizations that began laying the groundwork in 2019 were setting themselves up not just for compliance, but for leadership in an AI-powered future.

Conclusion

Explainability is not a technical checkbox—it is a business necessity. Especially in regulated industries, transparency is the linchpin of accountability, trust, and long-term viability. As we reflect on October 2019, it’s clear this was a turning point in how we think about responsible AI. The challenge now is to turn that thinking into action.

Next in the Series
November 2019: Hybrid Cloud Strategy: Balancing Flexibility, Control, and Cost