The EU AI Act’s phased rollout has moved from abstract regulatory threat to concrete compliance work. The first set of prohibitions took effect in February 2025. General-purpose AI obligations land in August 2025. High-risk systems face their own timelines running into 2026 and 2027. For enterprises operating in or selling into the EU, and for their vendors, 2025 is the year “we should probably look at this” became “we need a program.”

What that program actually looks like, and how it extends beyond the EU, is the question consuming most serious compliance conversations this spring.

The Scope Question Nobody Loves

The first uncomfortable discovery for most enterprises is that the Act’s scope is broader than much of the early coverage suggested. It applies extraterritorially: a US company whose AI system produces output used in the EU is in scope, regardless of where the model is hosted. It covers providers, deployers, importers, and distributors, each with overlapping but distinct obligations. And the definition of “AI system” is deliberately expansive: it captures classical ML systems, not just generative models.

The practical implication: most mid-to-large enterprises find themselves in scope for at least some systems, even if they don’t consider themselves “AI companies.” Getting clear on which systems are in scope, and in what role, is the first serious compliance exercise.

Risk Classification Drives Everything

The Act’s core mechanism is risk-based: prohibited practices, high-risk systems, limited-risk systems with transparency obligations, and minimal-risk systems with essentially none. Most enterprise effort is concentrated on two categories.

Prohibited practices — social scoring, certain biometric categorization, manipulative techniques targeting vulnerabilities — are not negotiable. Compliance here is largely an audit: confirm you’re not doing these things, document the review, and move on.

High-risk systems are where the real work lives. If your system is used in employment decisions, credit scoring, critical infrastructure, education, law enforcement, or one of the other domains listed in Annex III, it faces substantive requirements: risk management systems, data governance, technical documentation, record-keeping, transparency to users, human oversight, accuracy and robustness testing, and conformity assessment before market placement. This is a meaningful engineering and operational lift.
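To make the triage concrete, here is a minimal first-pass classifier in Python. It is a sketch, not legal logic: the domain list is a simplified stand-in for Annex III, and real programs route every edge case to legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

# Simplified stand-ins for Annex III domains; the actual legal test is more nuanced.
HIGH_RISK_DOMAINS = {
    "employment", "credit_scoring", "critical_infrastructure",
    "education", "law_enforcement",
}

def triage(domain: str, prohibited_practice: bool, interacts_with_humans: bool) -> RiskTier:
    """First-pass classification for the system inventory; not a legal determination."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED   # e.g., chatbots must disclose they are AI
    return RiskTier.MINIMAL
```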

What Compliance Programs Actually Look Like

The emerging reference pattern for enterprise AI compliance in 2025 has five layers.

A system inventory that captures every AI system in use or in development, with risk classification, data sources, intended use, and responsible owner. Most enterprises discover on first pass that they have far more AI systems than anyone realized — ML models embedded in procured SaaS, internal scoring tools built by business units, recently added GenAI features in vendor products.
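In practice the inventory works best as a structured record per system rather than a spreadsheet tab nobody updates. A minimal sketch, with illustrative field names; nothing here is mandated by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the enterprise AI system inventory (illustrative schema)."""
    system_id: str
    name: str
    owner: str                        # accountable team or individual
    role: str                         # "provider", "deployer", "importer", "distributor"
    risk_tier: str                    # e.g., "prohibited", "high", "limited", "minimal"
    intended_use: str
    data_sources: list[str] = field(default_factory=list)
    vendor: str | None = None         # set for procured or embedded systems
    status: str = "in_development"    # or "deployed", "decommissioned"
```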

A data governance layer documenting training data provenance, quality processes, and bias evaluation. The “where did this data come from, and what’s our basis for using it” question is now a first-class compliance artifact.
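One way to make that question answerable on demand is a provenance record attached to each training dataset. A sketch under the same caveat: the field names are hypothetical, not prescribed.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    """Answers: where did this data come from, and what is our basis for using it?"""
    dataset_id: str
    source: str                   # origin system, vendor, or public corpus
    legal_basis: str              # license, consent, contract, or other basis
    collection_period: str        # e.g., "2023-01 to 2024-06"
    quality_checks: list[str] = field(default_factory=list)    # e.g., deduplication, label audit
    bias_evaluations: list[str] = field(default_factory=list)  # evaluations run, and where results live
```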

A model documentation standard — often structured as model cards or system cards — describing capabilities, limitations, evaluation results, and intended use. The Act requires specific technical documentation for high-risk systems; most mature programs extend this format to all consequential systems.
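A machine-readable version might look like the following sketch. The system, fields, and numbers are all hypothetical, and the Act’s Annex IV documentation requirements for high-risk systems are considerably more detailed.

```python
# Hypothetical model card for an illustrative credit-scoring system.
model_card = {
    "model": "credit-scoring-v3",
    "intended_use": "pre-screening of consumer credit applications",
    "out_of_scope_uses": ["employment decisions", "insurance pricing"],
    "limitations": "not validated for thin-file applicants",
    "evaluation": {
        "auc": 0.81,                      # illustrative numbers only
        "demographic_parity_gap": 0.03,
        "eval_dataset": "holdout-2024Q4",
    },
    "human_oversight": "borderline scores routed to manual review",
}
```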

A post-deployment monitoring capability to detect drift, quality regressions, and emerging harms, with defined thresholds for retraining or decommissioning. This is where MLOps maturity and compliance intersect.
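A common concrete instance is a population stability index (PSI) check on model inputs, with thresholds that trigger investigation or retraining. A minimal sketch: the 0.10/0.25 cutoffs are conventional rules of thumb, not regulatory requirements, and the code assumes a continuous feature.

```python
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and live traffic."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    live_frac = np.histogram(live, edges)[0] / len(live)
    base_frac = np.clip(base_frac, 1e-6, None)     # avoid log(0)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

def drift_action(score: float) -> str:
    """Map a PSI score to an operational response (thresholds are rules of thumb)."""
    if score > 0.25:
        return "escalate: consider retraining or suspension"
    if score > 0.10:
        return "investigate: moderate drift"
    return "ok"
```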

A human oversight and incident response framework, with clear escalation paths for when a system behaves unexpectedly, and processes for meeting the Act’s serious-incident reporting obligations.
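Operationally, this tends to become a severity ladder with reporting clocks attached. The sketch below uses a hypothetical severity ladder and placeholder deadlines; the Act’s actual reporting windows vary by incident type and should be checked against Article 73 and current guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Placeholder windows for illustration only; verify against Article 73 and guidance.
REPORTING_WINDOWS = {
    "serious": timedelta(days=15),
    "widespread": timedelta(days=2),
}

@dataclass
class Incident:
    system_id: str
    severity: str            # "minor", "serious", "widespread" (hypothetical ladder)
    detected_at: datetime

    def report_deadline(self) -> datetime | None:
        """Regulatory reporting deadline, or None if no external report is required."""
        window = REPORTING_WINDOWS.get(self.severity)
        return self.detected_at + window if window else None
```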

The NIST and ISO Convergence

One welcome trend: the major frameworks are converging. The NIST AI Risk Management Framework, the ISO/IEC 42001 management system standard, and the EU AI Act share more structural DNA than their respective audiences often realize. Enterprises building a single, well-designed AI governance program can reasonably claim substantial coverage against all three. The teams struggling are those building separate, parallel programs per jurisdiction.

For multinationals, the practical move is usually to adopt ISO/IEC 42001 as the operational backbone, map EU AI Act-specific requirements onto it, and use the NIST framework for risk-management practices inside the management system.
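An illustrative crosswalk makes the convergence concrete. The mappings below are simplified for illustration and should not be read as an authoritative alignment of the three texts.

```python
# Simplified, illustrative crosswalk; not a legal or certification mapping.
crosswalk = {
    "risk_management":     {"eu_ai_act": "Art. 9",  "iso_42001": "Clauses 6, 8", "nist_ai_rmf": "MAP, MANAGE"},
    "data_governance":     {"eu_ai_act": "Art. 10", "iso_42001": "Clause 8",     "nist_ai_rmf": "MAP, MEASURE"},
    "technical_docs":      {"eu_ai_act": "Art. 11", "iso_42001": "Clause 7",     "nist_ai_rmf": "GOVERN"},
    "record_keeping":      {"eu_ai_act": "Art. 12", "iso_42001": "Clause 7",     "nist_ai_rmf": "GOVERN"},
    "human_oversight":     {"eu_ai_act": "Art. 14", "iso_42001": "Clause 8",     "nist_ai_rmf": "MANAGE"},
    "accuracy_robustness": {"eu_ai_act": "Art. 15", "iso_42001": "Clause 9",     "nist_ai_rmf": "MEASURE"},
}
```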

Vendor and Procurement Implications

The compliance obligations flow down the supply chain. Enterprises that deploy third-party AI systems are responsible for ensuring those systems meet the requirements for their intended use. This has reshaped procurement: AI-specific addenda, model documentation requirements, evaluation evidence, and audit rights are now standard in enterprise AI contracts.

Vendors are responding — the best ones now ship detailed model cards, evaluation reports, and compliance documentation as part of the standard package. The ones that respond to AI due diligence questionnaires with “our model is trained on diverse high-quality data” are losing deals.

What’s Still Unclear

Plenty remains unsettled. The exact scope of the general-purpose AI obligations, the criteria for systemic-risk classification of frontier models, the practical interpretation of “human oversight” for high-volume automated decisions — all are being worked out through guidance documents, enforcement actions, and industry practice through 2025 and 2026. Compliance programs need to be designed for evolution, not frozen against a static reading of the Act.

Conclusion

The EU AI Act era is finally here, and it’s less dramatic and more grinding than the hype suggested. Prohibited practices are a one-time review. High-risk compliance is a sustained operational investment. General-purpose AI obligations are driving new transparency norms across the industry. The enterprises handling this well in 2025 are treating AI governance as a capability to build — inventory, documentation, monitoring, oversight — rather than a document to write. Those treating it as a legal matter to be handled by policy alone are behind, and often don’t yet know it.

References

European Union (2024). Regulation (EU) 2024/1689 (EU AI Act). Official Journal of the European Union.

NIST (2023). AI Risk Management Framework (AI RMF 1.0).

NIST (2024). Generative AI Profile (NIST AI 600-1).

ISO/IEC (2023). ISO/IEC 42001:2023 — Artificial Intelligence Management System.

European Commission (2025). Guidelines on Prohibited AI Practices under the AI Act.

Future of Life Institute (2024). EU AI Act Compliance Checker.