AI Governance

EU AI Act Formally Adopted: The Countdown to Compliance Begins

Jillian Bommarito

By 523 votes to 46, with 49 abstentions, the European Parliament has formally adopted the EU AI Act. That sounds like legislative housekeeping. It is not. It is the moment the EU stops debating whether AI should be governed and starts counting down to enforcement.

If you build, buy, deploy, or embed AI in products sold into Europe, this is your signal. The age of “we have a policy, therefore we are fine” is over. A PDF on a website is not a strategy. A one-slide ethics statement is not a control environment. And a vague promise that “the model will be monitored” is not going to impress anyone who has to read the Act, much less enforce it.

What Just Happened?

The AI Act is a risk-based regulatory framework. That matters, because the law does not treat every chatbot, classifier, or foundation model the same way. The EU is drawing lines between:

  • Unacceptable-risk uses, which are banned
  • High-risk systems, which are permitted but heavily regulated
  • Limited-risk systems, which face transparency obligations
  • Minimal-risk systems, which largely stay out of the crosshairs

That is the right conceptual model. Regulation should not treat a spam filter like a hiring model, and it should not treat a toy image generator like a biometric surveillance system. The EU has finally said the quiet part out loud: some AI uses are simply too dangerous to be left to “trust us, we’re being careful.”
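
If the taxonomy feels abstract, it maps directly onto an internal inventory. Here is a minimal sketch in Python; the tier names track the Act, but the example systems and their assignments are illustrative assumptions, not legal classifications.

    from enum import Enum

    class RiskTier(Enum):
        """The EU AI Act's four risk tiers, roughly ordered by obligation."""
        UNACCEPTABLE = "banned outright"
        HIGH = "permitted but heavily regulated"
        LIMITED = "transparency obligations"
        MINIMAL = "largely out of scope"

    # Illustrative assignments only -- real classification is a legal
    # analysis of the deployment context, not a dictionary lookup.
    example_inventory = {
        "resume-screening-model": RiskTier.HIGH,
        "customer-support-chatbot": RiskTier.LIMITED,
        "spam-filter": RiskTier.MINIMAL,
    }

The point of the enum is not the enum. It is that every system in the building ends up with exactly one tier and someone who can defend the assignment.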

The headline here is not just that Parliament approved the law. It is that the law now has a real compliance clock attached to it. The rules do not sit in a theoretical drawer. Once the Act enters into force after publication, the deadlines start stacking up. Six months for banned practices. Nine months for codes of practice. Twelve months for general-purpose AI rules and governance. Twenty-four months for most everything else. Thirty-six months for high-risk systems embedded in already-regulated products.

That is not an abstract timeline. That is a project plan.
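
To see it as dates rather than durations, here is a rough sketch that projects those milestones onto a calendar. The entry-into-force date below is a placeholder assumption; the real clock starts only after publication in the Official Journal, which has not happened yet.

    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Add calendar months to a date (day clamped for simplicity)."""
        y, m = divmod(d.month - 1 + months, 12)
        return date(d.year + y, m + 1, min(d.day, 28))

    # Placeholder assumption, for illustration only.
    entry_into_force = date(2024, 8, 1)

    milestones = {
        6: "Banned practices",
        9: "Codes of practice",
        12: "General-purpose AI rules and governance",
        24: "Most remaining obligations",
        36: "High-risk systems embedded in regulated products",
    }

    for months, obligation in milestones.items():
        print(f"{add_months(entry_into_force, months)}  {obligation}")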

The Part Everyone Should Actually Read

The most obvious mistake companies make is assuming the EU AI Act is only about “AI companies.” It is not. If AI is in your product, your internal workflow, your vendor stack, or your customer-facing service, you have exposure.

Need a quick test? Ask three questions:

  • Does the system make or influence decisions about people?
  • Does it touch hiring, credit, education, health, access to services, law enforcement, or other sensitive domains?
  • Does it use third-party models, training data, or vendor tools that you do not fully control?

If the answer to any of those is yes, you are no longer in the land of casual experimentation. You are in the land of documentation, risk classification, governance, and evidence.
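
For what it is worth, that triage fits in a few lines of code. A minimal sketch, with made-up field names; a "yes" here flags a system for proper legal review, it does not classify anything.

    def needs_ai_act_review(system: dict) -> bool:
        """First-pass triage, not a legal determination: does this
        system warrant a real EU AI Act risk classification?"""
        return any([
            system.get("decides_about_people", False),
            system.get("sensitive_domain", False),       # hiring, credit, health, ...
            system.get("third_party_components", False), # external models, data, tools
        ])

    # Example: a chatbot built on a vendor's model still gets flagged,
    # because the third-party dependency alone is enough.
    chatbot = {
        "decides_about_people": False,
        "sensitive_domain": False,
        "third_party_components": True,
    }
    assert needs_ai_act_review(chatbot)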

And yes, general-purpose AI is now part of the regime. That means upstream model providers are not magically exempt because they are “just infrastructure.” The fact that a model is flexible does not mean it is invisible. OpenAI, Anthropic, Google, Mistral, Meta, and the rest of the ecosystem all have downstream customers who will need answers. The model may be general-purpose; the liability is not.

Why the First Deadlines Matter More Than the Last Ones

The temptation, of course, is to look at the biggest deadline and think, “We have time.” That is how companies end up sprinting at the last minute while legal, product, data science, procurement, and security argue over whose job it was to inventory the model that nobody remembered existed.

The early deadlines matter because they force the hard work first.

The banned-use provisions will force teams to identify systems that are simply off-limits. The general-purpose AI rules will force providers and deployers to document capabilities, limitations, and responsibilities. The high-risk provisions will force actual governance: data quality, human oversight, technical documentation, logging, testing, monitoring, and accountability.

In other words, the Act punishes ambiguity. Which is fair, because ambiguity is where bad compliance goes to breed.

And the penalties are not decorative. The Commission’s summary of the Act lays out fines that can reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. That changes the expected value of “we’ll fix it later.” Suddenly, later is expensive. Very expensive.
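
The arithmetic is worth doing once, because the cap is the higher of the two figures. Past roughly €500 million in global turnover, the percentage binds, not the flat €35 million. A quick sketch:

    def max_fine_eur(global_turnover_eur: float) -> float:
        """Ceiling for the most serious violations: the higher of
        EUR 35 million or 7% of global annual turnover."""
        return max(35_000_000, 0.07 * global_turnover_eur)

    # A company with EUR 2 billion in turnover faces up to EUR 140 million,
    # four times the flat cap.
    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000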

What Companies Need to Do Now

The right response is not panic. It is structure.

Start with an AI Act readiness assessment. Inventory the systems, map the use cases, classify the risk tier, and identify which obligations attach to which product, business unit, or vendor relationship. If your organization does not know where AI is being used, that is not a governance problem. It is a discovery problem. Fix the discovery problem first.

Then build a compliance roadmap that ties the legal requirements to owners, controls, evidence, and dates. If nobody is assigned to document training data provenance, monitor outputs, or review vendor commitments, then the roadmap is fake. It is a decorative document. Regulators are not generally moved by decorative documents.

A useful roadmap usually includes:

  • System inventory and use-case mapping
  • Risk classification by product and deployment context
  • Vendor and model diligence for third-party AI tools
  • Training data review and provenance analysis
  • Policy updates that reflect actual engineering practice
  • Board education so oversight is real, not ceremonial
  • Testing and logging requirements built into product and security workflows

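One way to keep that roadmap from turning decorative is to give every system a record with an owner, a tier, and evidence attached. A minimal sketch of such a record; every field name and value here is illustrative, not a schema the Act prescribes.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One row in an AI Act readiness inventory (illustrative schema)."""
        name: str
        use_case: str
        risk_tier: str                  # e.g., "high", "limited", "minimal"
        owner: str                      # an accountable person, not a team alias
        vendor_models: list[str] = field(default_factory=list)
        evidence: list[str] = field(default_factory=list)  # docs, test logs, audits
        review_deadline: str = ""       # tied to the Act's compliance dates

    inventory = [
        AISystemRecord(
            name="resume-screener",
            use_case="candidate ranking",
            risk_tier="high",
            owner="head-of-talent",
            vendor_models=["third-party-llm"],
            evidence=["training-data-provenance.md", "bias-test-results.csv"],
            review_deadline="2026-08-01",
        ),
    ]

If a record like this cannot be filled in, that is the finding. The empty field is the gap.
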
This is also where teams discover that their privacy, security, and AI governance programs need to talk to each other. GDPR is not AI Act compliance. SOC 2 is not AI Act compliance. ISO 27001 is not AI Act compliance. They help. They are not enough. The overlap is useful, but the regimes are not interchangeable.

For many organizations, this is the point where a focused AI governance and compliance sprint pays for itself. Not because compliance is glamorous, but because the alternative is learning your gaps from a regulator, a customer due diligence questionnaire, or a very expensive contract dispute.

The Real Strategic Point

There is a broader lesson here. The EU is not just regulating AI. It is setting the operating assumptions for the market.

That means procurement teams will start asking for AI assurances. Sales teams will need better answers on model governance. Boards will need to understand where the company is exposed. Investors will ask whether the product can survive an audit, not just whether it can survive a demo.

This is familiar territory for anyone who has lived through privacy, cybersecurity, or software supply-chain compliance. The companies that get ahead are not the ones with the loudest AI slogan. They are the ones that can explain what the system does, what data it uses, what risks it creates, and who is responsible when something goes wrong.

That is boring in the best possible way.

Because boring compliance beats dramatic cleanup. Every time.

The EU AI Act is not the end of AI innovation. It is the end of pretending that innovation and accountability are mutually exclusive. For companies that want to keep shipping into Europe, the message is simple: inventory the systems, assign the owners, document the risk, and get the roadmap moving now.
