
The EU AI Act Is Now in Force: Your Timeline Starts Today

Jillian Bommarito

On August 1, 2024, the EU AI Act officially entered into force. That is not a ceremonial footnote. It is the sound of the clock starting.

For anyone building, buying, deploying, or selling AI into the European market, Regulation (EU) 2024/1689 is no longer a policy debate, a draft, or a “we should keep an eye on this.” It is law. And like most things that become law, it arrives with deadlines, exceptions, and enough moving parts to keep compliance teams, product teams, and outside counsel equally entertained.

The basic message is simple: the Act is in force now, but it does not become fully applicable all at once. The EU chose a phased rollout. That means the easy instinct, which is to say “we have time,” is also the dangerous instinct.

The Clock Starts Now

The AI Act is built around risk. Not all AI is treated the same, because not all AI creates the same harm.

That sounds reasonable until you realize how many companies have spent the last two years duct-taping models into products and calling the result “innovation.” If your system touches hiring, education, credit, biometrics, customer service, surveillance, or any other workflow where people can be harmed, this is not a theoretical exercise.

The law also has reach. It applies to public and private actors inside and outside the EU if they place an AI system or general-purpose AI model on the EU market, put one into service, or use one in the EU. So no, this is not just a Brussels problem. It is a global vendor problem, a product problem, a procurement problem, and yes, a board problem.

And the board should care. A regime with fines up to €35 million or 7% of global annual turnover, whichever is higher, has a funny way of making “we’ll figure it out later” look less like strategy and more like improv.
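
To make “whichever is higher” concrete, here is a back-of-the-envelope sketch in Python. The only figures taken from the Act are the €35 million cap and the 7% rate; the turnover numbers are invented, and the arithmetic simply shows that the percentage overtakes the fixed amount once worldwide turnover passes €500 million.

    # Back-of-the-envelope sketch of the top fine tier: EUR 35 million or 7% of
    # worldwide annual turnover, whichever is higher. Turnover figures below are
    # illustrative, not real companies.

    def max_fine_eur(annual_turnover_eur: float) -> float:
        """Ceiling of the fine for the most serious violations."""
        return max(35_000_000, 0.07 * annual_turnover_eur)

    for turnover in (100e6, 500e6, 10e9):
        print(f"turnover EUR {turnover:>14,.0f} -> max fine EUR {max_fine_eur(turnover):,.0f}")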

The First Deadline Is Not 2026

A lot of people are looking at 2026 and mentally checking out. That is a mistake.

The AI Act’s full application is still down the road, but the first substantive obligations arrive much sooner. The key dates are:

  • 2 February 2025: prohibitions, definitions, and AI literacy provisions become applicable
  • 2 August 2025: governance rules and obligations for general-purpose AI models become applicable
  • 2 August 2026: the main body of the Act becomes fully applicable
  • 2 August 2027: obligations for certain high-risk AI embedded in regulated products become applicable

That first six-month deadline matters most right now. If your organization is close to anything that might fall into a prohibited practice category, you do not have the luxury of waiting for a polished internal memo and a motivational lunch-and-learn.

You need to know now whether your use case is safe, risky, or flat-out banned.
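
For teams that prefer to track the phase-in as data rather than as a memo, here is a minimal sketch; the labels are paraphrases of the milestones listed above, not official terminology.

    from datetime import date

    # Minimal sketch: which phases of Regulation (EU) 2024/1689 already apply
    # on a given date. Labels paraphrase the milestones listed above.
    MILESTONES = [
        (date(2025, 2, 2), "Prohibitions, definitions, and AI literacy"),
        (date(2025, 8, 2), "Governance rules and general-purpose AI model obligations"),
        (date(2026, 8, 2), "Main body of the Act fully applicable"),
        (date(2027, 8, 2), "Certain high-risk obligations tied to regulated products"),
    ]

    def applicable_phases(as_of: date) -> list[str]:
        """Return the milestone labels whose application date has passed."""
        return [label for deadline, label in MILESTONES if as_of >= deadline]

    if __name__ == "__main__":
        today = date.today()
        for label in applicable_phases(today):
            print(f"Already applicable as of {today}: {label}")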

What Counts As a Problem

The Act’s prohibited practices are not vague “be careful” language. They are a list of things the EU decided are too harmful to tolerate.

Among the categories drawing the most attention:

  • Subliminal, manipulative, or deceptive techniques that materially distort behavior and cause harm
  • Exploitation of vulnerabilities based on age, disability, or social or economic situation
  • Social scoring that leads to unfair treatment
  • Biometric categorization that infers sensitive traits such as race, political opinions, or sexual orientation
  • Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
  • Emotion recognition in places like the workplace and educational settings, with limited exceptions
  • Certain uses of AI for criminal offense risk prediction based solely on profiling or personality traits
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement, outside a short list of narrow exceptions

That list should make a few product teams nervous, which is probably healthy.

A company does not need to be doing something cartoonishly evil to get into trouble. Sometimes the issue is simpler and more annoying: a vendor contract that never got reviewed, a use case that evolved faster than governance, a model that started as “internal support” and ended up making decisions about people.

That is how compliance failures happen. Not with a villain monologue. With an unchecked deployment.

What Companies Should Do This Quarter

If you are shipping AI into Europe, the right response is not panic. It is inventory.

Start with the basics:

  • Identify every AI system, model, and embedded tool in use
  • Map each use case to a real business process, not marketing language
  • Determine whether the use case could fall into a prohibited or high-risk category
  • Review vendor contracts for AI-specific obligations, audit rights, incident reporting, and data provenance
  • Check whether your training data, model outputs, and downstream usage create copyright, privacy, or security problems
  • Document who owns each system internally and who is responsible when it misbehaves
  • Train leadership on what the Act means in practice, not just what the acronym stands for

This is where EU AI Act readiness assessments earn their keep. The goal is not to produce a binder that impresses no one and expires in six weeks. The goal is to answer a harder question: where are we exposed, and what would it take to defend the answer if a regulator asked tomorrow?

For a lot of companies, that means combining legal analysis with technical review, privacy mapping, and data governance. In other words, the work is cross-functional because the risk is cross-functional. Shocking, I know.
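
If it helps to make that inventory concrete, here is a minimal sketch of what a single record might capture. Every field name and the risk buckets are illustrative placeholders, not the Act's own taxonomy or anyone's actual template.

    from dataclasses import dataclass, field
    from enum import Enum

    # Minimal sketch of an AI system inventory record. Field names and the
    # RiskBucket values are illustrative placeholders, not the Act's taxonomy.
    class RiskBucket(Enum):
        UNASSESSED = "unassessed"
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        PROHIBITED = "prohibited"

    @dataclass
    class AISystemRecord:
        name: str                        # e.g. "resume screening assistant"
        business_process: str            # the real process it supports, not the pitch
        vendor_or_internal: str          # who supplies and maintains the model
        internal_owner: str              # the person accountable when it misbehaves
        risk_bucket: RiskBucket = RiskBucket.UNASSESSED
        contract_reviewed: bool = False  # AI clauses, audit rights, incident reporting
        data_provenance_checked: bool = False
        notes: list[str] = field(default_factory=list)

    # Usage: build the inventory, then triage anything unassessed or worse.
    inventory = [
        AISystemRecord(
            name="claims triage model",
            business_process="insurance claims intake",
            vendor_or_internal="third-party vendor",
            internal_owner="claims operations lead",
        ),
    ]
    needs_attention = [
        r for r in inventory
        if r.risk_bucket in (RiskBucket.UNASSESSED, RiskBucket.HIGH, RiskBucket.PROHIBITED)
    ]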

Why This Is a Governance Problem

The AI Act is not just a law about models. It is a law about decision-making.

That is why AI governance, privacy, security, procurement, and board oversight all show up in the same conversation. If your company is using AI to summarize customer complaints, screen candidates, triage claims, generate recommendations, or assess risk, then the question is not whether the system is “cool.” The question is whether it is lawful, explainable enough, monitored, and bounded by human accountability.

If you are a PE- or VC-backed company, this also affects diligence. Buyers and investors increasingly want to know whether a target has an AI footprint that is compliant, documented, and commercially durable. A flashy demo is nice. A defensible operating model is better.

And if you are building with general-purpose models from OpenAI, Anthropic, Google, Meta, or another provider, you still need to understand your own obligations. Using a model does not outsource responsibility. It just adds another contract to review and another set of assumptions to validate.

Don’t Wait For February

The easiest mistake now is to treat the AI Act like a 2026 problem. It is not.

The first prohibitions apply in six months, which means the next question for most companies is brutally practical: Can we prove that none of our current use cases cross the line? If the answer is “not yet,” then the answer is “we have work to do.”

That work is rarely glamorous. It looks like inventories, assessments, contracts, training, controls, and documentation. It looks like saying no to a feature that was going to be “fine, probably.” It looks like boring excellence.

Which is usually how serious compliance gets done.

If your organization needs help separating the lawful AI stack from the bad ideas, this is the moment to do it. Otherwise, the market gets to enjoy a very expensive lesson in what happens when the timeline starts and everyone keeps pretending it didn't.

