
EU AI Act Phase 1 Is Live: Prohibited AI Practices You Need to Stop Today

Jillian Bommarito

As of February 2, 2025, the first real enforcement clock under the EU AI Act is running. Not the “we should probably get around to this” clock. The real one.

The European Commission has confirmed that the AI Act’s prohibited AI practices and AI literacy obligations now apply, and on February 4, 2025 it published its guidelines on prohibited practices to help stakeholders interpret the new rules. The law itself entered into force on August 1, 2024, but phase one is here now. If your team has been treating Article 5 like a future problem, that strategy has officially aged out.

The core point is simple: some AI uses are no longer merely risky; they are banned. The Commission’s own AI Act overview says the regulation prohibits eight practices, including harmful manipulation, social scoring, certain biometric systems, and real-time remote biometric identification in publicly accessible spaces for law enforcement. See the official summaries from the Commission and EUR-Lex for the underlying text and timeline: AI Act overview, Article 5 on EUR-Lex, and the Commission’s guidelines on prohibited practices.

So what actually has to stop today?

The Eight Practices That Are Now Off Limits

Article 5 is not subtle. It bans AI systems that:

  • use subliminal, purposefully manipulative, or deceptive techniques to materially distort behavior and cause significant harm
  • exploit vulnerabilities tied to age, disability, or a specific social or economic situation
  • perform social scoring
  • make criminal offense risk predictions based solely on profiling or personality traits
  • create or expand facial recognition databases through untargeted scraping of face images from the internet or CCTV
  • infer emotions in workplaces and educational institutions, except for limited medical or safety reasons
  • use biometric categorization to infer sensitive traits like race, political opinions, religion, sex life, or sexual orientation
  • use real-time remote biometric identification in publicly accessible spaces for law enforcement, except under narrow, legally controlled exceptions

That list is worth reading twice because the operational implications are broader than most teams expect.

Take social scoring. The AI Act does not just mean “don’t build a public China-style citizen score.” It also reaches AI systems that evaluate or classify people over time based on behavior or personality characteristics when the result is unfair, unrelated to the original context, or disproportionate to the behavior involved. If your product team is using an algorithm to create a trust score, a risk tier, or a “good user / bad user” label that follows someone across contexts, this is not a gray area. It is a compliance problem.
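
To make the cross-context point concrete, here is a minimal Python sketch of the kind of self-audit a review team might run over its own scoring systems. The ScoreUse fields and the flagging rule are illustrative assumptions, not a legal test; the point is simply that “score generated here, applied over there, with real consequences” is a pattern you can detect in your own inventory.

    from dataclasses import dataclass

    @dataclass
    class ScoreUse:
        score_name: str
        origin_context: str      # where the behavioral data was generated
        usage_context: str       # where the score actually gets applied
        affects_treatment: bool  # does it change how the person is treated?

    def social_scoring_red_flags(uses: list[ScoreUse]) -> list[str]:
        # Flag uses that echo the Article 5(1)(c) pattern: a score built in
        # one context, applied in an unrelated one, with real consequences.
        # Illustrative heuristic only, not a compliance determination.
        return [
            f"{u.score_name}: built in '{u.origin_context}', applied in "
            f"'{u.usage_context}' -- review against Article 5(1)(c)"
            for u in uses
            if u.affects_treatment and u.origin_context != u.usage_context
        ]

    # Example: a payments trust score reused to gate community features
    print(social_scoring_red_flags([
        ScoreUse("trust_score", "payments", "community moderation", True),
    ]))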

Or look at manipulative AI. The law is not saying that persuasion is illegal. Advertising exists. Sales exists. Humans remain irritatingly persuadable. The prohibition kicks in when the system uses subliminal or deceptive methods, or deliberately distorts behavior in a way that causes or is reasonably likely to cause significant harm. That is a much sharper knife. If your model is designed to nudge vulnerable users into decisions they would not otherwise make, the AI Act is not going to applaud the growth funnel.

The biometric provisions are just as serious. Untargeted scraping of facial images to build or expand face databases is out. Full stop. Biometric categorization that infers sensitive traits is out. And real-time remote biometric identification in public spaces for law enforcement is heavily restricted, with limited exceptions and procedural safeguards. If you are in surveillance, security, public sector contracting, or anything adjacent to identity tech, this is the section to read very slowly.

The Exceptions Are Narrow, Not Magical

A lot of teams will try to talk themselves into a loophole. They always do. The law is already ahead of that game.

For example, the prohibition on predicting criminal offense risk does not apply to systems that support a human assessment already based on objective and verifiable facts directly linked to criminal activity. That is not a free pass for magical crime forecasting. It is a narrow exception for assisting a legitimate human decision process.

Likewise, real-time remote biometric identification is not categorically impossible, but the exceptions are tightly confined to situations like locating specific victims of abduction or trafficking, preventing an imminent threat to life or physical safety, or identifying a person suspected of a serious criminal offense. Even then, the system must satisfy strict necessity, proportionality, authorization, and oversight requirements.

In plain English: the law is not saying “never use AI for safety.” It is saying “do not use AI to replace judgment, accountability, and rights protections with a rubber stamp.”

That is a distinction many vendors will discover only after a very expensive meeting.

What Companies Should Do Right Now

If you have anything in production, in pilot, or sitting in a slide deck with a procurement ticket attached, the next step is not philosophy. It is inventory.

Start with a prohibited practices gap analysis (a minimal triage sketch follows the list):

  • identify every AI system, model, and automated decision workflow in scope
  • map each use case against Article 5
  • check data sources, intended use, downstream use, and who actually controls deployment
  • flag anything touching biometrics, employment, education, identity, content moderation, consumer scoring, or surveillance
  • document where a use is clearly out of scope, clearly prohibited, or in a narrow exception bucket
  • stop launch plans for anything that cannot be cleanly defended
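
If it helps to treat that checklist as something executable rather than a slide, here is a minimal triage sketch in Python. The keyword map, system inventory, and bucket labels are invented for illustration; a keyword hit is a prompt for legal review, never a compliance determination.

    # Illustrative Article 5 triage: tag each system with the prohibited-practice
    # categories its description may touch, then bucket it for legal review.
    ARTICLE_5_KEYWORDS = {
        "social_scoring": {"trust score", "risk tier", "user rating"},
        "biometric": {"facial recognition", "face match", "emotion", "biometric"},
        "manipulation": {"subliminal", "dark pattern", "nudge"},
        "predictive_policing": {"crime risk", "offense prediction"},
    }

    def triage(system_name: str, description: str) -> dict:
        desc = description.lower()
        hits = [cat for cat, words in ARTICLE_5_KEYWORDS.items()
                if any(w in desc for w in words)]
        return {
            "system": system_name,
            "possible_article_5_categories": hits,
            "bucket": "needs legal review" if hits else "likely out of scope",
        }

    inventory = [
        ("churn-model", "predicts subscriber churn from usage logs"),
        ("hr-screening", "emotion analysis of candidate interview video"),
    ]
    for name, desc in inventory:
        print(triage(name, desc))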

An EU AI Act readiness assessment earns its keep here. Not because compliance theater is fashionable, but because Article 5 is one of those rules that can turn a promising product into a legal liability with almost no warning. If you need to reconcile product reality with regulatory text, do it before the audit trail becomes the evidence trail.

The other immediate issue is AI literacy. Article 4 now expects providers and deployers to take measures to ensure, to their best extent, a sufficient level of AI literacy among staff working with AI systems on their behalf. That means product, legal, procurement, security, HR, and leadership cannot keep pretending AI governance is someone else’s spreadsheet. It is cross-functional by design.

The Money Part

If the business case for compliance has been waiting on a stronger incentive, here it is: Article 99 sets administrative fines for non-compliance with Article 5 at up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. That is not a typo. That is the regulator saying, in effect, “we tried asking nicely.”
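
For anyone modeling exposure, the Article 99 arithmetic is a one-liner. The thresholds below come from the Act itself; the example turnover is invented.

    def max_article_5_fine(worldwide_annual_turnover_eur: float) -> float:
        # Article 99(3): up to EUR 35 million or 7% of total worldwide
        # annual turnover for the preceding financial year, whichever is higher.
        return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

    # A firm with EUR 2 billion in turnover faces exposure up to EUR 140 million.
    print(f"{max_article_5_fine(2_000_000_000):,.0f}")  # -> 140,000,000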

For most companies, that risk is enough to justify a real review. For larger firms, it should also trigger a conversation about board oversight, vendor management, and whether internal teams are using AI systems they cannot actually explain.

And yes, the Commission’s new guidelines are non-binding. That does not make them optional in any meaningful sense. They are the first official map for a law that now applies, and they tell you how the Commission is likely to read the prohibitions. Ignoring them would be a bold strategy. Not a good one, but certainly bold.

Bottom Line

The first phase of the EU AI Act is not coming. It is here.

If your organization uses AI in ways that touch social scoring, manipulation, biometrics, or predictive criminal risk, this is the week to stop, inventory, and test your assumptions. Waiting for a future enforcement wave is how companies end up explaining to regulators why the model was “just experimental” right up until it shipped.

That conversation tends to go poorly.

For teams that want a structured way to get ahead of this, a focused AI Governance & Compliance review with an EU AI Act readiness and prohibited practices gap analysis is the practical place to start. Because if Article 5 applies to your use case, the time to find out is now, not after the market surveillance authority does.
