On June 14, the European Parliament did something that a lot of companies had been pretending could wait: it approved its negotiating position on the EU AI Act by a wide margin, 499 votes to 28, with 93 abstentions.
That is not a small procedural footnote. It is a signal. Europe is not merely talking about AI governance; it is writing the rulebook while the rest of the market is still arguing over definitions, product demos, and whether a chatbot counts as “strategy.”
And yes, the AI Act is being described by Parliament as the world’s first comprehensive AI law. That matters, because the first serious law in a category tends to become the reference point for everyone else. Sometimes that reference point is helpful. Sometimes it is a headache. Usually it is both.
What Parliament Just Put On The Table
The Parliament’s position is built around a risk-based framework. That is the core idea, and it is not especially subtle: the riskier the AI use case, the heavier the compliance burden.
At a high level, Parliament draws four buckets:
- Unacceptable risk: banned outright
- High risk: allowed, but tightly controlled
- Transparency risk: disclosures and notices required
- Minimal risk: mostly left alone
That sounds tidy until you realize how many common products live closer to the border than their vendors would like to admit.
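To make those buckets concrete, here is a minimal sketch of how an internal compliance tracker might model the four tiers. This is an illustration, not text from the Act; the tier names and obligation summaries are shorthand of our own.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in Parliament's framework (labels are our shorthand)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # allowed, but tightly controlled
    TRANSPARENCY = "transparency"  # disclosures and notices required
    MINIMAL = "minimal"            # mostly left alone

# Illustrative obligation summaries for internal triage, not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Do not ship; remove the use case entirely.",
    RiskTier.HIGH: "Risk management, documentation, human oversight, logging.",
    RiskTier.TRANSPARENCY: "Tell users they are interacting with AI; label outputs.",
    RiskTier.MINIMAL: "No specific obligations; normal engineering hygiene.",
}

print(OBLIGATIONS[RiskTier.HIGH])
```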
The Parliament’s position would ban AI applications such as:
- social scoring
- certain biometric identification and categorization systems
- manipulative systems aimed at vulnerable users
- real-time remote biometric identification in publicly accessible spaces, with retrospective (“post”) identification allowed only for prosecuting serious crimes, and only with judicial authorization
It also treats a wide range of systems as high risk, including AI used in:
- critical infrastructure
- education and vocational training
- employment and worker management
- access to essential services
- law enforcement
- migration, asylum, and border control
- the administration of justice, including assistance in interpreting and applying the law
And then there is generative AI. No, ChatGPT is not automatically thrown into the highest-risk bucket. But Parliament wants transparency obligations, including disclosure that content was AI-generated and publication of summaries of the copyrighted data used for training. If that sounds familiar, it should. The era of “we trained it on the internet, trust us” is starting to look expensive.
Why U.S. Companies Should Care
If you are a U.S. company and your first instinct is, “We are not based in Europe, so this is not really our problem,” that is a very human reaction. It is also how companies wind up discovering regulatory obligations after the sales team has already promised the moon.
The AI Act matters to U.S. companies for a simple reason: Europe is a market, not a mood. If your product is used in the EU, sold into the EU, embedded by an EU customer, or influences people in Europe, you need to care.
And even if your revenue is nowhere near Europe today, customers, investors, acquirers, and enterprise procurement teams are going to start asking questions that sound suspiciously like compliance questions:
- What AI systems do you use?
- What data trained them?
- Are any of them high risk?
- Do you have human oversight?
- Can you explain outputs to regulators?
- Can you prove the model was tested?
- Do your vendors give you anything other than hand-waving?
That last one is especially important. A lot of AI risk is not created by the model you built. It is created by the model you bought, the API you stitched into production, or the vendor contract you signed after one meeting and three espresso shots.
The Parliament vote also changes the expected value of “we’ll deal with this later.” That strategy was already shaky. Now it is getting worse by the week.
Start With An AI Inventory, Not A Press Release
The right response is not panic. It is inventory.
Before you can decide whether something is high risk, low risk, or just a bad idea wearing a hoodie, you need to know what AI you actually have. That means mapping:
- internal tools
- customer-facing systems
- vendor-provided AI features
- model fine-tuning and retraining workflows
- training data sources
- output uses
- human override points
- jurisdictions where the system is deployed
If that sounds tedious, good. Compliance is often tedious. That is the point. The regulators are not paying you for vibes.
This is also where an EU AI Act readiness assessment earns its keep. A real gap analysis can tell you which systems are likely to trigger high-risk obligations, which products need transparency language, where your documentation is thin, and where your data provenance is about to become a problem. In other words, it helps you find the landmines before your customer, auditor, or deal counsel does.
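If it helps to see the shape of the exercise, here is a minimal sketch of what one inventory record might look like, assuming a simple Python dataclass; every field name is our assumption about what is worth tracking, not a regulatory requirement.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI inventory. Field names are illustrative assumptions."""
    name: str
    owner: str                          # the team accountable for the system
    customer_facing: bool
    vendor_provided: bool               # bought vs. built
    training_data_sources: list[str] = field(default_factory=list)
    output_uses: list[str] = field(default_factory=list)
    human_override_points: list[str] = field(default_factory=list)
    deployed_jurisdictions: list[str] = field(default_factory=list)
    risk_tier: str | None = None        # e.g., "high"; unknown until classified

# Example entry: a vendor chatbot embedded in an EU-facing product.
support_bot = AISystemRecord(
    name="support-chatbot",
    owner="customer-success",
    customer_facing=True,
    vendor_provided=True,
    training_data_sources=["vendor-proprietary (unverified)"],
    output_uses=["customer support responses"],
    human_override_points=["agent review before refunds are issued"],
    deployed_jurisdictions=["US", "EU"],
)
```

The point is not the tooling; a spreadsheet works. The point is that every system gets a row, an owner, and an honest answer for each field.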
The Practical Questions To Ask Now
If I were advising a U.S. company on June 16, 2023, I would start with these questions:
- Do we know where our AI is? If the answer is “somewhere in a product roadmap deck,” you are behind.
- Can we classify each use case by risk? Not every AI feature is high risk, but some absolutely are. Employment screening, credit-like decisioning, biometric use, and public-sector workflows deserve special attention (a rough triage sketch follows this list).
- Can we explain the data? Training data provenance, copyright exposure, and retention practices are no longer side issues.
- Do we have human oversight in practice, not just in policy? A policy that says “a human reviews things” is not the same as an operational control.
- Can we document testing, monitoring, and incident response? If the model goes sideways, what happens? Who knows? Who writes it down?
- Have we trained the board and leadership team? Board AI education is going to matter. Directors do not need to become machine learning engineers, but they do need to understand the difference between a product feature and a governance liability.
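Since the classification question keeps coming up, here is the rough triage sketch promised above: a deliberately crude keyword screen that flags use cases for legal review. The signal list is a placeholder assumption of ours; a match routes a system to counsel, it does not determine the legal classification.

```python
# Crude first-pass triage: flag descriptions that mention domains the
# Parliament's position treats as high risk. A flag means "a lawyer
# should look at this," not "this is legally high risk."
HIGH_RISK_SIGNALS = {
    "employment", "hiring", "worker management",
    "credit", "scoring", "biometric",
    "education", "critical infrastructure",
    "law enforcement", "migration", "border",
}

def needs_high_risk_review(use_case_description: str) -> bool:
    """Return True if the description mentions any high-risk signal term."""
    text = use_case_description.lower()
    return any(signal in text for signal in HIGH_RISK_SIGNALS)

# Example: screen inventory entries for counsel review.
descriptions = [
    "Resume screening model for the hiring pipeline",
    "Autocomplete for internal wiki search",
]
print([d for d in descriptions if needs_high_risk_review(d)])
# -> ['Resume screening model for the hiring pipeline']
```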
This Is Not Just A Legal Problem
That is the part people miss. The AI Act is not only a legal story. It is a product, data, security, and governance story.
If your AI system is high risk, then your technical controls matter. Your documentation matters. Your vendor management matters. Your security posture matters. Your privacy program matters. Your development process matters. Your board reporting matters.
If you are treating AI governance as “let legal handle it,” you are already under-scoping the problem.
The better approach is cross-functional:
- legal and compliance for interpretation
- engineering for implementation
- privacy for data handling
- security for access, logging, and monitoring
- procurement for vendor terms
- leadership for accountability
That is not bureaucracy. That is how you avoid discovering, too late, that your “innovative AI feature” is actually a regulated system with no paper trail.
The Bigger Signal
The European Parliament’s June 14 vote is bigger than Europe. It is a message to the market: AI is moving into the same class as other regulated technologies. You can build fast, but you will increasingly have to build traceably. You can scale aggressively, but you will increasingly have to explain how.
For U.S. companies, the smart move is not to wait for the final version of the law and then scramble. The smart move is to start now:
- inventory your AI
- classify your use cases
- clean up your data trail
- tighten your vendor terms
- test your controls
- educate your leadership
Because once regulators start asking detailed questions, the answer “we thought the model was fine” is not a control.
And no, it is not a strategy either.