As of August 2, 2025, the EU AI Act has moved from theory to operations for general-purpose AI model providers. The European Commission’s guidelines and Q&A make the point plainly enough: if you place a GPAI model on the EU market, the compliance clock is no longer hypothetical.
And yes, that matters whether you are a frontier lab, a foundation-model startup, or an enterprise that fine-tunes open models and then packages them into a product. The old strategy of “let’s see how this shakes out” is now less strategy and more wishful thinking with a compliance memo attached.
What changed on August 2?
The key shift is that Chapter V of the AI Act is now in application for GPAI model providers. In plain English, the obligations are live. The Commission says providers must now document the model, support downstream users, publish a training-content summary, and put a copyright compliance policy in place.
That is not just regulatory theater. It means model providers need to know what they trained on, how they trained it, what the model can and cannot do, and what information they can hand to downstream AI system providers without playing hide-the-ball.
For companies that have been treating training data provenance as an optional internal curiosity, this is the part where the music stops.
The baseline obligations
The Commission’s guidance lays out the baseline obligations for GPAI providers:
- Technical documentation must be drawn up and kept current, including the training and testing process and evaluation results.
- Downstream documentation must be provided to AI system builders so they can understand the model’s capabilities and limitations.
- Copyright compliance policies must be in place, including measures to respect rights reservations.
- A public training-content summary must be published using the Commission’s template.
- An EU representative is required if the provider is established outside the Union.
That last point is easy to miss, and easy to regret missing. If you are selling into Europe, “we’re based in San Francisco” is not a compliance strategy.
The public summary is especially important because it is where a lot of teams will discover that “we trained on a lot of stuff” is not the same as “we can satisfy Article 53.” The Commission’s template is mandatory, and where information is unavailable or disproportionate to retrieve, providers must say so and explain the gap. That is not a loophole; it is a paper trail.
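To make that concrete, here is a minimal sketch, in Python, of the internal check a team might run before publishing: every entry either describes the source or documents why the information is unavailable. The `SummaryEntry` record and its field names are hypothetical working names for illustration, not the Commission’s template, which remains the authoritative format.

```python
from dataclasses import dataclass

# Hypothetical internal record for one data source in the public
# training-content summary. Field names are illustrative only; the
# Commission's template is the authoritative format.
@dataclass
class SummaryEntry:
    source: str                             # e.g. "licensed news archive"
    description: str | None                 # what was used and how, if known
    unavailable_reason: str | None = None   # required when description is None

def validate(entries: list[SummaryEntry]) -> list[str]:
    """Flag entries that neither describe the source nor explain the gap."""
    problems = []
    for e in entries:
        if e.description is None and not e.unavailable_reason:
            problems.append(
                f"{e.source}: no description and no documented reason why the "
                "information is unavailable or disproportionate to retrieve"
            )
    return problems

if __name__ == "__main__":
    entries = [
        SummaryEntry("Licensed news archive", "Full-text articles, 2010-2023"),
        SummaryEntry("Legacy scrape, 2019", None),  # gap with no explanation
    ]
    for problem in validate(entries):
        print("MISSING:", problem)
```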
Systemic risk models get the heavier lift
The AI Act treats some models as more consequential than others. Cumulative training compute is the headline proxy: under the Act, a model trained with more than 10^25 floating-point operations (FLOP) is presumed to pose systemic risk. The Commission treats these as the most advanced or impactful models on the market.
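For teams wondering where they land, a back-of-the-envelope estimate is possible using the common heuristic that dense-transformer training compute is roughly 6 × parameters × tokens. That rule of thumb is an engineering convention, not anything the Act prescribes, but it gives a first-pass answer:

```python
# Back-of-the-envelope check against the 10^25 FLOP presumption threshold,
# using the common rule of thumb: training FLOP ~= 6 * parameters * tokens.
# This is an engineering heuristic, not a method the AI Act prescribes.
THRESHOLD_FLOP = 1e25

def training_flop(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, params, tokens in [
    ("70B model, 15T tokens", 70e9, 15e12),
    ("400B model, 15T tokens", 400e9, 15e12),
]:
    flop = training_flop(params, tokens)
    status = "presumed systemic risk" if flop > THRESHOLD_FLOP else "below threshold"
    print(f"{name}: {flop:.2e} FLOP -> {status}")
```

On these (hypothetical) numbers, the 70B run lands around 6.3e24 FLOP, below the threshold, while the 400B run crosses it at roughly 3.6e25.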
For those models, the obligations get stricter. The provider must go beyond documentation and copyright policies and also:
- perform model evaluations, including adversarial testing,
- assess and mitigate systemic risks,
- track, document, and report serious incidents,
- maintain adequate cybersecurity protections.
That is the right instinct from a policy standpoint. If a model has broad reach and can be deployed across the economy, then “we think it’s probably fine” is not a satisfactory risk framework. It is, at best, a punchline.
Why this matters beyond legal compliance
The compliance layer here is only half the story. The other half is commercial.
If you are a provider, your documentation package now affects:
- procurement,
- product liability conversations,
- customer diligence,
- insurance discussions,
- investor diligence,
- and, in some cases, valuation.
We have been in enough diligence processes to know the pattern: once a framework becomes enforceable, it stops being “legal overhead” and starts becoming a representation problem. Buyers want to know what the model was trained on, where the data came from, whether the copyright policy exists, whether downstream users are getting accurate model information, and whether the system has been evaluated with something more serious than optimism.
That is where AI governance and technology diligence start to overlap. A GPAI compliance assessment is not a legal-only exercise. It is part policy review, part data inventory, part engineering audit, and part product reality check. If you are preparing board materials, acquisition materials, or investor disclosures, the same work often supports broader AI governance, privacy, and security reviews as well.
Open-source is not a magic word
There is an exemption in the Act for certain free and open-source models, but it is not a universal escape hatch. Even qualifying models must still publish the training-content summary and maintain a copyright policy, and the Commission’s guidance is clear that the exemption does not apply to models with systemic risk.
So no, slapping an open license on a model and publishing a GitHub repo does not automatically make the problem disappear. Regulators have seen that trick before. They are not impressed.
And if your team is modifying an existing GPAI model, the analysis gets even more interesting. In some cases, downstream fine-tuning can turn the modifier into a new provider. Translation: “we just tuned it a bit” may be true technically and irrelevant legally.
What companies should do now
The fastest path to trouble is pretending this is a one-person legal project. It is not.
The teams that will do well here will pull together legal, privacy, security, model engineering, and data governance into a single workflow and produce the artifacts the AI Office can actually evaluate. That means:
- mapping training data sources (a sketch of this record follows the list),
- classifying third-party and reserved content,
- preparing the public training summary,
- documenting development and testing,
- lining up downstream disclosures,
- and pressure-testing incident response and cybersecurity for higher-risk models.
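On the data-mapping point, even a minimal provenance record beats reconstructing provenance later. Here is a sketch, with categories and field names that are our own working assumptions rather than terms defined by the Act or the Commission’s guidance:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical provenance record for the data-mapping step above.
# Categories and field names are working assumptions, not terms
# defined by the AI Act or the Commission's guidance.
class RightsStatus(Enum):
    OWNED = "owned"                # first-party or licensed-in data
    OPEN_LICENSE = "open_license"  # permissive or public-domain sources
    RESERVED = "reserved"          # rights reservation expressed (e.g. TDM opt-out)
    UNKNOWN = "unknown"            # needs review before the summary ships

@dataclass
class DataSource:
    name: str
    acquisition: str               # e.g. "crawl", "license", "user-submitted"
    rights: RightsStatus = RightsStatus.UNKNOWN
    notes: list[str] = field(default_factory=list)

def needs_review(sources: list[DataSource]) -> list[DataSource]:
    """Surface sources that block the public summary or copyright policy."""
    return [s for s in sources
            if s.rights in (RightsStatus.UNKNOWN, RightsStatus.RESERVED)]
```

A spreadsheet version of the same record works too; the point is that rights status gets captured at ingestion rather than reverse-engineered during diligence.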
That is also where a focused AI governance & compliance review earns its keep. In practice, the work usually includes GPAI compliance assessments, training data summary preparation, and EU AI Act readiness analysis. If your model stack also touches privacy or security obligations, the same exercise can surface issues that would otherwise show up later, usually at a more expensive moment.
The broader signal
The message from Brussels is not subtle: general-purpose model governance is no longer a future conversation. It is here, it is scoped, and it is starting with documentation, transparency, and responsibility for what gets built on top of these models.
That is the right place to start. The market has spent years talking about model capability. The EU AI Act is now insisting on model accountability.
And frankly, that is overdue.
Official sources
- Commission FAQ on GPAI obligations
- Commission guidelines on GPAI providers
- Commission template for training-content summaries