President Trump rescinded Executive Order 14110 on day one, and the headlines are already doing what headlines do best: making everyone either euphoric or apoplectic before the ink is dry. The order at issue, Biden’s October 30, 2023 AI executive order, was the federal government’s most visible attempt to shape AI development through a mix of safety, reporting, testing, and agency coordination. On January 20, 2025, Trump’s Initial Rescissions of Harmful Executive Orders and Actions explicitly revoked Executive Order 14110.
So what does that actually mean?
Not, contrary to what some LinkedIn posts would have you believe, “AI is now a free-for-all.” It means the federal policy posture is shifting from oversight to deregulation. That is a real change. It is also not the same thing as legal immunity, operational safety, or a sane risk program.
The order is gone. The risk isn’t.
Executive orders are powerful, but they are not magic. They can direct agencies, set priorities, and create momentum. They can also be undone with a signature, which is exactly what happened here.
What disappears with EO 14110 is the Biden administration’s federal AI agenda as an executive directive. What does not disappear are the obligations that already exist under other laws and regimes:
- privacy laws
- consumer protection rules
- employment and civil rights laws
- sector-specific requirements
- contract obligations
- intellectual property and data licensing restrictions
- security expectations, including basic controls around access, logging, and vendor management
In other words, your compliance program does not get to shrug and go home because the White House changed its mind.
If anything, this is the moment to ask a more useful question: what part of your AI governance program was actually dependent on the executive order, and what part was always just good risk management?
If the answer is “a lot more than we’d like,” that is information worth having.
Federal deregulation is not the same as no regulation
The political signal here is obvious. The new administration wants to reduce friction around AI development and reduce the compliance burden on developers and deployers. That will matter in federal procurement, agency guidance, and the overall tone from Washington.
But compliance programs do not live in a vacuum, and they certainly do not live only in Washington.
The EU AI Act is still there. The obligations don’t vanish because the United States decides to stop worrying out loud. If you build, deploy, or sell into Europe, your risk classification, documentation, data governance, human oversight, and vendor management still matter. A model that is “fine” in a deregulated federal environment can still be a problem in Brussels.
State law is also moving. Fast. The current federal moment may encourage some companies to slow-walk governance, but that would be a mistake. States are not waiting around for a philosophical conversion in the West Wing, and they certainly are not going to forgive you for a weak internal control environment because the national mood changed.
And then there is ISO 42001. That standard is not a political document. It is an operating system for AI management. It is becoming a common reference point for buyers, auditors, and boards because it answers a practical question: can you show that AI is being governed, not merely admired?
That question survives every administration.
What this means for boards and executives
For leadership teams, the immediate temptation is to translate this into budget language: “Can we cut the AI governance workstream now?”
Maybe. But only if the workstream was performative in the first place.
If your AI program is built on slide decks, optimistic vendor assurances, and a vague sense that “the model provider handles that,” then yes, you probably have an efficiency opportunity. It is called a control gap.
The better approach is to identify which controls are truly necessary for your business and which ones were just compliance theater. You do not want to over-engineer. You also do not want to discover, after a regulator, customer, or plaintiff asks questions, that nobody knows:
- what models are in production
- which data sets trained them
- whether the data rights are clean
- who approved their use cases
- how outputs are monitored
- whether humans can override bad decisions
- what the vendor contract actually says about indemnity, incident response, and model changes
That last one is often where the music stops.
A focused AI governance review pays for itself here. Not by inventing new paperwork, but by separating real risk from ceremonial risk. A proper AI audit, an AI footprint assessment, or an EU AI Act readiness review will tell you where the bodies are buried before someone else finds them.
What to do now
If you are responsible for AI governance, privacy, security, procurement, or board oversight, here is the short version:
- Inventory your AI use cases. Know where AI is actually being used, not where someone said it might be used in a committee meeting.
- Map the data. Training data, fine-tuning data, prompts, logs, outputs, and retention all matter. If you cannot explain the data lineage, you do not have governance.
- Review contracts. Vendor promises are not controls. Make sure your agreements address data rights, confidentiality, security, model changes, audit rights, and liability.
- Classify the risk. Some systems are low stakes. Some are not. Hiring, lending, healthcare, insurance, and customer-facing decision tools deserve extra scrutiny.
- Brief the board. The board does not need a 90-slide deck. It needs a clear answer to one question: what is our exposure, and what are we doing about it?
- Align to a framework. Whether that is ISO 42001, NIST AI RMF, or a hybrid internal standard, use something more durable than vibes.
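To make the inventory and risk-classification steps concrete, here is a minimal sketch of what an AI system inventory with crude risk triage might look like. The field names, risk tiers, and "high stakes" domain list are illustrative assumptions for this example, not a legal or ISO 42001 standard; a real program would tie these to your own framework.

```python
from dataclasses import dataclass

# Domains the article flags as deserving extra scrutiny (illustrative list).
HIGH_STAKES_DOMAINS = {"hiring", "lending", "healthcare", "insurance"}

@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str              # e.g. "hiring", "support-chat"
    training_data: list[str]   # known data sources, for lineage questions
    data_rights_verified: bool # are the data rights clean?
    human_override: bool       # can a person reverse a bad output?
    approved_by: str           # who signed off on the use case

    def risk_tier(self) -> str:
        """Crude triage: high-stakes domains or missing controls get escalated."""
        if self.use_case in HIGH_STAKES_DOMAINS:
            return "high"
        if not (self.data_rights_verified and self.human_override):
            return "elevated"
        return "standard"

# Hypothetical entries showing the kind of record each production system needs.
inventory = [
    AISystem("resume-screener", "VendorX", "hiring",
             ["public resumes", "internal ATS"], False, True, "HR lead"),
    AISystem("support-bot", "VendorY", "support-chat",
             ["product docs"], True, True, "CX lead"),
]

for system in inventory:
    print(system.name, system.risk_tier())
# resume-screener is "high" (hiring domain); support-bot is "standard"
```

Even a toy structure like this answers the questions in the list above: what is in production, what trained it, whether rights are clean, who approved it, and whether a human can override it. The value is not the code; it is that every field must be filled in by a named owner.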
Now is also the right time to connect AI governance with the rest of your control stack. Privacy, security, and compliance are not separate kingdoms. If your data handling is messy, your AI program will be messy. If your security program is weak, your AI program will leak. If your governance is informal, your audit trail will look like a kitchen drawer after a long weekend.
That is where firms like licens.io tend to get involved: AI governance and compliance assessments, board education, training data compliance, privacy and security readiness, and compliance-by-design development that does not treat controls as an afterthought. Because “we will clean it up later” is not a strategy. It is a confession.
The bottom line
Trump’s rescission of EO 14110 is a meaningful policy shift. It signals that the federal government is stepping back from the Biden-era oversight model and leaning toward deregulation.
But if your response is to relax your AI governance program, you are reading the wrong memo.
The practical reality is simpler: the federal temperature changed, but the underlying risk did not. EU obligations remain. State activity continues. Customer expectations are rising. Board scrutiny is increasing. And the easiest way to get caught with your hands off the wheel is to assume that a federal rescission means the machine can now govern itself.
It cannot.
If you are building with AI, buying AI, or betting your company on AI, the right response is not panic and it is not complacency. It is disciplined governance, clean contracts, good documentation, and controls that will still make sense when the political weather changes again.
Related posts
Federal Preemption of State AI Laws: Trump's December EO and Its Legal Limits
Trump’s December 11 AI order launches a federal challenge to state AI laws, but its legal reach is narrower than the rhetoric suggests.
EU AI Act Phase 2: GPAI Provider Obligations Are Now Enforceable
As of August 2, 2025, general-purpose AI model providers are no longer waiting on guidance: the EU AI Act’s GPAI obligations are live.
EU AI Act Phase 1 Is Live: Prohibited AI Practices You Need to Stop Today
The EU AI Act’s Article 5 bans are now live, and teams need to stop any prohibited AI practice before regulators do.