First, the obvious question: can a White House executive order wipe away fifty different state AI regimes?
No. Not by magic. Not by press release. And definitely not because someone in Washington decided that “patchwork” sounds inconvenient.
On December 11, 2025, President Trump signed “Ensuring a National Policy Framework for Artificial Intelligence,” and the White House framed it as an effort to stop state AI rules from becoming a compliance maze. The accompanying fact sheet says the administration wants to protect AI innovation from “an inconsistent and costly compliance regime” and points to more than 1,000 state AI bills as evidence that the problem is real.
That part is not controversial. The United States is already living in the age of “AI governance by geography.” If you build, buy, or deploy AI in the U.S., you are not facing one rulebook. You are facing a stack of them. And they do not always line up neatly.
What the order actually does
The order does several things, but the centerpiece is the new AI Litigation Task Force.
Within 30 days, the Attorney General must establish a task force whose sole job is to challenge state AI laws that conflict with the policy in the order. The White House says those challenges may rest on the Commerce Clause, existing federal regulations, or other theories the Attorney General thinks are available. In plain English: the administration is teeing up lawsuits, not changing the Constitution.
The order also directs the Commerce Department to identify state laws it considers onerous or inconsistent with federal policy, and it asks agencies to consider whether discretionary grants can be conditioned on states not enacting conflicting AI laws. It even tells the FCC and FTC to explore federal standards that could create preemption pressure down the line.
That is a lot of moving parts. It is also a reminder that the federal government has more than one lever here. Litigation is one. Funding is another. Rulemaking is a third. The order is basically a legal air horn: loud, useful for signaling, and not the same thing as a statute.
Where the legal limits are
Here is the part that matters for anyone who actually has to run a company: an executive order is not a lawmaking device.
It can direct federal agencies. It can set executive branch priorities. It can tell the Department of Justice to litigate. It can tell Commerce to study. It can tell the FCC and FTC to explore standards. But it cannot, by itself, repeal a state law.
That is the whole ballgame.
Preemption comes from the Constitution, from valid federal statutes, or from valid federal regulations issued within delegated authority. If there is no federal law on point, state law generally stays where it is until a court says otherwise. And even when preemption arguments are strong, the law does not disappear because a presidential order says “please and thank you.”
The order itself is careful on this point. It says implementation is subject to applicable law and it does not create enforceable rights. That is the legal equivalent of saying, “We are taking a big swing, but don’t confuse this with a done deal.”
So what are the actual battlegrounds?
The likely theories are the usual ones:
- Dormant Commerce Clause arguments, where the administration says state AI laws burden interstate commerce.
- Conflict preemption arguments, where state requirements supposedly interfere with federal law or federal regulatory choices.
- First Amendment arguments, especially where a state law is said to compel or distort model outputs or disclosures.
Those are real legal theories. They are also litigation theories. Which means they live in briefs, records, injunctions, and appeals, not in executive slogans.
And yes, that means state laws remain in effect pending litigation unless a court enjoins them or they are otherwise displaced by valid federal action. That is the boring answer. It is also the correct one.
Why this matters now
The administration is clearly targeting laws it sees as too restrictive, especially laws like Colorado’s AI consumer protection regime, SB 24-205, which has become the poster child for state-level AI regulation. Whether you think Colorado is being prudent or overbroad, it is one of the laws now sitting in the crosshairs.
That matters because businesses do not get to live inside the political commentary. They have to live inside the actual compliance matrix.
If you are an AI developer, deployer, lender, insurer, healthcare company, retailer, or enterprise customer, the question is not “Who wins the press cycle?” The question is “What rules apply to my product, my customers, my training data, my disclosures, and my contracts?”
For many teams, the answer is still ugly:
- A state privacy law here.
- A sector-specific disclosure rule there.
- Consumer protection exposure in one jurisdiction.
- A deepfake or automated decision-making rule in another.
- And, if you operate in Europe, EU AI Act obligations that are mandatory regardless of what happens in Washington.
Federal preemption chatter does not erase that reality. It just adds another column to the spreadsheet.
What companies should do
The wrong move is to wait for the federal government to “solve” the problem. That is how people end up surprised, and usually not in a fun way.
The practical move is to build a governance stack that can survive both federal whiplash and state scrutiny. That means:
- Maintaining an inventory of AI systems, use cases, and jurisdictions.
- Mapping state consumer protection, privacy, and sector-specific requirements.
- Documenting model behavior, training data sources, and known limitations.
- Reviewing contracts for indemnity, audit rights, disclosure duties, and model-change notice.
- Making sure board members understand what the company actually ships, not what the slide deck implies.
- Aligning AI claims with real-world controls, because “we’re compliant” is not a strategy unless someone can prove it.
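The inventory-and-mapping steps above are, at bottom, a data problem: which systems touch which jurisdictions, and which obligations each jurisdiction triggers. Here is a minimal sketch of that mapping in Python. Everything in it is a hypothetical illustration, not legal advice: the system names, jurisdiction codes, and requirement labels are assumptions, and the real mapping comes from counsel, not a script.

```python
from dataclasses import dataclass, field

# Hypothetical requirement labels per jurisdiction -- placeholders only.
# A real program would populate this from legal review, not hardcode it.
REQUIREMENTS_BY_JURISDICTION = {
    "CO": ["algorithmic-discrimination risk assessment", "deployer notice"],
    "CA": ["privacy (CCPA/CPRA)", "automated decision-making disclosure"],
    "EU": ["EU AI Act risk classification", "GPAI transparency"],
}

@dataclass
class AISystem:
    """One row in the AI inventory: what it is, what it does, where it runs."""
    name: str
    use_case: str
    jurisdictions: list = field(default_factory=list)

def compliance_matrix(systems):
    """Map each system to the deduplicated requirement labels its
    jurisdictions trigger. Unknown jurisdictions contribute nothing,
    which is itself a gap worth flagging in a real program."""
    matrix = {}
    for system in systems:
        reqs = []
        for j in system.jurisdictions:
            reqs.extend(REQUIREMENTS_BY_JURISDICTION.get(j, []))
        matrix[system.name] = sorted(set(reqs))
    return matrix

inventory = [
    AISystem("credit-scoring-model", "lending decisions", ["CO", "CA"]),
    AISystem("support-chatbot", "customer service", ["EU"]),
]

for name, reqs in compliance_matrix(inventory).items():
    print(name, "->", reqs)
```

The point of the sketch is the shape, not the content: once the inventory and the jurisdiction map are explicit data, adding “another column to the spreadsheet” when a new law lands is a one-line change instead of a fire drill.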
This is where AI governance and compliance stops being a slogan and starts being infrastructure. If a team is already doing AI audits, board AI education, training data compliance work, or EU AI Act readiness, it is already doing the right kind of work. It is building for the world that exists, not the one that a federal order hopes to create.
And if you are also dealing with privacy, security, and vendor diligence, the overlap is obvious. State AI laws, FTC risk, privacy obligations, and model governance are now part of the same conversation. The companies that treat them separately are usually the ones that end up rediscovering them together, in litigation.
The bottom line
Trump’s December EO is a serious signal, not a legal shortcut.
It tells us the federal government is willing to fight state AI laws aggressively. It tells us agencies may try to use funding, rulemaking, and enforcement policy as leverage. It tells us the administration wants a national AI framework, and wants it badly.
But it does not mean state law is gone. It does not mean companies can stop tracking state-by-state requirements. And it does not mean federal preemption is settled because the White House says so.
The more useful conclusion is simpler: the AI compliance environment is getting more contested, not less. If your governance program only works when the legal landscape is tidy, it is not a governance program. It is a wish.
And wishes are not a control environment.