AI Governance

Biden's AI Executive Order: The Most Comprehensive Federal AI Action to Date

Jillian Bommarito

President Biden signs Executive Order 14110 on October 30, 2023, and the message is pretty hard to miss: AI is no longer something the federal government is content to admire from a safe distance. It is now a governance problem, a national security problem, a privacy problem, a civil rights problem, and, yes, a compliance problem.

That is not hyperbole. It is literally the structure of the order.

What The Order Actually Does

The official title is Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which is a bureaucratic way of saying, “We have decided to stop pretending the market will self-correct.” The order directs federal agencies to act on AI safety testing, red-teaming, watermarking, privacy, critical infrastructure, biosecurity, and federal procurement. It also tells the government to build the internal machinery needed to regulate AI more seriously.

That matters because this is not just a press release. The order requires agencies to produce guidelines, reports, pilots, and standards on a pretty aggressive schedule. NIST is tasked with developing AI safety and security guidance, including companion resources for generative AI and secure development practices for dual-use foundation models. The Department of Energy is told to build AI evaluation tools and testbeds. Homeland Security gets involved in critical infrastructure, cyber defense, and biosecurity. OMB gets a seat at the table for guidance on how the federal government itself labels and authenticates digital content.

In other words, the federal government is not just saying “AI should be safe.” It is building a process to make people prove it.

Why This Order Stands Out

A lot of AI policy to date has been voluntary, aspirational, or vaguely motivational in a way that makes consultants feel useful and lawyers feel busy. This is different. EO 14110 is a whole-of-government order with teeth.

The order is built around eight guiding principles, and the important ones are not subtle:

  • AI must be safe and secure
  • AI must support responsible innovation and competition
  • AI must not be deployed in ways that harm workers
  • AI policy must reflect equity and civil rights
  • Consumers need real protections
  • Privacy and civil liberties matter
  • The federal government has to manage its own AI use
  • The U.S. wants to lead globally, not just react locally

That is a very broad mandate. It reaches from model development to labor markets to public-sector procurement to content provenance. The administration is saying, in plain English, that AI is not just a product category. It is infrastructure.

And once you think about it that way, a lot of the rest of the order starts to make sense.

The Parts Operators Should Care About

If you build, buy, deploy, or finance AI systems, there are several sections here that should make you sit up straighter.

First, the order defines AI red-teaming as structured testing to find flaws and vulnerabilities. That is not a cute branding exercise. It is a formal expectation that model behavior should be probed before deployment, especially for dual-use foundation models.

Second, the order goes after the frontier-model supply chain directly. Within 90 days, Commerce is directed to require companies developing or intending to develop certain potential dual-use foundation models to provide reports on training activities, model-weight security, and red-team results. Until technical thresholds are finalized, the reporting requirement applies to models trained above 10^26 operations, or models trained primarily on biological sequence data above 10^23 operations. It also applies to large computing clusters above 10^20 operations per second for AI training.

That is not a typo. The federal government is literally drawing lines around compute and asking, “What are you doing with all that horsepower?”

Third, the order tackles synthetic content. Commerce is directed to report on standards and tools for authenticating content, tracking provenance, labeling synthetic content, detecting synthetic content, and stopping AI from generating child sexual abuse material or non-consensual intimate imagery. OMB then gets the job of issuing guidance so federal content can be labeled and authenticated. So yes, watermarking is in the federal vocabulary now, and not as a nice-to-have.

The order even defines watermarking as embedding hard-to-remove information into AI outputs, including photos, videos, audio, or text, to verify authenticity or provenance. That is a clear sign that content trust is moving from theory to policy.

Fourth, the privacy section is serious. Agencies are told to use privacy-enhancing technologies where appropriate, and Commerce is directed to develop guidelines for evaluating differential privacy protections. The order also pushes agencies to think harder about privacy impact assessments and the risks of data extraction, re-identification, and inference. In plain language: if your AI system treats personal data like an all-you-can-eat buffet, this order is not on your side.

Fifth, the civil rights and consumer protection language is unusually direct. The order points to risks in hiring, housing, healthcare, education, financial services, law, and transportation. That is a broad signal to regulators and companies alike: AI does not get a free pass just because it is “innovative.”

What This Means For Companies

In practice, this order will push the market toward a much more disciplined set of controls. If you sell AI into the federal government, or into regulated sectors that care about federal policy signals, this is now part of your operating environment.

At minimum, companies should be asking:

  • Do we have a model inventory?
  • Can we document training runs, model weights, and security controls?
  • Do we have red-team evidence, not just benchmark theater?
  • Can we explain what data we used, where it came from, and whether it creates privacy or copyright risk?
  • Can we label outputs, preserve provenance, and prevent misuse?
  • Have we briefed the board on AI risk in terms the board can actually use?

That is the part where AI Governance & Compliance stops being a slide deck and starts being actual work. You need policy, technical controls, documentation, escalation paths, and board education. If you are building a serious AI program, this is also where readiness work like AI audits, ISO 42001 preparation, and federal AI compliance planning starts to matter. Not because the acronym is fashionable, but because the government has decided the baseline is higher now.

The Bigger Signal

This order is not just about one administration’s view of AI. It is the federal government acknowledging that the default posture of “move fast and hope for the best” is not acceptable when the technology can affect critical infrastructure, cyber defense, biometric data, biosecurity, and civil rights.

That is the real story here. AI is no longer being treated as a novelty with good press. It is being treated as a system that can cause real harm if nobody is watching the levers.

And honestly, that is overdue.

The easy part was building the models. The harder part is proving they are safe enough to use, secure enough to trust, and governed well enough to survive contact with the real world. The federal government has now made clear that it expects answers, not vibes.

Blow it up. Put it in the trash? Not quite. But the era of casual AI deployment is ending, and the compliance burden is getting very real, very quickly.

Related posts

Want to discuss this topic?

We'll give you a straight answer — not a sales pitch.