AI Governance

FTC Launches Operation AI Comply: Five Companies Charged with AI Washing

Jillian Bommarito

First, software was eating the world; now, it is apparently AI. And when the AI story gets a little too glossy, the FTC shows up with a very unromantic question: where is the evidence?

On September 24, 2024, the FTC launched Operation AI Comply, a law enforcement sweep aimed at companies that use AI hype to supercharge deceptive or unfair conduct. The agency’s message is not subtle. There is no AI exemption. There is no magic “innovation” shield. And there is certainly no special rule that says you can claim whatever you want as long as you sprinkle the words artificial intelligence over the top like confetti.

The sweep includes five actions against companies that, in one form or another, sold fantasy as functionality. Some promised AI legal services. Some sold AI-powered review generation. Others used AI branding to lure people into online storefront schemes that looked more like a cautionary tale than a business opportunity. If you were looking for a case study in AI washing, this is it.

What the FTC Actually Did

The five actions target:

  • DoNotPay, which claimed to offer “the world’s first robot lawyer” and promised AI that could generate legal documents and replace human legal expertise.
  • Rytr, which sold an AI “writing assistant” that included a “Testimonial & Review” feature capable of generating fake consumer reviews.
  • Ascend Ecom, which allegedly claimed its “cutting edge” AI-powered tools would help consumers earn thousands a month through online storefronts.
  • Empire Holdings Group LLC / Ecommerce Empire Builders, which marketed an “AI-powered Ecommerce Empire” and promised big monthly income from storefronts and training programs.
  • FBA Machine / Passive Scaling, which allegedly falsely promised guaranteed income through AI-powered online storefronts.

That is the common thread: AI claims tied to money, trust, or professional expertise are now enforcement magnets. If the promise is “our model does the work of a lawyer,” “our tool generates trustworthy reviews,” or “our software will make you rich,” the FTC wants substantiation, not vibes.

Why This Matters

The FTC is not saying companies can never use AI in marketing. It is saying that AI branding does not excuse deception.

That distinction matters. A lot.

If your product uses machine learning in some part of the workflow, that does not automatically justify claims like:

  • “Fully autonomous”
  • “Risk-free”
  • “Guaranteed earnings”
  • “As good as a lawyer”
  • “Human-quality reviews”
  • “Cuts labor costs by 70%”

Those are measurable claims. Measurable claims need evidence. If your team cannot point to testing, documentation, or a reasonable basis for the statement, then the marketing copy is doing more than stretching the truth. It is inviting the FTC to pay a visit.

And yes, the FTC is absolutely willing to ask whether the company actually tested the thing it is selling.

In the DoNotPay matter, the agency says the company did not conduct testing to determine whether its chatbot performed at the level of a human lawyer and did not hire or retain attorneys to evaluate the law-related features. In Rytr, the FTC says the service generated reviews filled with material details that had no relation to the user's input, meaning the output was almost certainly false when copied and published. In the storefront cases, the FTC says the earnings claims did not materialize, the evidence was lacking, and consumers were left holding the bag.

That is not “disruption.” That is old-fashioned deception wearing a hoodie and calling itself a startup.

AI Washing Is Not Just a Branding Problem

A lot of companies think AI washing is a reputation issue. It is not. It is a governance issue.

If the marketing team says one thing, the product team knows another, and legal is brought in after the launch video is already live on LinkedIn, you do not have an AI strategy. You have a compliance hangover.

The practical failure pattern is familiar:

  • Sales promises features the product cannot consistently deliver.
  • The product actually relies on human labor, but marketing describes it as autonomous.
  • The company uses “AI” as shorthand for “faster,” “smarter,” or “more advanced” without testing the claim.
  • No one keeps a file showing what the model does, what it does not do, and what evidence supports the public-facing statements.

That last one is the killer. The FTC does not care whether the slide deck looked impressive. It cares whether the claim was true and supported.

What Good Looks Like

This is where AI governance and compliance stop being a nice-to-have and become basic operational hygiene.

If you are putting AI in the market, you need a framework that answers a few boring but important questions:

  • What exactly does the system do?
  • What parts are automated, and what parts require human review?
  • What evidence supports each external claim?
  • What are the known limitations?
  • What happens when the model hallucinates, fails, or drifts?
  • Who signs off on the marketing language before it ships?

That is not bureaucratic theater. That is how you avoid becoming a headline.

At a minimum, companies should be maintaining:

  • Claim substantiation files for all AI-related marketing statements
  • Testing records showing performance under realistic conditions
  • Human oversight documentation where human review is part of the workflow
  • Data and training provenance records where model inputs matter
  • Board-level AI education so directors understand what the company is promising the market
  • Review controls that prevent the legal or compliance team from being the last people to hear about a product launch

And if you are buying an AI vendor, investing in one, or diligencing a target company, you should be asking the same questions. What is the actual AI footprint? What is the substantiation story? What is the risk if the marketing claims do not line up with the technical reality?

That is not just an AI governance question. It is a diligence question, a valuation question, and sometimes an existential question.

The Enforcement Signal Is the Point

The FTC did not pick five random companies by flipping through a hat labeled “startup nonsense.” It chose cases that illustrate a broader point: AI hype is now a consumer protection issue.

That matters because the market has spent the last few years treating AI as a kind of universal solvent. Every pitch deck got better. Every demo got more magical. Every product became “AI-powered” even when the AI part was thin, optional, or nonexistent. The result was predictable: more inflated promises, more consumer harm, and more scrutiny.

The FTC’s answer is equally predictable, if less glamorous: show your work.

If a product really can do what you say, great. Prove it. If it cannot, don’t market it that way. If the capability is conditional, say so. If the human is still doing the hard part, don’t hide that behind a chatbot and a slick landing page.

There is a lesson here for every company selling AI features, every investor backing them, and every board member approving the messaging. The market may reward speed, but regulators reward documentation.

The Bottom Line

Operation AI Comply is a warning shot, not because the FTC hates AI or because innovation is on trial, but because AI washing is just deception with a better label.

If your company is making AI claims, make sure they are substantiated, scoped, and reviewable. If your product touches legal services, earnings claims, consumer reviews, or other high-trust areas, tighten the controls even more. The edge cases are where bad facts turn into bad press, then bad investigations, and finally very expensive conversations with lawyers.

The age of casual AI claims is ending. Good riddance.

The companies that survive this phase will be the ones that treat AI governance as part of product design, not as a cleanup task after marketing has already promised the moon.
