On March 2, 2026, the Supreme Court did not decide to make new law. It did something more annoying for people hoping AI would somehow invent a brand-new category of property rights out of thin air: it denied certiorari in Thaler v. Perlmutter, leaving in place the rule that copyright requires human authorship.
That is the whole story, and also not the whole story.
Because once you strip away the legal drama, the Court’s refusal to take the case says something very practical about the way companies should think about AI-generated content. If a machine creates something on its own, and no human meaningfully authors it, then what exactly do you own?
That is not a philosophical question. It is a data strategy question, a valuation question, and in a lot of organizations, a who-signed-off-on-this question.
What SCOTUS Actually Did
The March 2 order list includes Thaler v. Perlmutter, No. 25-449, and certiorari is denied. That means the Supreme Court is not taking up the case, and the D.C. Circuit’s March 18, 2025 decision remains the controlling result.
That D.C. Circuit opinion is the key piece here. The court held that the Creativity Machine, Dr. Stephen Thaler’s generative AI system, cannot be the author of A Recent Entrance to Paradise because the Copyright Act requires the work to be authored in the first instance by a human being.
That is not a minor procedural footnote. It is the law, for now, and probably for a while.
The Copyright Office has said the same thing for years in its registration guidance. The D.C. Circuit just made it painfully clear that this is not some temporary bureaucratic preference that vanishes if you ask politely enough. The machine is not the author. The machine is the machine.
Why This Is Not Really New, But Still Matters
If this sounds obvious, good. It should.
Copyright has always been about human creativity. The statute protects “authors,” not processes. A person can use tools, assistants, software, and now AI. But if the tool is doing the creative work without human authorship, the result is not automatically a copyrightable work.
That distinction matters because a lot of AI discussions blur the line between:
- AI as a tool used by a human
- AI as a collaborator
- AI as an autonomous generator
- AI as an alleged legal person wearing a trench coat
Only the first category is comfortably familiar. The others quickly get weird.
And when legal systems get weird, the answer usually is not “invent a new exception because the demo looked impressive.” The answer is “show your work.”
That is why the case is useful. It forces a very boring, very necessary question: who actually authored the output?
If the answer is a human, fine. If the answer is “the model,” then you are not looking at ordinary copyrighted content. You are looking at output that may be difficult or impossible to treat as a protected asset.
What Thaler Was Testing
Thaler’s argument was, in essence, that his generative AI system created the work and should therefore be recognized as the author. The Copyright Office rejected that registration. The district court agreed. The D.C. Circuit agreed. The Supreme Court declined to get involved.
That leaves a useful lesson for anyone building AI-native products or content pipelines: ownership does not magically appear because something is impressive, expensive, or useful.
If your marketing team says your platform can generate images, copy, audio, or code at scale, the next question is not “cool.” It is:
- Who authored the output?
- What human review occurred?
- What rights attach to the result?
- Can you prove those rights if a customer, investor, or buyer asks?
If your answer is “the model did it,” that is not an ownership strategy. That is a liability with a user interface.
Why This Matters For Companies
This is where the legal rule turns into a business problem.
A company that relies on AI-generated content for product assets, marketing, documentation, training materials, or customer deliverables needs to know whether that content is actually protectable. Otherwise, the company may be building value on something that cannot be owned the way the spreadsheet assumes.
That matters in at least four ways:
- Licensing: if you cannot own the output, you may not be able to license it the way you planned.
- Enforcement: if a competitor copies it, your remedies may be weaker than you thought.
- Valuation: if a buyer thinks it is acquiring proprietary content, the asset may be less valuable than advertised.
- Disclosure: if you told investors or customers the content was original, protected, or exclusive, that statement now needs to survive actual scrutiny.
In other words, this is not just a copyright issue. It is a diligence issue.
For PE, VC, and strategic buyers, this is exactly the kind of thing that belongs in a tech diligence sprint. If a target company says it has a large library of AI-generated assets, ask whether those assets are human-authored, AI-assisted, or fully machine-generated. If there is a real commercial dependency on ownership, that answer affects the risk profile immediately.
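The human-authored / AI-assisted / machine-generated bucketing above can be made concrete even with a trivial inventory pass. A minimal sketch, assuming a hypothetical asset list and field names (nothing here is a standard diligence schema):

```python
# Hypothetical diligence inventory: bucket content assets by how they were
# authored, since ownership risk differs sharply across the three categories.
from collections import Counter

# Assumed example records; in practice these would come from a CMS or asset DB.
assets = [
    {"id": "logo-v3", "authorship": "human"},
    {"id": "blog-draft-17", "authorship": "ai_assisted"},
    {"id": "stock-img-batch-9", "authorship": "machine_generated"},
    {"id": "stock-img-batch-10", "authorship": "machine_generated"},
]

def risk_profile(assets):
    """Count assets per authorship category and flag the ones that may
    lack copyright protection under the human-authorship rule."""
    counts = Counter(a["authorship"] for a in assets)
    flagged = [a["id"] for a in assets if a["authorship"] == "machine_generated"]
    return counts, flagged

counts, flagged = risk_profile(assets)
print(dict(counts))  # category counts for the asset library
print(flagged)       # assets whose protectability needs a closer look
```

The point is not the code; it is that the question "which bucket is each asset in?" has to be answerable at all before anyone can price the risk.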
And if the company is using AI in production, it should already be thinking about AI governance & compliance and copyright-clean AI development. Not because lawyers enjoy extra paperwork. Because a rights problem in the content layer eventually becomes a problem in contracts, product claims, and enterprise trust.
What To Do Instead
The fix is not to ban AI. That would be as useful as banning spreadsheets because some people use them badly.
The fix is to build a workflow that preserves human authorship where you need copyright protection and documents the rest of the process where you need defensibility.
That means:
- keeping records of who created what,
- documenting how much human editing or selection occurred,
- separating machine-generated drafts from final human-authored versions,
- avoiding sloppy claims that output is “exclusive” or “proprietary” when it may not be,
- and making sure legal, product, and marketing are not operating from three different realities.
If the content is important enough to matter commercially, it is important enough to document.
That is also where broader data governance comes in. Ownership of AI-generated content is not just a legal label. It is a chain-of-custody problem. You need to know where the content came from, how it was created, what tools touched it, and what your organization can actually assert about it.
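One way to make that chain of custody concrete is a per-asset provenance record, created at generation time and updated whenever a human edits the output. A minimal sketch, assuming a simple in-house schema; the field names, the tool name, and the SHA-256 fingerprint choice are illustrative, not an established standard:

```python
# Minimal chain-of-custody record for a content asset: what created it,
# who touched it, and a content hash so later copies can be verified.
import hashlib
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    asset_id: str
    tool: str                # model or application that generated the draft
    authorship: str          # "human", "ai_assisted", or "machine_generated"
    content_sha256: str
    human_edits: list = field(default_factory=list)  # log of human changes

    def record_human_edit(self, editor: str, summary: str, new_content: bytes):
        """Log a human edit and re-fingerprint the content after the change."""
        self.human_edits.append(f"{editor}: {summary}")
        self.content_sha256 = hashlib.sha256(new_content).hexdigest()
        if self.authorship == "machine_generated":
            # A human touched it; whether that rises to authorship is a legal
            # judgment, but the record should at least show the change.
            self.authorship = "ai_assisted"

draft = b"machine-generated product description"
rec = ProvenanceRecord(
    asset_id="desc-001",
    tool="internal-gen-model",  # hypothetical tool name
    authorship="machine_generated",
    content_sha256=hashlib.sha256(draft).hexdigest(),
)
rec.record_human_edit("editor@example.com", "rewrote claims section",
                      b"human-edited description")
print(rec.authorship)  # → ai_assisted
```

A record like this does not settle the legal question of authorship; it just ensures that when the question is asked, the answer is documented rather than reconstructed from memory.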
Absent that, you are doing the corporate equivalent of writing “trust me” on a shipping manifest.
The Practical Bottom Line
The Supreme Court’s denial in Thaler v. Perlmutter does not create a flashy new rule. It does something better: it leaves standing the old one that still matters most. If there is no human author, there is no copyright.
That should be uncomfortable for any company trying to treat AI output as a clean asset class without doing the underlying governance work. It should also be clarifying.
The market does not need more magical thinking. It needs better documentation, cleaner rights analysis, and fewer people pretending the model is a legal person because the slide deck ran out of room.
If your organization is using AI in production, this is the time to get serious about authorship, ownership, and provenance. Otherwise, you may discover that the thing you thought you owned was never yours to own in the first place. And that is a very expensive way to learn the difference between a generated file and a protectable asset.