The Quiet Part Is Loud
One of the reasons supply chain attacks are so effective is that they hide inside ordinary work. Build the thing. Test the thing. Deploy the thing. Check which files changed. Move on.
That last part sounds boring, which is precisely why it matters.
The tj-actions/changed-files GitHub Action, a popular utility for identifying files changed in a pull request or commit, was compromised in a March 2025 supply chain attack, tracked as CVE-2025-30066. In plain English: a trusted CI/CD helper got turned into a place where secrets could leak. Not because developers were reckless, but because the trust boundary was softer than anyone wanted to admit.
If you use third-party GitHub Actions, you are not just consuming code. You are inheriting someone else’s release process, access controls, tagging discipline, and incident response. That is a lot of faith to place in a YAML line.
What Happened
The tj-actions/changed-files action is designed to do one practical job: tell you what changed. That makes it useful in monorepos, gated deployments, test selection, release automation, and all the other places where teams try to avoid running expensive jobs when nothing relevant moved.
According to the NVD entry, versions through 45.0.7 were affected after tags v1 through v45.0.7 were modified to point at a malicious commit, 0e58ed8. The malicious code included updateFeatures logic that could expose CI/CD secrets by printing them into workflow run logs. That is the sort of sentence that should make every platform engineer sit up a little straighter.
GitHub’s own release notes for the action tell the same story in less academic language: a critical security issue was identified, the compromised commit was removed from tags and branches, and users were told to review workflows executed between March 14 and March 15. If unexpected output appeared under the changed-files section, the guidance was to decode it twice (the payload base64-encoded what it captured two times over) and treat any secrets recovered that way as compromised.
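The "decode it twice" guidance is mechanical enough to script. A minimal sketch in Python; the encoded value here is a made-up stand-in, not the real payload:

```python
import base64

def double_decode(blob: str) -> str:
    """Reverse a double base64 encoding, as described in the remediation guidance."""
    return base64.b64decode(base64.b64decode(blob)).decode()

# Stand-in for a suspicious string found under the changed-files output.
# Any real secret recovered this way should be treated as compromised and rotated.
suspicious = base64.b64encode(base64.b64encode(b"AKIA-example-token")).decode()
print(double_decode(suspicious))  # -> AKIA-example-token
```

If the decoded result looks like a token, a key, or anything else from your secrets store, rotation is not optional.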
That is not a routine bug fix. That is a fire drill.
And yes, the usual instinct is to ask, “How bad can a file-diff helper really be?” Bad enough, apparently, if it runs inside a privileged workflow with access to repository secrets, tokens, or deployment credentials. In CI/CD, the boring job is often the one sitting closest to the crown jewels.
Why This Matters
This incident is not really about one action. It is about the shape of modern software delivery.
A lot of teams still think of CI/CD as build automation. That is too small a view. CI/CD is a decision engine with credentials. It decides what code gets tested, what gets packaged, what gets deployed, and what infrastructure is allowed to talk to what else. If a third-party action in that chain is compromised, the blast radius is not theoretical.
The uncomfortable truth is that many pipelines are built on trust patterns that were convenient, not resilient:
- Mutable tags are treated as if they were immutable releases.
- Third-party actions are granted more permissions than they need.
- Secrets are available to jobs that do not strictly require them.
- Audit trails exist, but no one is looking at them until after the mess.
Supply chain attacks thrive in that gap between “we trust this because it worked yesterday” and “we verified this because it matters today.”
There is also a governance lesson here. If your organization is subject to SOC 2, ISO 27001, privacy requirements, or customer security reviews, CI/CD pipelines are not an implementation detail. They are a control surface. They belong in your risk register, your vendor inventory, and your incident response playbooks.
The same logic applies whether you are a startup, a PE-backed roll-up, or a public company with auditors asking annoying but fair questions. Boring infrastructure is still infrastructure.
What To Do Now
If your workflows use tj-actions/changed-files, the immediate move is simple: identify where it runs, when it ran, and what it could access.
A practical response should include:
- Review any workflows that executed between March 14 and March 15, 2025.
- Check whether the action was pinned by tag or by commit SHA.
- Inspect logs for suspicious output in the changed-files step.
- Rotate any secrets that may have been exposed.
- Tighten permissions on GITHUB_TOKEN and other credentials used by workflow jobs.
- Re-evaluate whether third-party actions really need access to secrets at all.
That last point is where a lot of teams get religion.
If an action only needs to tell you which files changed, it probably does not need deployment credentials, write access to releases, or a token with broad repo privileges. If it does, you have more of a privilege design problem than a tooling problem.
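A companion check is whether a workflow declares an explicit permissions block at all; depending on repository settings, GITHUB_TOKEN can otherwise default to broader scopes than any step needs. A crude string-level sketch under that assumption:

```python
def has_explicit_permissions(workflow_text: str) -> bool:
    """Crude check: does any line declare a top-level or job-level permissions block?"""
    return any(line.strip().startswith("permissions:")
               for line in workflow_text.splitlines())

# Two hypothetical workflow snippets for illustration.
locked_down = "permissions:\n  contents: read\njobs: {}"
wide_open = "jobs:\n  build:\n    steps: []"
print(has_explicit_permissions(locked_down), has_explicit_permissions(wide_open))  # True False
```

This will not parse YAML or judge whether the declared scopes are minimal, but it surfaces the workflows where nobody ever made a permissions decision at all, and those are the ones to fix first.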
CI/CD pipeline security assessments and supply chain risk analysis stop being abstract consulting phrases here and start being useful work. The right questions are not “Does the pipeline run?” but “What does it trust, what can it read, and what happens if that trust is abused?” That is the difference between a clean build and a very expensive Tuesday.
A few hardening moves are worth calling out:
- Pin critical actions to immutable commit SHAs, not just tags.
- Limit secrets to the smallest set of jobs that truly need them.
- Use environment protections and approval gates for sensitive deployments.
- Treat action updates like dependency updates, because that is what they are.
- Monitor workflow behavior for unusual log output or outbound activity.
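The first of those moves, pinning to a commit SHA, can be mechanized: rewrite each tag reference to the commit it currently resolves to, keeping the tag as a trailing comment for readability. A sketch with a hand-supplied tag-to-SHA mapping; in practice you would resolve real tags yourself (for example via git ls-remote) before trusting any mapping:

```python
import re

def pin_to_sha(line: str, tag_to_sha: dict[str, str]) -> str:
    """Rewrite 'uses: owner/repo@tag' to 'uses: owner/repo@<sha> # tag'."""
    match = re.search(r"(uses:\s*)([\w.-]+/[\w.-]+)@(\S+)", line)
    if not match:
        return line
    prefix, action, ref = match.groups()
    sha = tag_to_sha.get(f"{action}@{ref}")
    if sha is None:
        return line  # no known SHA for this ref; leave the line untouched
    return line[:match.start()] + f"{prefix}{action}@{sha} # {ref}"

# Hypothetical action name and SHA, purely for illustration.
mapping = {"some-org/some-action@v2": "a" * 40}
print(pin_to_sha("      - uses: some-org/some-action@v2", mapping))
```

Keeping the original tag as a comment matters for maintenance: a bare 40-character SHA tells a reviewer nothing about which version it was meant to be.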
No, this will not make your pipeline glamorous. But security rarely arrives in a tuxedo. Usually it shows up with a checklist and some uncomfortable questions.
The Bigger Lesson
The deeper problem exposed by this attack is not that GitHub Actions are unsafe. It is that convenience has a habit of outrunning control.
Every time a team adds a third-party action, a package, a container image, or a transitive dependency, it extends the trust chain. Most of the time nothing goes wrong. And then one day a trusted component gets rewritten in place, and the organization learns that “works in CI” is not the same as “worthy of trust.”
That is why supply chain security should be part of the same conversation as privacy, compliance, and engineering risk. If the pipeline can leak credentials, then the pipeline can leak data. If it can leak data, it can create a privacy incident. If it can create a privacy incident, it belongs in the same control environment as everything else the board claims to care about.
Like most risks, this one does not go away when we ignore it.
The better answer is to design for the possibility that a tool you trust today may not deserve that trust tomorrow. Pin what can be pinned. Minimize what can be exposed. Review what runs. And when a utility action starts behaving like a Trojan horse with a README, do not overthink it.
Blow it up. Put it in the trash. Then rebuild the trust model like you actually expect someone to attack it.