The Message Is Not Subtle
CISA and the FBI have done something that would have sounded dramatic a few years ago and now reads like basic operational guidance: they are urging software manufacturers to publish memory safety roadmaps and move new development toward memory-safe languages like Rust, Go, and Java rather than treating C and C++ as the default forever.
That is not a minor tweak to the security conversation. It is a very public signal that memory safety is no longer a niche compiler-nerd concern or a language-lawyer hobbyhorse. It is now a security posture issue.
And honestly, it should have been obvious sooner.
For decades, software teams have treated memory safety defects as the cost of doing business. Buffer overflows, use-after-free bugs, out-of-bounds reads and writes, dangling pointers, and the rest of the usual suspects were handled the way a lot of organizations handle recurring pain: by building processes around the pain instead of removing the pain. Patch faster. Scan harder. Buy another tool. Hire another consultant. Repeat until morale improves.
That strategy has a shelf life.
The agencies are now saying what security teams, large platform vendors, and a growing number of engineers have been saying for years: if you keep choosing memory-unsafe defaults, you keep inheriting memory-unsafe outcomes.
Why Memory Safety Keeps Coming Back
The headline here is not just that CISA and the FBI are talking about memory safety. It is that they are talking about it in a way that moves the issue from “best practice” to “manufacturing responsibility.”
The CISA guidance says software manufacturers should create and publish roadmaps showing how they will reduce and eventually eliminate memory-unsafe code in their products. That matters because roadmaps create accountability. A roadmap is not a slogan. It is a statement of intent, sequencing, ownership, and tradeoffs.
That is a big shift from the old model, where software vendors could shrug and say, “Yes, C and C++ are risky, but this is just how the industry works.” That excuse is wearing thin. Public agencies are no longer treating memory safety as an abstract preference. They are treating it as a supply-chain and national-security concern.
Why? Because memory safety bugs are not random little annoyances. They are the kind of defects that show up in the most painful places: network-facing services, parsers, authentication layers, file handlers, and privileged code paths. The code that gets exploited is often the code that exists to do something important, fast, and at scale. Which is another way of saying: exactly the places where you do not want sloppy memory handling.
The irony is almost too neat. The software industry spent years optimizing for performance, control, and low-level precision. Then it discovered that those same characteristics can also make it easier to shoot yourself in the foot. Repeatedly. In production. At scale.
What the Roadmap Actually Means
A memory safety roadmap is not just “we will use Rust more.” If it is done correctly, it is an operating plan for changing the software development life cycle.
At minimum, it should answer questions like:
- Which products or components are still memory-unsafe?
- Which of those are customer-facing, network-facing, or otherwise high risk?
- Which new code will be greenfield in a memory-safe language?
- What will be refactored, wrapped, replaced, or retired?
- What training, tooling, and governance will support the transition?
- How will leadership know whether progress is real?
That is why the guidance is interesting to both engineers and executives. Engineers care because this affects architecture, code generation, build systems, and dependencies. Executives care because this changes risk, liability, customer trust, and cost.
The CISA framing also emphasizes something important: ownership. Not “we bought a scanner.” Not “we have a policy.” Ownership. That means manufacturers are expected to understand their security outcomes, not just outsource their conscience to a handful of tools and hope for the best.
That is the right instinct.
Security-by-design only works when the organization stops treating secure outcomes as someone else’s problem. If the language choice is part of the problem, then the language choice has to be part of the answer.
Is C++ on Borrowed Time?
Short answer: for new code, maybe more than a little.
Long answer: no, C and C++ are not disappearing tomorrow. There is too much infrastructure, too much embedded software, too much systems code, and too much legacy IP for that. Anyone promising an instant rewrite is either selling something or has never run a serious software portfolio.
But if you are asking whether C++ can remain the unquestioned default for new systems development, the answer is getting harder to defend.
The agencies are not saying every existing C or C++ codebase should be thrown into a bonfire. They are saying the industry needs a credible transition plan, and that future software should not keep recreating the same class of errors because “that is what we have always done.”
That is the key distinction. This is not a demand for ideological purity. It is a demand for better risk management.
And from a practical standpoint, the market already knows how to respond to that kind of signal. Procurement teams start asking questions. Investors start asking questions. Security questionnaires get sharper. Customers begin asking whether the vendor has a modernization plan or just a stack of excuses.
So is C++ on borrowed time? For greenfield work in security-sensitive systems, the answer is increasingly yes. Not because C++ is “bad” in some absolute sense, but because the cost of its failure modes is now too well understood to ignore.
What Software Teams Should Do Now
If you build software, the right reaction is not panic. It is prioritization.
Start by inventorying where memory-unsafe code still exists. Then separate the work into categories:
- New development that can be done in Rust, Go, Java, or another memory-safe language
- High-risk components that should be migrated first
- Low-level components that may need to remain in C or C++ for now, but with tighter hardening and governance
- Interfaces where memory-unsafe and memory-safe code must coexist, which is where many teams will get their first unpleasant surprises
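A common pattern at that last boundary is to confine `unsafe` to a thin, audited wrapper so the rest of the codebase never touches raw pointers directly. A minimal Rust sketch, using the platform C library's `strlen` as a stand-in for legacy C code you cannot rewrite yet (the wrapper name `c_side_len` is invented for illustration):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Stand-in for the legacy C side of the boundary. `strlen` from the
// platform C library plays the role of a function we cannot rewrite yet.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// The safe wrapper is the whole point: `unsafe` is confined to one
// reviewable block, and the wrapper upholds the invariants the C side
// needs (a valid, NUL-terminated pointer that outlives the call).
fn c_side_len(s: &str) -> usize {
    let c = CString::new(s).expect("input must not contain NUL bytes");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_side_len("hello"), 5);
    assert_eq!(c_side_len(""), 0);
    println!("ok");
}
```

The unpleasant surprises tend to come from invariants like these: who owns the buffer, how long it lives, and what the C side assumes about termination. Wrappers make those assumptions explicit in one place instead of scattering them across the codebase.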
Compliance-by-design development matters here too. If your engineering organization is modernizing the stack, the conversation should not stop at syntax. It should include secure SDLC controls, dependency review, SBOM analysis, code review practices, threat modeling, and a realistic plan for the ugly edges of migration.
That is the part that often gets missed. Language choice is important, but language choice alone does not save you from bad architecture, sloppy boundaries, or poor release discipline. You can write insecure Rust code if you try hard enough. You can also write highly disciplined C++ in a constrained environment. The point is not perfection. The point is reducing the probability and blast radius of failure.
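To make that concrete, here is a hypothetical fragment that is perfectly memory-safe Rust and still insecure: a classic path traversal, which no borrow checker will catch (the `uploads/` layout and the function name are invented for illustration):

```rust
use std::fs;

// Memory-safe Rust, insecure logic: the user-supplied name is
// interpolated into the path with no sanitization, so a name like
// "../secret.txt" escapes the uploads/ directory entirely.
fn read_user_file(name: &str) -> std::io::Result<String> {
    fs::read_to_string(format!("uploads/{}", name))
}

fn main() -> std::io::Result<()> {
    // Simulate the deployment: an uploads/ dir and a file outside it.
    fs::create_dir_all("uploads")?;
    fs::write("secret.txt", "not for users")?;

    // Path traversal: a crafted name reads a file outside uploads/.
    let leaked = read_user_file("../secret.txt")?;
    assert_eq!(leaked, "not for users");
    println!("leaked: {}", leaked);
    Ok(())
}
```

Memory safety removes one class of failure; it does nothing about trust boundaries, input validation, or release discipline.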
For buyers, investors, and boards, this belongs in technology diligence now. If a company says its core platform is built on C/C++, the right response is not “cool.” The next question is “what is the memory safety roadmap, where is the risk concentrated, and who owns the transition?”
That is especially true in deals where software quality, product security, and AI-enabled services are material to valuation. Memory safety is not just a developer concern. It is a product risk, a diligence issue, and in many cases a valuation input.
The Bigger Signal
The bigger story here is not that CISA and the FBI like Rust. The bigger story is that public-sector security thinking is catching up with what the engineering world already knows: some classes of bugs are much cheaper to prevent than to manage forever.
That is why this guidance matters. It changes the expected value of doing nothing. It makes “we’ll keep patching it” look less like a strategy and more like a confession.
Like most risks, this one does not go away when we ignore it. The difference now is that the agencies are saying the quiet part out loud: if you are still building new software the old way, you are not preserving optionality. You are accumulating debt.
And eventually, the bill comes due.