The open source consumption model most enterprises operate on was built for a world that no longer exists. It assumed human-paced adoption, human-paced review, and a human who understood the license and its terms, whether Apache 2.0, MIT, or AGPL, before a dependency went into production. None of those assumptions survive contact with AI-assisted development. The industry has been papering over that gap for years, and the Cloud Security Alliance’s new Zero Trust Guidance for Building a Resilient Enterprise names it directly: software supply chain security is no longer a niche AppSec concern. It is a core organizational resilience problem. And resilience at the organizational level requires governance at the point of consumption.

Jonny Rivera, our Head of Product Management, flagged a Lobsters thread making the same argument from a different angle. The thread was a response to a post bluntly titled “No One Owes You Supply Chain Security,” and the top comment, from HashiCorp co-founder Mitchell Hashimoto, was the one that stuck: open source is not a supply chain. A supplier has a formal relationship with a downstream entity. Open source has none of that.

I watched this shift happen in real time at Cloud Foundry Foundation. Open source software moved from a limited-use tool to the core of every enterprise software stack within a handful of years. What did not keep pace was how organizations thought about it. Most still treat open source software as free enterprise-grade software when it is something different: community-built software, shared with a community, with no supplier standing behind it. The consumption model never evolved with the consumption.

The gap has been there. AI made it impossible to ignore.

The “as is” clause in any open source license is a feature, not a flaw. It is an honest description of how the open source model works. Maintainers are not suppliers, and open source projects are not product companies. They do not owe enterprise security teams a warranty, a patch timeline, or a compliance trail. For years, users of open source software accepted that contract: you get access to great software for free, and you own what you do with it.

Organizations built their own processes around that contract. They kept their versions current against upstream releases, scanned dependencies, triaged CVEs, and patched what they could. That was never a strategy. It was coping. And AI has made coping untenable.

AI coding assistants are pulling in more open source software than development teams ever did before. They do not evaluate whether a package is actively maintained, free of known vulnerabilities, or even a real package. They pull dependencies at machine speed from registries built on the “as is” premise. Sonatype identified more than 454,000 new malicious open source packages in 2025, bringing the cumulative total past 1.2 million across npm, PyPI, Maven Central, and NuGet. Ninety-nine percent of that volume landed on npm. 2025 was also the year npm produced its first self-replicating worm-style malware, Shai-Hulud. The attack surface is expanding at machine speed. The security teams accountable for it are not.

Scanning for what entered your environment is not a strategy. It is a record of what you missed.

Curate and Govern, Not Scan and Pray

The CSA guidance on dependencies is direct: do not implicitly trust third-party and open source components. Use private artifact repositories. Scan early. Pin to vetted versions. Treat deployment as a Zero Trust policy enforcement point.
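As a concrete sketch of what that looks like for an npm shop, the consumption point can be locked to a private registry with exact-version pinning. The registry URL below is a hypothetical placeholder, not a real endpoint; any private artifact repository works the same way.

```ini
# .npmrc — route every install through the organization's vetted registry
# (registry.internal.example is an illustrative placeholder)
registry=https://registry.internal.example/npm/
# block install-time scripts, a common malicious-package execution vector
ignore-scripts=true
# record exact versions instead of semver ranges, so installs are reproducible
save-exact=true
```

Pinning exact versions and committing the lockfile turns “which version are we running?” from a guess into an auditable fact, and the private registry becomes the Zero Trust enforcement point the guidance describes.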

That is the right architecture. The gap the guidance does not fill is where the vetted versions come from. For most organizations, the answer is still their own security team, manually researching CVEs, managing the rebuild cycle, and trying to keep pace with a problem that now moves faster than any internal team can scan. The answer to scan-and-pray is not more scanning. It is governance embedded at the point of consumption rather than bolted on after the fact: a single trusted source of open source components, continuously remediated, delivered through the tools developers already use.

That is the formal relationship Hashimoto is pointing to when he says someone has to charge for this. The license gives you the freedom to use the software. A commercial relationship gives you the warranty the license explicitly withholds.

What the Regulatory Environment Has Already Decided

Security leaders I speak with understand the “as is” clause. They know no one owes them anything. What they cannot do is hand a regulator an open source software license and call it due diligence.

The SEC’s breach notification requirements, the EU Cyber Resilience Act, and the personal liability that lands on security leaders when an incident hits the headlines: none of it cares about open source licensing norms. The CSA makes the point plainly. SBOM is not a compliance checkbox. It is a resilience enabler. Complete, machine-readable provenance for every component is the documented due diligence regulators now expect.
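What “machine-readable provenance” means in practice is an SBOM record for every component. A minimal CycloneDX-style fragment, with illustrative values, looks like this:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "lodash",
      "version": "4.17.21",
      "purl": "pkg:npm/lodash@4.17.21",
      "licenses": [{ "license": { "id": "MIT" } }]
    }
  ]
}
```

The point is not the format itself but that every component carries a verifiable identity (the purl) and a license record, generated automatically at build time rather than reconstructed after an incident.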

The industry average mean time to remediate critical CVEs still sits above 60 days. For the security leader accountable for what ships, that window is not just a security gap. It is a personal liability event waiting to happen.

The “as is” clause was never the problem. The problem is that the enterprise security model was built on the assumption that someone was handling what the license explicitly said they were not. AI made that assumption visible. The regulatory environment made it untenable. What comes next is not a tooling question. It is whether security leaders treat open source consumption as the strategic decision it has always been, or keep absorbing the consequences of a decision they never formally made.

Schedule an OSS Risk Assessment to find out where your organization stands.

Frequently Asked Questions

Is the “as is” clause in open source licenses the real problem?

The “as is” clause is an accurate description of how open source works: maintainers are not suppliers and owe no warranty or patch timeline. The real problem is that enterprise security models were built on the assumption that someone else was covering what the license explicitly said they were not. AI-assisted development has made that assumption impossible to ignore.

How does AI-assisted development change the risk?

AI coding assistants pull in dependencies at machine speed without evaluating whether a package is actively maintained, free of known vulnerabilities, or even legitimate. Sonatype identified more than 454,000 new malicious open source packages in 2025 alone, bringing the cumulative total past 1.2 million. The attack surface is now expanding faster than any security team can manually manage.

What should organizations do instead of scanning after the fact?

Establish a single trusted source of open source components that are built from source, continuously remediated, and delivered through existing developer tooling. Governance should be embedded at the point of consumption, not bolted on after a breach has already occurred.

What do regulators actually expect as due diligence?

The SEC’s breach notification rules, the EU Cyber Resilience Act, and growing personal liability for security leaders all require documented due diligence that an open source license alone cannot provide. Regulators expect complete, machine-readable SBOMs as evidence of provenance and risk management, not just a reference to upstream licensing terms.

Why is scanning not enough on its own?

Scanning tells you what already entered your environment; it is a record of what you missed, not a prevention strategy. With the industry’s mean time to remediate critical CVEs sitting above 60 days, reactive scanning creates a liability window. The solution is governance upstream: vetted, continuously remediated components with contractual SLAs, enforced before code ever reaches production.