Last week I joined a panel at the Cybersecurity Summit in Boston, and one exchange stayed with me after the room cleared. A security leader in the audience asked, essentially: “If AI is supposed to make us faster, why does my team feel more exposed than ever?”
It’s the right question, and the honest answer is that AI coding tools were evaluated on productivity, not security. Benchmark scores, developer satisfaction, GitHub stars. Nobody asked whether the dependencies those tools were pulling in were safe to run in production. That governance conversation got skipped, and organizations are now trying to have it retroactively, while AI is already embedded in their workflows.
There’s a cleaner way to approach this. Start clean, stay clean.
The Problem Isn’t AI. It’s What AI Is Pulling In
AI coding assistants do not generate code from nothing. They suggest patterns, and those patterns include open source software dependencies. A developer accepts a suggestion in a single keystroke. No provenance check. No policy review. No visibility into whether that package was actively maintained, recently compromised, or, in the most acute version of this problem, a package that does not exist at all.
That last one deserves more attention than it gets. AI models sometimes hallucinate package names. Attackers monitor for those hallucinated names, register them with malicious payloads, and wait for developers to install exactly what the AI suggested. This package hallucination attack, a close cousin of dependency confusion, is not a theoretical risk. It is an active attack pattern in 2026, and most enterprise security programs were not designed to catch it because it exploits trust at the point of ingestion, before any scanner runs.
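To make the ingestion-point check concrete, here is a minimal sketch, not ActiveState's implementation, of the kind of pre-install gate that catches the simplest version of this problem: an AI-suggested package that was never published at all. The package names are illustrative, and a lookalike an attacker has already registered would pass this check, which is why the provenance and age controls discussed below matter.

```python
# Minimal sketch: flag AI-suggested dependencies that were never published.
# This only catches the "does not exist" case; a hallucinated name an attacker
# has already registered will pass and needs provenance/age checks as well.
import requests

def published_on_pypi(name: str) -> bool:
    # PyPI's JSON API returns 404 for names that have never been published.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

suggestions = ["requests", "fastjson-utils"]  # second name is hypothetical
for name in suggestions:
    status = "published" if published_on_pypi(name) else "possible hallucination"
    print(f"{name}: {status}")
```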
IBM’s 2026 Cybersecurity Trends report confirms that AI is expanding the attack surface and enabling new threat categories at a pace traditional defenses were not built for. [IBM Technology 2026 Cybersecurity Trends Report] Meanwhile, on April 15, 2026, NIST formally acknowledged it can no longer enrich all CVEs. Submissions increased 263% between 2020 and 2025. The enumeration model that scanner-based security programs depend on is hitting a structural ceiling at exactly the moment AI is accelerating open source consumption.
Scanning is not governance. It never was. In an AI-driven development environment, it is even less sufficient than before.
The Accountability Gap Is Real, and It’s Growing
When a developer manually selects a dependency, someone made a decision. When an AI tool suggests it and the developer accepts, that moment disappears. Authorship is distributed. No one owns the decision. Security and legal teams are asking who is accountable for AI-introduced dependencies, and most organizations do not have an answer.
That accountability gap has a measurable cost. The industry average mean time to remediate a critical CVE is upwards of 54 days. During that window, the organization is exposed and the security leader is accountable. In an AI-assisted development environment, where dependency volume is growing at machine speed, that window does not shrink on its own.
CISOs may own the risk, but they often do not own the decisions that created it. AI coding tools get adopted by engineering. Procurement criteria get set without enough security input. The governance gap opens upstream, before security ever has visibility. By the time you are reviewing what AI produced, the dependency decisions have already been made.
Start Clean, Stay Clean
The organizations getting this right are not slowing down AI adoption. They are making sure governance happens at the same time as adoption, not after the exposure is already in place.
In practice, that means governing at the point of ingestion. Not reviewing outputs after the fact. Not scanning what already entered the environment. Controlling what AI tools can access in the first place.
ActiveState’s Curated Catalog is built for exactly this. When an AI coding assistant requests a dependency, it draws from a policy-governed catalog of open source software components built from source inside a SLSA Level 3 environment, not from a public registry. Every component ships with a signed attestation and a complete SBOM. The provenance chain covers what AI introduced, not just what developers explicitly chose.
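As a rough illustration of what governing at the point of ingestion looks like at the tooling level, here is a minimal sketch, with a hypothetical catalog endpoint, of pinning every pip invocation, whether a developer or an AI agent triggers it, to a single governed index so nothing silently resolves from the public registry. ActiveState's actual integration works through your existing artifact repositories; this only shows the shape of the control.

```python
# Minimal sketch: force pip (for humans and AI agents alike) to resolve
# exclusively from a governed index. The catalog URL is hypothetical.
import os
import subprocess

GOVERNED_INDEX = "https://catalog.example.internal/simple"

env = dict(os.environ)
env["PIP_INDEX_URL"] = GOVERNED_INDEX   # pip reads its --index-url from this variable
env.pop("PIP_EXTRA_INDEX_URL", None)    # drop any secondary indexes that could fall back

subprocess.run(["pip", "install", "requests"], env=env, check=True)
```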
Across 12 major language ecosystems, including Python, JavaScript, Java, C++, Go, Rust, and more, components are scored for real-world risk across the full dependency tree, including transitive dependencies that most scanners miss. Known malicious packages are blocked before they enter the catalog. Versions released within the last 14 days are excluded by default, eliminating the window most supply chain attacks exploit.
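The 14-day exclusion is worth seeing as logic rather than policy language. The sketch below, again illustrative rather than ActiveState's implementation, checks whether a specific version's newest upload is older than the quarantine window using PyPI's public metadata; a governed catalog enforces the same rule server-side before the component is ever admitted.

```python
# Minimal sketch of an age-based quarantine: refuse versions whose files were
# uploaded within the last 14 days, the window most supply chain attacks exploit.
from datetime import datetime, timedelta, timezone
import requests

QUARANTINE = timedelta(days=14)

def outside_quarantine(package: str, version: str) -> bool:
    meta = requests.get(
        f"https://pypi.org/pypi/{package}/{version}/json", timeout=10
    ).json()
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for f in meta["urls"]
    ]
    if not uploads:  # no published files: treat as not yet admissible
        return False
    newest = max(uploads)
    return datetime.now(timezone.utc) - newest > QUARANTINE

print(outside_quarantine("requests", "2.31.0"))  # True once the release is >14 days old
```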
When something does go wrong, the response looks different. Not days of archaeology through CI logs trying to answer “are we affected?” A provenance chain already on record. A contractual remediation SLA already running. Critical CVEs remediated within 5 business days, compared to the 54-day industry average. The catalog automatically rebuilds and redistributes when a community-approved fix is available. Your team reviews the outcome, not the process.
That is what start clean, stay clean looks like in practice. You are not just reducing the CVE count at a point in time. You are building a development environment where the default source is already governed, and where AI agents and human developers are drawing from the same trusted foundation.
The Governance Decision Cannot Wait
The hardest part of this conversation at the summit was not the technical questions. It was the number of security leaders who recognized the problem but assumed the governance conversation could wait until AI tooling was more mature.
It cannot. The regulatory environment is not waiting. EU Cyber Resilience Act Phase 1 reporting obligations take effect September 11, 2026. SSDF compliance is already a federal contracting condition. The SEC is asking organizations to demonstrate how software was governed from the point of origin, not just report what went wrong after the fact.
Establish a policy-governed catalog as the default source for open source software before AI tools are too deeply embedded to change the default without a fight. Add security as a first-order evaluation criterion in AI tool procurement. Treat open source software as a strategic decision with board-level accountability.
The opportunity and the threat in agentic AI are the same thing: autonomous systems that can pull dependencies, make changes, and deploy code without a human review step. The organizations building governance infrastructure now will have a significant advantage over those who wait.
Start clean. Stay clean → Check out the ActiveState Curated Catalog.
Frequently Asked Questions
What is the ActiveState Curated Catalog?
The ActiveState Curated Catalog is a policy-governed repository of open source software components built from source inside a SLSA Level 3 environment. When an AI coding assistant requests a dependency, it resolves from the catalog rather than a public registry. The developer workflow does not change. The governance does.
Do developers need to change their workflow or tooling?
No. The catalog integrates natively with the artifact repositories and package managers your teams already use. Developers and AI agents pull components the same way they always have. The difference is that what they receive has already cleared a security threshold before it reaches them.
Who handles remediation when a vulnerability is found?
ActiveState owns the remediation. When a community-approved fix is available, we rebuild the component from source and automatically redistribute it. Your team does not manage the rebuild cycle. Our contractual SLA for critical CVEs is 5 business days, compared to the 54-day industry average.
Which language ecosystems are covered?
ActiveState covers 12 major language ecosystems including Python, JavaScript, Java, C#, C++, Go, Rust, R, Perl, and more, with full transitive and OS-level dependency coverage.
What happens when an AI tool suggests a brand-new, malicious, or nonexistent package?
The catalog excludes versions released within the last 14 days and blocks any component flagged as malicious. When an AI tool suggests a package that does not exist in the catalog, it does not fall back to a public registry. Governance holds at the point of ingestion.


