Playing With Fire
It’s hard not to recall Greek mythology when discussing generative AI. Just as Prometheus stole fire from the gods and unlocked humanity’s potential, generative AI has transformed what it means to build software. (Perhaps it served as inspiration for Jeff Bezos’s recently announced AI project.)
In place of fire, generative AI has given us “vibe coding,” placing the power of software development into the hands of anyone capable of typing a simple prompt. Tools such as Copilot, ChatGPT, and Claude Code are now commonplace, and for many developers, indispensable for productivity and efficiency. In a 2024 survey from GitHub, 97 percent of developers reported having used AI coding tools at work. But this newfound productivity hasn’t come without concern. More people writing code means more software and, in turn, more security issues for someone to address. And it doesn’t stop there.
Just as Prometheus’s act of defiance earned him an eternity of agony, today’s developers and DevOps teams face their own daunting prospect: AI-generated code is introducing vulnerabilities into production systems at unprecedented speed and scale.
In this article we’ll explore generative AI’s impact on the software supply chain, and outline a plan for defense.
The Growing Risk of AI-Generated Code Dependencies
Since generative AI entered the mainstream, its advantages have always been accompanied by controversy. Although copyright issues often dominate the discussion around code generation, recent research highlights a more urgent problem: the software supply chain is becoming increasingly exposed.
New Attack Vectors
Many developers are now familiar with the term AI “slop,” a label for content that is low quality, generic, or inaccurate. While the term initially applied to images and videos, it now extends to software dependencies.
A comprehensive academic study published in 2025 by researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma analyzed 576,000 code samples created by 16 different LLMs. Nearly 20 percent of the recommended packages did not exist in any public registry. These hallucinated dependencies have opened the door to an emerging threat known as “slopsquatting,” in which malicious actors register the fictitious package names that AI systems commonly suggest. Even more concerning, 43 percent of these hallucinated packages appeared repeatedly across multiple prompts, which makes them predictable targets for attackers who track LLM behavior.
Vulnerable By Default
The risks extend beyond hallucinations. Multiple studies show that LLMs frequently generate insecure code. Veracode’s analysis of more than 100 LLMs found security flaws in 45 percent of the code they produced. Research from Endor Labs reached a similar conclusion, finding that only one in five dependency versions recommended by AI coding assistants were both safe and free from hallucination.
Tooling Vulnerabilities
The tools that support code generation bring their own issues as well. In March 2025, Pillar Security disclosed the “Rules File Backdoor,” a vulnerability affecting GitHub Copilot and Cursor. By inserting hidden Unicode characters into configuration files, attackers can influence these assistants to produce malicious output that evades typical review processes. Instead of exploiting a flaw in a specific application, this technique manipulates the AI itself, turning commonly used tools into vectors for harmful code.
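Defending against this class of attack can start with something simple: refuse to let invisible characters into rules and configuration files at all. The sketch below is a minimal illustration, not a feature of Copilot, Cursor, or any particular product. It flags Unicode “format” characters (the category that covers zero-width spaces, joiners, the byte order mark, and bidirectional overrides) in any file passed to it; the rule-file paths in the usage comment are just examples.

```python
import sys
import unicodedata
from pathlib import Path


def hidden_chars(path: Path) -> list[tuple[int, int, str]]:
    """Return (line, column, character name) for invisible 'format' characters.

    Unicode category 'Cf' covers zero-width spaces and joiners, the byte
    order mark, and bidirectional overrides: the characters typically abused
    to smuggle instructions past human reviewers.
    """
    findings = []
    text = path.read_text(encoding="utf-8", errors="replace")
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if unicodedata.category(ch) == "Cf":
                findings.append((lineno, col, unicodedata.name(ch, repr(ch))))
    return findings


if __name__ == "__main__":
    # Example usage: python scan_hidden.py .cursorrules .github/copilot-instructions.md
    for arg in sys.argv[1:]:
        for lineno, col, name in hidden_chars(Path(arg)):
            print(f"{arg}:{lineno}:{col}: hidden character {name}")
```

A check like this can run as a pre-commit hook or CI step, so that a rules file containing invisible instructions never reaches an assistant or a reviewer in the first place.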
Large-scale cyberattacks have also now been executed using AI coding tools. Anthropic recently reported that Claude Code was manipulated by a Chinese state-sponsored group to target 30 organizations. Attackers had Claude bypass its own safety guardrails, perform reconnaissance, identify vulnerabilities, and write exploit code.
More traditional threats continue to surface too. For example, malicious npm packages such as “@chatgptclaude_club/claude-code” were uploaded in an attempt to impersonate the official Anthropic CLI tool, showing how supply chain attacks evolve in parallel with the tools they aim to compromise.
So far, no publicly confirmed breach has been directly linked to hallucinated dependencies or AI-generated security flaws. Even so, the conditions for such an event have already taken shape, and development teams are paying attention. According to Stack Overflow’s 2025 Developer Survey, 81 percent of respondents expressed concerns about security when using AI.
These concerns are amplified by the ways AI code generation tools introduce new weaknesses into development workflows.
How LLMs Amplify Supply Chain Risk
The security risks introduced by AI code generation fall into several interconnected categories that compound each other’s impact.
Training On Classic Vulnerability Patterns
LLMs do not write code by following secure development principles. They generate output based on the patterns found in their training data, which includes both safe and unsafe examples. Because they have no awareness of application context, deployment conditions, or security requirements, they often produce code that functions correctly but lacks the protections needed for real-world use.
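A purely illustrative example of the pattern, not taken from any particular model’s output: both functions below “work,” but the first interpolates user input directly into a SQL statement, while the second passes it as a bound parameter, which is the protection a security-aware reviewer would expect.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Functions correctly for normal input, but is open to SQL injection:
    # the username is interpolated directly into the statement.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Same behavior, with the input passed as a bound parameter instead.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```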
Outdated, Vulnerable Dependencies
LLMs also inherit the limitations of their training cutoff dates. Every codebase eventually accumulates vulnerabilities, and many of these issues are discovered or patched after a model has already been trained. As a result, AI assistants may recommend library versions that contain known CVEs. Even simple prompts can result in broad dependency trees, which means the attack surface of an application can grow quickly without the developer realizing it.
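One lightweight countermeasure is to check every pinned dependency, whether typed by a human or suggested by a model, against a public vulnerability database before it lands in a lockfile. The sketch below illustrates the idea using the OSV.dev query API as publicly documented (treat the endpoint and response shape as details to verify for yourself); the old Flask release is used only because it has published advisories.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def advisories_for(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return advisory IDs (CVE/GHSA/etc.) that OSV.dev lists for one pinned package."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        body = json.load(response)
    # An empty "vulns" list means no known advisories for that exact version.
    return [vuln["id"] for vuln in body.get("vulns", [])]


if __name__ == "__main__":
    # An intentionally old release: expect several published advisories.
    print(advisories_for("flask", "0.12"))
```

Existing tools such as pip-audit and osv-scanner automate this kind of lookup; the point is that the check has to run before AI-suggested versions are adopted, not after.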
Slopsquatting Attacks
Slopsquatting builds on the tendency of AI systems to hallucinate package names. Once attackers identify which fictitious dependencies appear frequently in generated code, they can register packages under those same names in public repositories. This gives them an opportunity to deliver malicious payloads directly into projects that trust AI-generated recommendations.
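A first line of defense is equally simple: verify that every AI-suggested package name actually resolves on the registry before anything is installed. The hypothetical pre-install check below queries PyPI’s public JSON endpoint for each name. Note the limitation: a successful lookup only proves the name exists, not that it is trustworthy, since a slopsquatter may already have claimed it.

```python
import sys
import urllib.error
import urllib.request


def exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # No such package: a likely hallucination.
        raise


if __name__ == "__main__":
    # Usage: python check_names.py <package> [<package> ...]
    for pkg in sys.argv[1:]:
        verdict = "exists" if exists_on_pypi(pkg) else "NOT FOUND (possible hallucination)"
        print(f"{pkg}: {verdict}")
```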
Bypassing Security Protocols
Perhaps most critically, AI tools make it easier for both developers and non-developers to work around established security controls. With unrestricted access to enormous amounts of open source software, these tools can generate code that introduces unvetted components into development and production environments at unprecedented speed.
Traditional security processes are already struggling to keep pace with this level of automation and scale.
The Reactive Approach Isn’t Working
The traditional strategy of scanning code after it is written and fixing issues as they appear no longer fits the speed of AI-assisted development. As developers generate code faster than ever, security review processes often become bottlenecks.
The reactive model struggles for several reasons. First, vulnerability scanners produce long lists of issues that teams must investigate and resolve, yet they do nothing to prevent insecure components from entering the codebase in the first place. Second, developers relying on “vibe coding” workflows may never type or verify package names manually, choosing to trust AI suggestions without proper validation. Third, even when vulnerabilities are identified, remediation is slow. By the time a fix is applied, the vulnerable component may have spread across multiple projects and environments.
All of this raises an important question: Is there a practical way to govern which open source packages generative tools are permitted to use? And even more importantly, can secure open source be delivered directly into the AI code generation process?
How ActiveState Can Help
The solution to AI-generated supply chain risk is not more scanning or better policies alone. It requires fundamentally changing what developers can access in the first place. ActiveState addresses this challenge through a curated catalog of over 40 million open source components, all rebuilt from source in a secure, hermetic build environment.
Rather than allowing developers (or AI tools) to pull arbitrary packages from public repositories, ActiveState’s curated catalog acts as a secure gateway for all open source consumption. The curated catalog can be broken down into three key components.
Your One Stop For Secure, Trusted Open Source
It starts with our mission to provide the industry’s largest repository of secure open source. While many teams tend to think of Perl and Python when they think of ActiveState, in the past two years we’ve scaled our support across a variety of languages and ecosystems, including Java, Node, Go, and many more.
Every component in our catalog is verified, scanned, and rebuilt from source code with complete provenance records, rather than pulled as a prebuilt binary from a potentially compromised or hallucinated source. This ensures that every artifact has a verifiable chain of custody, satisfying high-security standards and compliance requirements without requiring organizations to build out complex infrastructure themselves.
Your developers get the packages they actually want, security achieves rapid compliance, and teams eliminate the risk of hallucinated packages, typosquatting, and dependency confusion attacks.
Secure Open Source, Wherever Your Team Operates
With secure packages in place, the next crucial step is meeting development teams where they are already working. Using a curated catalog allows teams to skip yet another painful migration or adoption of new tooling.
Instead, secure open source is delivered directly into the tools your team is likely already using: CI/CD pipelines, artifact repositories, container registries, and of course, AI coding assistants. By keeping up with the evolving industry standards for AI integrations, ActiveState puts the necessary guardrails in place around GenAI code assistants. When an LLM suggests a package, developers pull from ActiveState’s curated catalog instead of unvetted public repositories.
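Mechanically, the redirection is straightforward in most package managers: point the tool’s index or registry setting at the curated endpoint so that humans and AI assistants alike resolve packages through it. The sketch below shows the general idea using pip’s standard --index-url option; the index URL is a placeholder, not a real ActiveState endpoint, and the exact configuration for your catalog may differ.

```python
import os
import subprocess
import sys

# Placeholder URL: substitute your organization's curated index endpoint.
CURATED_INDEX = os.environ.get("CURATED_INDEX_URL", "https://pkgs.example.internal/simple/")


def install_from_catalog(package: str) -> None:
    """Install a package through the curated index instead of the public default."""
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--index-url", CURATED_INDEX, package],
        check=True,
    )


if __name__ == "__main__":
    install_from_catalog("requests")
```

The same redirection can be made permanent through pip’s index-url configuration or npm’s registry setting, so it doesn’t depend on per-install discipline.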
Continually Monitored and Updated
Finally, none of this would matter if it weren’t kept up to date and vulnerability-free. We continuously monitor and update all open source packages in the ActiveState catalog to ensure the latest patches have been applied.
If you are consuming a component and an update is available, that component will be rebuilt from source code using our build infrastructure and published back to your catalog to make the upgrade seamless.
Organizations using code generation tools face a choice: continue patching vulnerabilities after they’ve been introduced, or secure the foundation from which all code is built. ActiveState enables a proactive approach by ensuring that every open source component entering your development environment has already been vetted, secured, and built from trusted source code.
Taking Control of Your Supply Chain
Like Prometheus’s gift of fire, AI code generation is not going away. The productivity benefits are significant, and the competitive pressure to adopt these tools is strong. But the security risks are equally real, and organizations that ignore them do so at their peril.
Just as fire required humanity to develop new tools and practices to harness its power safely, AI-assisted development demands a new approach to security. By establishing a curated catalog as the foundation of your software supply chain, you close the door on slopsquatting, dependency confusion, and the entire category of attacks that exploit the gap between AI recommendations and secure reality. Your security team stops playing whack-a-mole with vulnerabilities and starts operating from a position of control.
The fire of AI-assisted coding is here to stay. The question is whether you will let it burn unchecked or channel it through secure infrastructure that protects your organization.
Get Started
Ready to secure your AI-assisted development workflow? Contact the ActiveState team to discuss how a curated catalog approach can protect your software supply chain.
Using Containers? Explore ActiveState’s Secure Container Catalog to browse our low-to-no CVE images for popular language runtimes and applications.