"Moving to E5 has been really good from a security point of view... Now we can get a holistic view of what’s going on, which helps us to make changes and recommendations for future plans."
IT Service Manager
Ian Harkess
Trusted by industry leaders
Kickstart Your FastTrack Journey
Fill out the short form below to express your interest in our FastTrack programme, and we’ll be in touch soon.
Please note: A minimum of 150 enterprise licenses is required for FastTrack eligibility.
“We needed to find solutions to a variety of issues whilst being a complex business, operating in a 24/7 environment. Stripe OLT listened and understood immediately the challenges we faced.”
IT Operations Manager
Simon Darley
Trusted by industry leaders
Let's Talk
Call us on one of the numbers below, we cover the whole of the UK, so call the nearest office.
“We needed to find solutions to a variety of issues whilst being a complex business, operating in a 24/7 environment. Stripe OLT listened and understood immediately the challenges we faced.”
The Slopsquatting Trap: How AI Mistakes Are Weaponized by Hackers
Published: May 22, 2025
Expert: Harry Swain
Role: SOC Senior
Specialises in: Security Operations
What you will learn:
Discover how the rise of AI coding tools has created a new threat known as slopsquatting, and the practical steps developers and organisations can take to defend against this emerging supply chain threat.
“Trusting AI generated code blindly isn’t just risky, it’s an open door to your software supply chain, and by extension, your users.”
What is Slopsquatting?
Slopsquatting is a newly emerging attack vector gaining traction alongside the rise of AI coding tools like GitHub Copilot and ChatGPT. In simple terms, it occurs when attackers exploit hallucinated package names – that is, fake or non-existent libraries suggested by AI tools – to deliver malware and compromise the software supply chain.
This technique is a twist on typosquatting. But instead of relying on human typing errors, slopsquatting leverages the mistakes made by AI. The term itself combines “slop” (a derogatory reference to low-quality AI output) with “squatting” (the act of occupying a name or resource for malicious use).
Slopsquatting was recently analysed in depth by researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma, who reviewed how AI hallucinations are exploited to inject malicious packages into software supply chains (Spracklen et al., 2025). This intel draws on that research to provide a clear, practical summary of the threat and how to defend against it – keep reading for more.
How Slopsquatting Works
To understand how attackers take advantage of this AI-driven threat, it’s helpful to look at how slopsquatting typically plays out in the wild…
AI Hallucination: When developers use AI coding assistants such as GitHub Copilot, ChatGPT, or open-source LLMs like CodeLlama, these tools may recommend fictitious package names that sound plausible but don’t exist in repositories like PyPI (Python Package Index) or npm (Node Package Manager).
Malicious Registration: Attackers monitor these hallucinated package names, register them in public package registries, and upload malicious code under those names.
Exploitation: Developers, trusting the AI’s suggestions, unknowingly install these malicious packages, which can steal data, deploy malware, or compromise software supply chains.
Image Reference: We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
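To make the flow above concrete, here is an illustrative sketch of what a hallucinated suggestion can look like. The package name flask-jwt-guard is invented purely for this example and is not a real recommendation; the point is that an AI assistant can confidently recommend an install command for a name that does not exist on PyPI until an attacker registers it.

```python
# Illustrative only: "flask_jwt_guard" is a hypothetical, hallucinated package
# name invented for this example. The snippet will not run unless someone
# (for instance, an attacker) registers that name on PyPI.
#
# Step 1 (hallucination): an assistant suggests "pip install flask-jwt-guard"
# Step 3 (exploitation): if an attacker registered the name first, the import
# below executes the attacker's code on the developer's machine or build server.

from flask import Flask                 # real, well-known package
from flask_jwt_guard import JWTGuard    # hallucinated dependency

app = Flask(__name__)
guard = JWTGuard(app, secret_key="change-me")  # plausible-looking API that never existed
```

If the name was never registered, the install simply fails and the developer notices; if an attacker has registered it first, the malicious code is pulled straight into the project.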
Key Characteristics of Slopsquatting
The 2025 research from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma sheds light on why slopsquatting is so concerning, and these core characteristics help explain its appeal to attackers and the risks to development teams.
Repeatability: Studies show that 43% of hallucinated package names are consistently repeated across multiple AI runs, making them predictable targets for attackers.
Scale: The 2025 research paper found that 20% of 576,000 AI-generated Python and JavaScript code samples included non-existent packages, with open-source LLMs hallucinating at a higher rate (21.7%) than commercial models (5.2%).
Plausibility: Hallucinated names often mimic legitimate packages, increasing the likelihood of developer trust.
Recent Slopsquatting Cases and Trends
While no large-scale slopsquatting attacks have been widely reported as of April 2025, the threat is gaining attention:
Research Evidence: The March 2025 academic paper mentioned above analysed 16 AI coding models and identified over 200,000 unique hallucinated package names, highlighting the potential attack surface.
Industry Warnings: Security firms monitoring package squatting have noted an increase in related incidents over the past year, and this is only set to increase with further adoption of AI coding tools.
Related Incidents: In January 2025, Socket researchers identified a malicious npm package (@async-mutex/mutex) that typosquatted the legitimate async-mutex, showing how AI can amplify existing squatting risks.
Image Reference: We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
Slopsquatting Mitigation Strategies
Fortunately, there are practical steps developers and organisations can take to reduce the risk. Below are five key mitigation strategies informed by current research and best practices.
Verify Packages: Always check package names against official repositories and review their source code before installation (a minimal pre-install check is sketched after this list).
Use Security Tools: Employ dedicated tools to identify and mitigate malicious or suspicious open-source dependencies. Solutions like Endor Labs offer visibility into risky packages and help detect malicious behaviours early in the development lifecycle. Checkmarx provides runtime-aware analysis to flag dependencies with hidden threats, while Socket continuously monitors packages for indicators of compromise – such as unexpected network activity or obfuscated code – to prevent supply chain attacks like slopsquatting.
Developer Education: Train developers to recognize AI hallucinations and avoid blind trust in AI-generated code.
Repository Monitoring: Package registries should proactively blacklist commonly hallucinated names to prevent registration.
Lower AI Temperature: Reducing the “temperature” setting in LLMs can decrease randomness and lower hallucination rates.
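As a minimal sketch of the "Verify Packages" step, assuming the requests library is available and that PyPI's public JSON API (which returns 404 for unknown project names) is reachable from your environment, a pre-install existence check might look like this:

```python
# Minimal sketch: check whether a package name is actually registered on PyPI
# before installing it. Assumes the `requests` library and PyPI's public JSON
# API at https://pypi.org/pypi/<name>/json, which returns 404 for unknown names.
import sys
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    for name in sys.argv[1:]:
        if package_exists_on_pypi(name):
            print(f"{name}: found on PyPI - still review its source, age and maintainers")
        else:
            print(f"{name}: NOT found on PyPI - possible hallucinated name, do not install")
```

Existence alone is not proof of safety: an attacker may already have registered a hallucinated name, so treat a check like this as a first filter before the source-code and maintainer review described above.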
The Main Takeaway
Slopsquatting may seem niche, but it is particularly dangerous because it exploits the growing reliance on AI coding tools in fast-paced development environments.
With “vibe coding”, where developers describe tasks and let AI generate the code, the risk of installing unverified packages increases. If a widely recommended hallucinated package is weaponised, it could lead to widespread supply chain attacks, potentially compromising financial institutions, critical infrastructure, or sensitive data.
As AI adoption accelerates, so does the attack surface. Now is the time for development teams, CISOs, and security leads to get ahead of the risk.
Looking for cyber security support for your organisation? Book a free discovery session with us, our experts are here to help – no jargon, just clear, strategic guidance.