"Moving to E5 has been really good from a security point of view... Now we can get a holistic view of whatโs going on, which helps us to make changes and recommendations for future plans."
IT Service Manager
Ian Harkess
Trusted by industry leaders
Kickstart Your FastTrack Journey
Fill out the short form below to express your interest in our FastTrack programme, and weโll be in touch soon.
Please note: A minimum of 150 enterprise licenses is required for FastTrack eligibility.
โWe needed to find solutions to a variety of issues whilst being a complex business, operating in a 24/7 environment. Stripe OLT listened and understood immediately the challenges we faced.โ
IT Operations Manager
Simon Darley
Trusted by industry leaders
Let's Talk
Call us on one of the numbers below, we cover the whole of the UK, so call the nearest office.
โWe needed to find solutions to a variety of issues whilst being a complex business, operating in a 24/7 environment. Stripe OLT listened and understood immediately the challenges we faced.โ
In this intel we set out to understand the changing landscape of trust in a digital world where AI-generated code is permeating the software supply chain at scale, and AI-enabled phishing is making adversarial infrastructure harder than ever to distinguish from the real thing.
"Trust must now be continuously verified."
The trust infrastructure is breaking
In 2025, the Thales/Imperva "Bad Bot Report" noted that 51% of all internet traffic is now machine-generated. The digital environment your organisation trusts is increasingly artificial. Two of the most consequential fronts in this emergent landscape are phishing and software.
Since the launch of ChatGPT, phishing messages have increased 4,151%, a figure from SlashNext's 2024 State of Phishing Report. That trend shows little sign of slowing down, with Cofense reporting malicious emails arriving at a rate of one every 19 seconds as of early 2026, more than doubling the pace seen in 2024.
AI coding assistants are now used by 97% of developers according to GitHub's 2024 survey, with Copilot alone generating 46% of all code for its 20 million active users. But this rapid adoption comes at a steep cost, as Veracode's 2025 report found that AI-generated code introduced security vulnerabilities in 45% of development tasks tested across more than 100 LLMs.
Caption: "The scale of AI adoption and AI-enabled threat activity is reshaping the foundations of digital trust."
How is AI-generated code undermining security?
The scale of the problem
The vulnerability problem is not confined to a single tool, model or approach. Georgetown University's CSET found that nearly half of LLM-generated code snippets had bugs that could lead to exploitation, and Veracode research confirmed this holds across more than 100 different LLMs, with both larger and smaller models vulnerable to the same pitfalls. The issue, as Veracode's CTO put it, is systemic.
In a study from Apiiro, which closely monitored Fortune 50 companies using its security analysis platform, AI-generated code was observed introducing more than 10,000 new security findings per month by mid-2025, with privilege escalation paths up 322%.
The cognitive disconnect
The issue isn't just the mechanics of producing AI-generated code; we all know a tool is only as good as the one wielding it. Many of us can use AI to generate safe and secure code. The issue comes when we trust a tool to be correct without good reason and robust safeguards are not in place.
The 2022 Stanford study "Do Users Write More Insecure Code with AI Assistants?" by Perry et al. found that developers using AI assistants produced more vulnerabilities and believed their code was more secure than developers who coded by hand: a dangerous false-confidence effect.
The startup EnrichLead became a cautionary tale in 2025 after its founder boasted online that the platform had "zero hand-written code", having been built entirely with Cursor AI. Within days of launch, the platform was exposed: API keys dangling in the frontend, no authentication, and paywall bypasses possible.
Moltbook, a social networking site for AI agents generated entirely via vibe coding, launched on January 26 with a misconfigured database, which security firm Wiz exposed shortly afterwards; 1.5 million authentication tokens and 35,000 email addresses leaked as a result.
We can chalk these types of purely AI-coded platform mishaps up to individuals getting ahead of themselves, but it's important to revisit the earlier Stanford study, which noted that developers trusted their AI-generated code to a greater degree than developers who had produced their code the old-fashioned way.
Exposed API keys aside, AI-generated code is managing to cause issues in more insidious ways, too. "Slopsquatting" is a newly coined software supply chain attack vector: an attacker registers malicious packages under names that AI coding assistants are likely to hallucinate, so that developers install them.
It's important to note that slopsquatting is an emerging rather than established attack vector. Lasso Security researcher Bar Lanyado proved the path can be abused by registering the hallucinated Python package name "huggingface-cli" after observing models suggest it; that harmless test package reportedly received more than 30,000 authentic downloads in three months, after the hallucinated install command appeared in public materials such as an Alibaba README.
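One lightweight safeguard is to vet requested package names before installation against a list of dependencies you already trust, flagging anything unknown and hinting at near-misses (hallucinated names are often only a character or two off a real package). The sketch below assumes a hypothetical internal allowlist, not any official registry API:

```python
# Sketch of a pre-install dependency gate against slopsquatting.
# ALLOWED stands in for an internal, vetted package allowlist --
# a hypothetical example for illustration only.
import difflib

ALLOWED = {"requests", "numpy", "pandas", "huggingface-hub"}

def vet_packages(requested):
    """Split requested package names into approved and flagged lists.

    Unknown names are flagged; a close match against the allowlist is
    reported as a hint, since hallucinated names are often near-misses
    of real packages (e.g. 'huggingface-cli' vs 'huggingface-hub').
    """
    approved, flagged = [], []
    for name in requested:
        if name in ALLOWED:
            approved.append(name)
            continue
        close = difflib.get_close_matches(name, ALLOWED, n=1, cutoff=0.75)
        hint = f" (did you mean {close[0]!r}?)" if close else ""
        flagged.append(f"{name}{hint}")
    return approved, flagged

approved, flagged = vet_packages(["requests", "huggingface-cli"])
print("approved:", approved)
print("flagged:", flagged)
```

In practice a check like this belongs in CI: fail the build when anything is flagged, and resolve a flag by confirming the name on the official registry before adding it to the allowlist.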
How is AI making phishing infrastructure indistinguishable from the real thing?
The volume and sophistication shift
The increase in phishing cannot be overstated. By late 2025 a malicious email was arriving every 19 seconds, with more and more attackers using LLMs to piece together the constituent parts in the background. AI enables a novel, convincing phishing template to be generated in 5 minutes flat with just 5 prompts; IBM's X-Force compares this with the 16 hours it takes a human social engineer. The barrier to entry has effectively disappeared.
Not only is a phishing email easily generated, the underlying infrastructure is also undergoing a facelift: more than 90% of phishing sites now reportedly serve HTTPS with valid SSL certificates. The padlock that users have long treated as a safety signal is no longer a reliable indicator of legitimacy online. Let's Encrypt offers free SSL certificates with no identity verification whatsoever; useful for setting up legitimate domains for your home lab or weekend project, but a boon for threat actors.
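The distinction is visible in the certificate itself. As a rough illustration in Python, using the nested-tuple format that `ssl.getpeercert()` returns, a domain-validated certificate carries no organisation identity in its subject, only a hostname (the domain name below is made up for the example):

```python
# Rough illustration: classify a parsed TLS certificate by whether an
# organisation identity was validated at issuance. Uses the nested-tuple
# subject format that Python's ssl.getpeercert() returns. A production
# check would also inspect certificate policy OIDs to separate OV from
# EV; this simplification only shows the DV/non-DV split.
def cert_validation_level(cert):
    subject = {k: v for rdn in cert.get("subject", ()) for (k, v) in rdn}
    if "organizationName" in subject:
        return "OV/EV: an organisation identity was verified"
    return "DV: domain control only, no identity verification"

# A free DV certificate (the kind Let's Encrypt issues) typically has
# just a commonName in its subject -- the hostname, nothing more.
dv_cert = {"subject": ((("commonName", "secure-login-portal.example"),),)}
print(cert_validation_level(dv_cert))
```

The point for users and defenders: the padlock proves an encrypted connection to whoever controls that hostname, and nothing about who that is.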
The infrastructure looks real; the phishing URL you see is unique, and the files that malicious actors are pushing you to download are coming with never-before-seen hashes. This is AI-driven polymorphic variation at scale.
This sophistication doesn't emerge from nothing. SpamGPT was discovered on underground forums in late 2025: a malicious LLM stripped of safety guardrails, built specifically for spinning up and deploying phishing infrastructure. It handles phishing email generation, bulk SMTP management, custom header manipulation aimed at passing SPF/DKIM/DMARC authentication checks, and real-time campaign analytics mirroring legitimate email marketing dashboards.
The ease with which attackers can now generate email that passes authentication protocols at scale is a thorn in the side of organisations that have long relied on those protocols as the backbone of email trust verification.
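The underlying reason is worth spelling out: SPF, DKIM and DMARC verify that a message genuinely came from the domain it claims and matches that domain's own published policy; they say nothing about whether the domain itself is legitimate. An attacker who registers a lookalike domain and publishes valid records passes every check. A minimal sketch (both the parser and the lookalike domain below are illustrative, not from any real campaign):

```python
# Sketch: parse a DMARC TXT record into its tags. The point is what the
# record does NOT say -- a lookalike domain an attacker registered can
# publish a perfectly valid record and pass every authentication check,
# because DMARC only verifies mail against the sending domain's own
# policy, not the trustworthiness of the domain itself.
def parse_dmarc(record):
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# A hypothetical record an attacker might publish on their own domain:
attacker_policy = parse_dmarc("v=DMARC1; p=none; rua=mailto:ops@examp1e-bank.com")
print(attacker_policy["p"])  # the domain's own policy; says nothing about legitimacy
```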
Phishing-as-a-Service: The Darcula case study
Nothing paints the picture in higher resolution than Darcula, a sophisticated, well-documented PaaS platform with its origins in China, operating since at least March 2024. The platform offers pre-built fake websites: convincing facsimiles of brands like USPS, banks, airlines and telecoms. Subscribers simply pick the site they want, and the platform spins up the infrastructure for them and provides a link for dispersal at scale via iMessage, RCS or SMS.
This was Darcula's operating model before April 2025. Since then, the platform has introduced generative AI into its offerings: subscribers can now clone any website by providing a URL, generate phishing forms in any language, and deploy fully customised phishing sites within minutes, no coding required.
This change represents a generational leap in the democratisation of believable, well-crafted phishing. Non-technical criminals, previously restricted to around 200 phishing site templates, can now generate an essentially unbounded number of real-looking clones of any available website.
Caption: "Attackers are able to use AI tools to create increasingly convincing phishing websites."
In January 2024, it was reported that an employee at the multinational engineering company Arup received an email purporting to be from the CFO. The email invited the reader to a Zoom call where every attendee was a real-time AI-generated deepfake of familiar faces from the company. Convinced by the email and the faces before them, the Arup employee made 15 separate transfers to 5 bank accounts, totalling over $25 million.
This incident represents a growing attack vector businesses will need to start accounting for: in Q1 of 2025 alone, deepfake incidents outstripped the prior year's total. The "Deepfake Trends 2024" report from Sapio Research, based on a survey of businesses, put the average loss from one of these incidents at $450,000, a steep price.
Mitigations: what should organisations be doing?
For AI-generated code, risk mitigations include:
Treat AI-generated code as untrusted input; mandate a human security review before merge, regardless of source.
Implement automated SAST/DAST scanning in CI/CD pipelines, specifically calibrated for AI-generated code patterns.
Audit open-source dependencies for slopsquatting risk; verify every package name against official registries.
Establish a software supply chain governance policy that accounts for AI-generated contributions.
For AI-enabled phishing risks:
Move beyond perimeter-based email filtering. Deploy AI-powered detection that analyses behavioural patterns, not just signatures.
Enforce multi-factor verification for high-value transactions; as the Arup incident shows, a single channel of confirmation is not enough when millions are being transferred.
Ensure users understand that SSL certificates and professionally designed websites do not necessarily indicate a legitimate or trustworthy service.
Conduct regular phishing simulations using AI-generated content to benchmark organisational resilience.
Why it matters
These threats are convergent; the same AI capability that allows a developer to ship a service in a week also allows a criminal to clone your banking website and deliver a seemingly genuine email to your inbox pushing you to give up your paycheck. The threat and the tool are identical; the difference is intent.
The trust signals organisations and individuals have relied on for the last twenty years are now manufacturable at scale. Merged pull requests, SSL certificates, professional-looking domains and familiar faces on a Teams call are no longer reliable.
Trust must now be continuously verified.
If youโre looking for a team you can trust to help manage your cyber security posture, get in touch today.