Expert Intel

AI and the Erosion of Digital Trust: How AI Is Reshaping Code and Phishing Risk

Published: April 1, 2026
Updated: April 2, 2026


Expert: Joss Moor

Role: SOC Analyst

Specialises in: Incident Response

What you will learn:
In this intel we set out to understand the changing landscape of trust in a digital world where AI-generated code is permeating the software supply chain at scale, and AI-enabled phishing is making adversarial infrastructure harder than ever to distinguish from the real thing.
“Trust must now be continuously verified.”

The trust infrastructure is breaking

In 2025, the Thales/Imperva ‘Bad Bot Report’ noted that 51% of all internet traffic is now machine-generated. The digital environment your organisation trusts is increasingly artificial. Two of the most consequential fronts in this emergent landscape are phishing and software.

Since the launch of ChatGPT, phishing messages have increased 4,151%, a figure from SlashNext’s 2024 State of Phishing Report. That trend shows little sign of slowing down, with Cofense reporting malicious emails arriving at a rate of one every 19 seconds as of early 2026, more than doubling the pace seen in 2024.

AI coding assistants are now used by 97% of developers according to GitHub’s 2024 survey, with Copilot alone generating 46% of all code for its 20 million active users. But this rapid adoption comes at a steep cost: Veracode’s 2025 report found that AI-generated code introduced security vulnerabilities in 45% of development tasks tested across more than 100 LLMs.

Caption: โ€œThe scale of AI adoption and AI-enabled threat activity is reshaping the foundations of digital trust.โ€

How is AI-generated code undermining security?

The scale of the problem

The vulnerability problem is not confined to a single tool, model or approach. Georgetown University’s CSET found that nearly half of LLM-generated code snippets had bugs that could lead to exploitation, and Veracode research confirmed this holds across more than 100 different LLMs, with both larger and smaller models vulnerable to the same pitfalls. The issue, as Veracode’s CTO put it, is systemic.

In a study that closely monitored Fortune 50 companies using its security analysis platform, Apiiro observed that AI-generated code was introducing more than 10,000 new security findings per month by mid-2025, with privilege-escalation paths up 322%.

The cognitive disconnect

The issue isn’t just the mechanics of producing AI-generated code – we all know a tool is only as good as the one wielding it. Many of us can use AI to generate safe and secure code. The issue comes when we trust a tool to be correct without good reason, and robust safeguards are not in place.

The 2022 Stanford study “Do Users Write More Insecure Code with AI Assistants?” by Perry et al. found that developers using AI assistants produced more vulnerabilities and yet believed their code was more secure than developers who wrote their code by hand. A dangerous false-confidence effect.

Caption: “AI-assisted development tools such as GitHub Copilot can be highly beneficial, but overconfidence in these tools increases security risk.”
Source:
The GitHub Blog | GitHub Copilot X: The AI-powered developer experience

Real-world incidents

The startup EnrichLead learned this the hard way in 2025, after its founder boasted online that the platform had “zero hand-written code” and had been built entirely with Cursor AI. Within days of launch, the platform was exposed: API keys dangling in the frontend, no authentication, and paywall bypasses possible.

Moltbook, a social networking site for AI agents generated entirely via vibe coding, had its misconfigured database exposed by security firm Wiz shortly after its January 26 launch; 1.5 million authentication tokens and 35,000 email addresses leaked as a result.

We can chalk these types of purely AI-coded platform mishaps up to individuals getting ahead of themselves, but it’s important to revisit the earlier Stanford study, in which it was noted that developers trusted their AI-generated code to a greater degree than developers who had produced their code the old-fashioned way.

Exposed API keys aside, AI-generated code is managing to cause issues in more insidious ways, too. A newly coined term, “slopsquatting”, has popped up lately as a software supply chain attack vector. In a slopsquatting attack, an attacker publishes malicious packages under names that AI coding assistants are likely to hallucinate, hoping developers will install them.

It’s important to note that slopsquatting is an emerging rather than established attack vector. Lasso Security researcher Bar Lanyado proved this path can be abused by registering the hallucinated Python package name ‘huggingface-cli’ after observing models suggest it; that harmless test package reportedly received more than 30,000 authentic downloads in three months, after the hallucinated install command appeared in public materials such as an Alibaba README.
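
To make the defensive side concrete, here is a minimal sketch that vets the names in a requirements file against the PyPI registry before anything is installed. The file path, name parsing and 90-day threshold are illustrative assumptions, and an existence check alone cannot catch a hallucinated name an attacker has already registered; the aim is simply to force a human look at unknown or very new packages.

```python
# Minimal sketch: vet dependency names against the PyPI registry before
# installing. Flags names PyPI doesn't know (likely hallucinations) and
# names whose first upload is very recent (possible slopsquats).
# The file path, parsing and 90-day threshold are illustrative
# assumptions; requires Python 3.11+ for fromisoformat with "Z".
import json
import re
import sys
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone

def first_upload(name: str):
    """Return the earliest upload time for a package, or None if absent."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError:
        return None  # unknown to PyPI: possible hallucinated name
    times = [
        datetime.fromisoformat(f["upload_time_iso_8601"])
        for files in meta.get("releases", {}).values()
        for f in files
    ]
    return min(times) if times else None

def vet(path: str = "requirements.txt") -> int:
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)
    suspicious = 0
    for line in open(path):
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Keep only the distribution name (drop versions and extras).
        name = re.split(r"[\s\[<>=!~;]", line, maxsplit=1)[0]
        uploaded = first_upload(name)
        if uploaded is None:
            print(f"WARNING: '{name}' not found on PyPI")
            suspicious += 1
        elif uploaded > cutoff:
            print(f"WARNING: '{name}' first published {uploaded:%Y-%m-%d}")
            suspicious += 1
    return 1 if suspicious else 0

if __name__ == "__main__":
    sys.exit(vet(*sys.argv[1:]))
```

In practice a check like this would run as a CI step or pre-commit hook, alongside (not instead of) the dependency-audit controls discussed in the mitigations below.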

How is AI making phishing infrastructure indistinguishable from the real thing?

The volume and sophistication shift

The increase in phishing volume cannot be overstated. A malicious email now arrives every 19 seconds, with more and more attackers using LLMs to piece together the constituent parts in the background. AI enables a novel, convincing phishing template to be generated in five minutes flat with just five prompts; IBM’s X-Force pits this against the 16 hours it takes a human social engineer. The barrier to entry has effectively disappeared.

Not only is a phishing email easily generated, but the underlying infrastructure is also undergoing a facelift, with more than 90% of phishing sites now reportedly supporting HTTPS and valid SSL certificates. The padlock that users have long treated as a safety signal is no longer a reliable indicator of legitimacy online. Let’s Encrypt offers free SSL certificates with no identity verification whatsoever; useful for setting up legitimate domains for services in your home lab or your weekend project, but a boon for threat actors.
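
To see how little a “valid” certificate actually asserts, the sketch below connects to a site and prints its certificate fields. For a typical domain-validated certificate, the subject contains only a hostname: no organisation, no verified legal identity. The hostname used here is a placeholder.

```python
# Minimal sketch: inspect what a "valid" TLS certificate asserts.
# A domain-validated certificate proves control of the hostname at
# issuance time and says nothing about who operates the site.
# "example.com" is a placeholder hostname.
import socket
import ssl

def describe_certificate(hostname: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # Subject and issuer are sequences of relative distinguished names.
    subject = dict(rdn[0] for rdn in cert["subject"])
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    print(f"subject:     {subject}")  # often just {'commonName': hostname}
    print(f"issuer:      {issuer.get('organizationName')}")
    print(f"valid until: {cert['notAfter']}")

describe_certificate("example.com")
```

The output for a phishing domain with a free certificate looks just as “valid” as the output for your bank, which is precisely the problem.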

The infrastructure looks real; the phishing URL you see is unique, and the files that malicious actors push you to download arrive with never-before-seen hashes. This is AI-driven polymorphic variation at scale.
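
A small illustration of why this defeats hash-based blocklists: changing a single byte of a payload yields a completely unrelated digest, so every per-victim variant presents as a never-before-seen file.

```python
# Minimal sketch: why hash blocklists fail against polymorphic payloads.
# Two payloads differing by one byte produce unrelated SHA-256 digests,
# so a per-victim variant never matches a known-bad hash.
import hashlib

payload_a = b"malicious payload, variant issued to victim A"
payload_b = b"malicious payload, variant issued to victim B"

print(hashlib.sha256(payload_a).hexdigest())
print(hashlib.sha256(payload_b).hexdigest())
```

Detection therefore has to key on behaviour and context rather than static signatures.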

This sophistication doesn’t emerge from nothing. SpamGPT was discovered on underground forums in late 2025: a malicious LLM stripped of safety guardrails, used specifically for spinning up and deploying phishing infrastructure. The model handles crafting phishing emails, bulk SMTP management, custom header manipulation to forge SPF/DKIM/DMARC authentication results, and real-time campaign analytics mirroring legitimate email marketing dashboards.

The ease with which attackers can now generate mail that passes SPF, DKIM and DMARC checks at scale is a thorn in the side of organisations that have long relied on these protocols as the backbone of email trust verification.
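
For defenders who want to see what these signals actually attest to, the hedged sketch below fetches a domain’s published SPF and DMARC records via DNS (it assumes the third-party dnspython package, and example.com is a placeholder). The caveat is the point: an attacker’s lookalike domain can publish perfectly valid records, so a pass only ties a message to its envelope domain, not to the brand it imitates.

```python
# Minimal sketch: look up a domain's published SPF and DMARC records.
# Assumes the third-party 'dnspython' package (pip install dnspython).
# A lookalike domain registered by an attacker can publish perfectly
# valid records, so "SPF/DMARC pass" proves origin, not legitimacy.
# "example.com" is a placeholder domain.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def email_auth_records(domain: str) -> dict[str, list[str]]:
    return {
        "spf": [r for r in txt_records(domain) if r.startswith("v=spf1")],
        "dmarc": [r for r in txt_records(f"_dmarc.{domain}")
                  if r.startswith("v=DMARC1")],
    }

print(email_auth_records("example.com"))
```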

Phishing-as-a-Service: The Darcula case study

Nothing paints the picture in higher resolution than Darcula, a sophisticated, well-documented PhaaS platform with its origins in China, operating since at least March 2024. The platform offers pre-built fake websites: convincing facsimiles of the likes of USPS, banks, airlines and telecoms. Subscribers simply pick the site they want, and the platform spins up the infrastructure for them and provides a link for dispersal via iMessage, RCS or SMS at scale.

This was the operating model for Darcula pre-April 2025 – since then, the platform has introduced generative AI into its offerings. Subscribers can now clone any website by providing a URL, generate phishing forms in any language, and deploy fully customised phishing sites within minutes, no coding required.

This change represents a generational leap in the democratisation of believable, well-crafted phishing. Non-technical criminals, previously restricted to around 200 phishing site templates, can now generate an essentially unbounded number of real-looking clones of any available website.

Caption: “Attackers are able to use AI tools to create increasingly convincing phishing websites.”

The scale of the operation is documented by the British internet security company Netcraft, which has taken down more than 25,000 fake Darcula sites and flagged more than 90,000 phishing domains. A joint investigation by NRK, Bayerischer Rundfunk, Le Monde and Norwegian security firm Mnemonic revealed that 884,000 sets of card details were stolen via over 13 million malicious links over a seven-month span between 2023 and 2024. Critically, these figures entirely predate Darcula’s April 2025 generative AI integration.

Real-world incident: the Arup deepfake heist

In January 2024, it was reported that an employee of the multinational engineering company Arup received an email purporting to be from the CFO. The email invited them to a Zoom call on which every attendee was a real-time AI-generated deepfake of a familiar face in the company. Convinced by the email and the faces before them, the employee made 15 separate transfers to 5 bank accounts, totalling over $25 million.

This incident represents a growing attack vector that businesses will need to start accounting for: in Q1 of 2025 alone, deepfake incidents outstripped the prior year’s total. The “Deepfake Trends 2024” report from Sapio Research, based on an online survey of businesses, put the average loss from one of these incidents at $450,000, a steep price.

Mitigations: what should organisations be doing?

For AI-generated code, risk mitigations include:

  1. Treat AI-generated code as untrusted input. Mandate a human security review before merge, regardless of source.
  2. Implement automated SAST/DAST scanning in CI/CD pipelines, calibrated for AI-generated code patterns (a minimal example follows this list).
  3. Audit open-source dependencies for slopsquatting risk: verify every package name against official registries.
  4. Establish a software supply chain governance policy that accounts for AI-generated contributions.
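
As a concrete starting point for the second item, here is a hedged sketch of a CI gate built on Bandit, an open-source SAST tool for Python. The src path and the high-severity blocking threshold are illustrative choices, and a real pipeline would pair this with dependency and secret scanning.

```python
# Minimal sketch of a CI gate: run the open-source SAST tool Bandit
# over the repository and block the merge on high-severity findings.
# Assumes a Python codebase with Bandit installed (pip install bandit);
# the "src" path and severity threshold are illustrative choices.
import json
import subprocess
import sys

def run_gate(path: str = "src") -> int:
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout or "{}")
    high = [r for r in report.get("results", [])
            if r.get("issue_severity") == "HIGH"]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']}: "
              f"{issue['issue_text']}")
    return 1 if high else 0  # nonzero exit fails the CI job

if __name__ == "__main__":
    sys.exit(run_gate())
```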

For AI-enabled phishing risks:

  1. Move beyond perimeter-based email filtering. Deploy AI-powered detection that analyses behavioural patterns, not just signatures.
  2. Enforce multi-factor verification for high-value transactions; as the Arup incident shows, a single channel of confirmation is not enough to authorise the transfer of millions.
  3. Ensure users understand that SSL certificates and professionally designed websites do not necessarily indicate a legitimate or trustworthy service.
  4. Conduct regular phishing simulations using AI-generated content to benchmark organisational resilience.

Why it matters

These threats are convergent; the same AI capability that allows a developer to ship a service in a week also allows a criminal to clone your banking website and deliver a seemingly genuine email to your inbox pushing you to give up your paycheque. The threat and the tool are identical; the difference is intent.

The trust signals organisations and individuals have relied on for the last twenty years are now manufacturable at scale. Merged pull requests, SSL certificates, professional-looking domains and familiar faces on a Teams call are no longer reliable.

Trust must now be continuously verified.


If you’re looking for a team you can trust to help manage your cyber security posture, get in touch today.
