The Dangers of Chatbots

Published: April 18, 2023
Updated: May 29, 2024
In a nutshell:
Following the news that Italy is the first European country to ban chatbots over privacy concerns, a global discussion is taking place over how society can prioritise data protection and combat malicious groups that exploit vulnerabilities in chatbots.
'With developments in Artificial Intelligence moving so fast, it is critical that users and organisations stay up to date on these new threats we are seeing emerge.'

It seems that chatbots are now embedded into every digital solution at our disposal, but despite the simplicity and ease this tool provides, cyber security concerns have been raised over the AI's ability to spread misinformation, aid hackers in developing malware, and even leak sensitive data.

Following the news that Italy is the first European country to ban chatbots over privacy concerns, a global discussion is taking place over how society can prioritise data protection and combat malicious groups that exploit vulnerabilities in chatbots.

The EU is currently drafting the world's first artificial intelligence legislation, aiming to regulate the development and practice of chatbots and, ultimately, protect the privacy of users. Italy now joins China, Iran, North Korea and Russia in banning this new wave of AI.

What are the potential dangers of chatbots?

The widespread popularity of AI chatbots has presented a key opportunity for threat actors to extract potentially sensitive information. Aided by the conversational tone of most chatbots, users feel more comfortable sharing private data. Google's AI chatbot, Bard, has subsequently added an 18+ age restriction to combat these privacy concerns.

Historically, chatbots haven't had the best reputation – in 2020, Ticketmaster was fined £1.25 million by the UK's data watchdog for the company's lack of security measures on its chatbot. Hackers breached Ticketmaster's chatbot and leaked names, payment card numbers, expiry dates and CVV numbers, potentially affecting 9.4 million customers.

The one you've probably heard of most recently is the ChatGPT breach, in which users' personal data was stolen by malicious groups. Italy's watchdog, in turn, has raised concerns over how OpenAI processes user data and the purpose of its storage.

OpenAI has affirmed that the purpose of ChatGPT is not to learn more about its users, but to learn about the world. A blog post from the company goes on to say that the stored data is not used for "selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people".

Being helpful to us humans, however, is sometimes part of the risk. Let's take language as an example: advancements in artificial intelligence may have enabled faster and more accurate language translations, but in turn, users will find it harder to spot well-known red flags such as inaccurate spelling and poor grammar.

Accurate translation subsequently lends itself well to scammers reusing phishing emails and sending them en masse in many languages to bypass the spam blockers put in place by organisations – just one of many potential risks.

The Chatbot Battlefield: Growing Cyber Security Threats

For both the public and private security sectors, there's a responsibility to ensure this ever-changing AI battlefield is kept under control – evolving risks include:

Malware and ransomware: Hackers have been known to infiltrate chatbots through backdoor deployments to spread malware or ransomware onto unsuspecting victims. Backdoor deployment refers to an attack type whereby hackers bypass security measures through 'backdoors'.

Backdoors are often built by developers as a convenient way to access systems and fix software problems, but backdoors can also be created and exploited by malicious actors using malware. This in turn allows a malicious actor to gain access to data and critical systems, and to spread other malicious assets.
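To make that concrete, here's a deliberately simplified sketch in Python (using Flask) of what a forgotten backdoor can look like in a chatbot service. Every route, key and log entry here is hypothetical, invented purely for illustration:

```python
# Illustrative only: a toy chatbot web service with a forgotten "debug"
# route. All names are hypothetical, not any real product's API.
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEYS = {"example-key"}  # stand-in for proper credential storage
CHAT_LOGS = [{"user": "alice", "message": "my card number is 4111..."}]

@app.route("/chat", methods=["POST"])
def chat():
    # The intended path: every request must present a valid API key.
    if request.headers.get("X-Api-Key") not in API_KEYS:
        return jsonify(error="unauthorised"), 401
    return jsonify(reply="Hello! How can I help?")

@app.route("/debug/logs")
def debug_logs():
    # The backdoor: a maintenance route left in from development that
    # skips the API-key check. Anyone who finds the URL can read every
    # stored conversation.
    return jsonify(CHAT_LOGS)

if __name__ == "__main__":
    app.run()
```

A route like /debug/logs is exactly the kind of development convenience that ships by accident, and exactly what attackers scan for.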

Data theft through poor encryption: If chatbots are not properly encrypting users' data, there is a risk of bad actors accessing this private data and using it to their advantage.

The UK's data watchdog recently issued a warning to tech companies developing chatbots, stating its concerns over the masses of unfiltered personal data being stored to 'train' the artificial intelligence.
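On the flip side, getting encryption at rest right doesn't have to be onerous. Here's a minimal sketch using the widely used Python cryptography package (Fernet, authenticated symmetric encryption); the file name and transcript are illustrative assumptions, not any vendor's design:

```python
# Minimal sketch: encrypt chat transcripts before they touch disk.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# In production the key would come from a secrets manager, never from code.
key = Fernet.generate_key()
fernet = Fernet(key)

transcript = b"user: my order number is 12345\nbot: thanks, checking now"

# Encrypt before writing ...
with open("chatlog.enc", "wb") as f:
    f.write(fernet.encrypt(transcript))

# ... and decrypt only when an authorised process needs it back.
with open("chatlog.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == transcript
```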

Chatbot impersonation: Hackers have been known to replicate well-known chatbot landing pages, which, as we well know, can lead to a range of malicious activities.

More specifically, these pages commonly request personal information, like credit card details and credentials. Recently, a fake ChatGPT landing page asked users to 'sign up' for its AI services – in reality, victims were downloading a piece of trojan malware named FOBO.
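A simple habit that defeats most impersonation is to only follow chatbot links whose host you have verified yourself. Here's a small sketch of an allow-list check; the trusted domains below are examples you would maintain for your own organisation:

```python
# Allow-list check: refuse chatbot links whose host isn't one you trust.
# The domains below are examples only; maintain your own verified list.
from urllib.parse import urlparse

TRUSTED_CHATBOT_HOSTS = {"chat.openai.com", "gemini.google.com"}

def is_trusted_chatbot_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_CHATBOT_HOSTS

print(is_trusted_chatbot_link("https://chat.openai.com/"))        # True
print(is_trusted_chatbot_link("https://chat-openai.signup.xyz"))  # False
```

Note the second example: lookalike hosts pass a casual glance but fail an exact match.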

How Do I Use Chatbots Securely?

'With developments in Artificial Intelligence moving so fast, it is critical that users and organisations stay up to date on these new threats we are seeing emerge. Due to the appeal and accessibility of chatbots, malicious actors have been able to exploit a variety of vulnerabilities and elicit sensitive and potentially dangerous information.'

– Andrea Csuri, SOC Analyst

Small changes make a big difference:

  1. Educate your teams on the dangers of chatbots. Adopting a zero-trust mindset goes a long way, and the most effective way to defend against cyber-attacks is to provide regular security user awareness training.
  2. Easy-to-implement advice from infosec experts is to never share any personal or business details with a chatbot – you never know who might be working behind the scenes, or what they might do with your sensitive information (see the redaction sketch after this list).
  3. It's crucial that users only use chatbots that they have navigated to independently – never through links sent via unknown emails or messages.
  4. Consider masking your IP address through a VPN such as NordVPN or ExpressVPN. This will hide your geographical location, obstruct web tracking and leave less of a digital footprint, making it harder for malicious actors to glean sensitive data from a user's online activity.
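To go with point 2, here's a minimal sketch of a pre-send redaction filter that strips obvious personal identifiers from a message before it reaches any chatbot. The patterns are deliberately simple illustrations, not a complete data-loss-prevention tool:

```python
# Pre-send redaction: strip obvious personal identifiers before a message
# is sent to any chatbot. Patterns are illustrative and far from complete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # card-like digit runs
    "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{9,10}\b"),  # UK-style numbers
}

def redact(message: str) -> str:
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label} REDACTED]", message)
    return message

print(redact("Email me at jo@example.com, card 4111 1111 1111 1111"))
# -> Email me at [EMAIL REDACTED], card [CARD REDACTED]
```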

Want to know more about how your organisation can navigate online safely?

For organisations that want to know more about their existing response to threats, talk to us today about our breach & attack simulation services. From phishing simulation tests to breached credential checks, our team is here to help your team adopt a modern-age approach to modern-age threats.

Alternatively, for organisations that want to take a proactive approach and build awareness around these growing risks, you can opt for User Awareness and Security Training. Offering two separate courses, we make sure both your workforce and senior stakeholders are prepared when dealing with adversaries.