It seems that chatbots are now embedded into every digital solution at our disposal, but despite the simplicity and ease these tools provide, cyber security concerns have been raised over AI's ability to spread misinformation, aid hackers in developing malware and even leak sensitive data.
Following the news that Italy is the first European country to ban chatbots over privacy concerns, a global discussion is taking place over how society can prioritise data protection and combat malicious groups that exploit vulnerabilities in chatbots.
The EU is currently drafting the world's first artificial intelligence legislation, aiming to regulate the development and practice of chatbots and, ultimately, protect the privacy of users. Italy now joins China, Iran, North Korea and Russia in banning this new wave of AI.
What are the potential dangers of chatbots?
The widespread popularity of AI chatbots has presented a key opportunity for threat actors to extract potentially sensitive information. Aided by the conversational tone of most chatbots, users feel more comfortable sharing private data. Google's AI chatbot, Bard, has subsequently added an 18+ age restriction to combat these privacy concerns.
Historically, chatbots haven't had the best rep – in 2020, Ticketmaster was fined £1.25 million by the UK's data watchdog, the ICO, for the company's lack of security measures on its chatbot. Hackers breached Ticketmaster's chatbot and leaked names, payment card numbers, expiry dates and CVV numbers, potentially affecting 9.4 million customers.
The one you've probably heard of most recently is the ChatGPT breach, in which users' personal data was stolen by malicious groups. Italy's data protection watchdog, the Garante, has in turn raised concerns over how OpenAI is processing user data and the purpose of its storage.
OpenAI has affirmed that the purpose of ChatGPT is not to learn more about the users, but to learn about the world. A blog post from the tech company goes on to say that the data stored is not used for “selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people”.
Being helpful to us humans, however, is sometimes part of the risk – take language as an example. Advancements in artificial intelligence have enabled faster and more accurate language translation, but in turn, users will find it harder to spot well-known red flags such as inaccurate spelling and poor grammar.
Accurate translation subsequently lends itself well to scammers reusing phishing emails and sending them en masse in many languages, bypassing the spam blockers organisations put in place – just one of many potential risks.
The Chatbot Battlefield: Growing Cyber Security Threats
For both the public and private security sectors, there's a responsibility to keep this ever-changing AI battlefield under control. Evolving risks include:
Malware and ransomware: Hackers have been known to infiltrate chatbots through backdoor deployments to spread malware or ransomware onto unsuspecting victims. Backdoor deployment refers to an attack whereby hackers bypass security measures through 'backdoors'.
Backdoors are sometimes built intentionally by developers so they can easily access systems and fix software problems, but they can also be created and exploited by malicious actors using malware. This in turn allows a malicious actor to access data and critical systems, and to spread further malicious assets.
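To make the idea concrete, here is a minimal, purely illustrative Python sketch of what a hardcoded backdoor might look like inside an authentication routine. The token, user store and logic are all hypothetical, not drawn from any real incident:

```python
# Hypothetical illustration: a login check containing a hidden,
# hardcoded credential - the kind of 'backdoor' a developer might
# leave in for debugging, or an attacker might plant via malware.
import hmac

# Toy user store; a real system would verify salted password hashes.
USER_PASSWORDS = {"alice": "correct-horse-battery-staple"}

BACKDOOR_TOKEN = "debug-override"  # the backdoor: bypasses all checks

def authenticate(username: str, password: str) -> bool:
    # Anyone who knows the hidden token gets in, whatever the username.
    if hmac.compare_digest(password, BACKDOOR_TOKEN):
        return True
    expected = USER_PASSWORDS.get(username)
    return expected is not None and hmac.compare_digest(password, expected)
```

Patterns like this are exactly what code review and regular security audits are designed to flag before they reach production.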
Data theft through poor encryption: If a chatbot does not properly encrypt a user's data, there is a risk of bad actors accessing that private data and using it to their advantage.
The UK's data watchdog recently issued a warning to tech companies developing chatbots, voicing concerns over the masses of unfiltered personal data being stored to 'train' the artificial intelligence.
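For illustration, here is a minimal sketch of the kind of at-rest encryption a chatbot back end should apply to transcripts, using the Fernet recipe from Python's cryptography package. Key handling is deliberately simplified; a real deployment would keep the key in a dedicated secrets manager, never alongside the data it protects:

```python
# Minimal sketch: encrypt a chat transcript before storing it,
# using symmetric (AES-based) Fernet encryption.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production: fetch from a secrets manager
fernet = Fernet(key)

transcript = "user: my order number is 12345"
ciphertext = fernet.encrypt(transcript.encode("utf-8"))

# Only ciphertext touches disk; plaintext is recovered on demand.
assert fernet.decrypt(ciphertext).decode("utf-8") == transcript
```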
Chatbot impersonation: Hackers have been known to replicate well-known chatbot landing pages, which, as we all know, can lead to a range of malicious activities.
More specifically, these pages commonly request personal information, like credit card details and credentials. Recently, a fake ChatGPT landing page asked users to 'sign up' for its AI services – in reality, victims were downloading a piece of trojan malware named FOBO.
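One crude but effective defence is strict allowlisting of chatbot domains before a link is ever followed. The sketch below is hypothetical, and the trusted domains are examples rather than a complete list:

```python
# Illustrative sketch: exact-match allowlisting of chatbot domains,
# as a mail filter or browser extension might apply it.
from urllib.parse import urlparse

TRUSTED_CHATBOT_DOMAINS = {"chat.openai.com", "bard.google.com"}

def looks_legitimate(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Exact match only, so 'chat.openai.com.evil.example' cannot pass.
    return host.lower() in TRUSTED_CHATBOT_DOMAINS

print(looks_legitimate("https://chat.openai.com/"))             # True
print(looks_legitimate("http://chat-gpt-free.example/sign-up")) # False
```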
How Do I Use Chatbots Securely?
‘With developments in Artificial Intelligence moving so fast, it is critical that users and organisations stay up to date on these new threats we are seeing emerge. Due to the appeal and accessibility of Chatbots, malicious actors have been able to exploit a variety of vulnerabilities and elicit sensitive and potentially dangerous information.’
– Andrea Csuri, SOC Analyst
Small changes make a big difference:
- Educate your teams on the dangers of chatbots. Adopting a zero-trust mindset goes a long way, and the most effective way to defend against cyber-attacks is to provide regular security user awareness training.
- Easy-to-implement advice from infosec experts: never share any personal or business details with a chatbot – you never know who might be working behind the scenes, or what they might do with your sensitive information (see the redaction sketch after this list).
- It's crucial that users only use chatbots that they have navigated to independently – certainly not through links sent via unknown emails or messages.
- Consider masking your IP address through a VPN such as NordVPN or ExpressVPN. This will hide your geographical location, obstruct web tracking and shrink your digital footprint, making it harder for malicious actors to glean sensitive data from your online activity.
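Building on the advice above, the safest prompt is one with no personal data in it at all. Here is a small, hypothetical Python sketch of client-side redaction applied before a prompt is sent to any third-party chatbot; the regexes catch only simple patterns and are illustrative rather than exhaustive:

```python
# Hypothetical sketch: strip obvious personal data from a prompt
# before it reaches a third-party chatbot API. These patterns are
# deliberately simple and will not catch all PII.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jo@example.com about card 4111 1111 1111 1111"))
# -> "Email [EMAIL REDACTED] about card [CARD REDACTED]"
```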
Want to know more about how your organisation can safely navigate online?
For organisations that want to know more about their existing response to threats, talk to us today about our breach & attack simulation services. From phishing simulation tests to breached credential checks, our team is here to help you adopt a modern-age approach to modern-age threats.
Alternatively, for organisations that want to take a proactive approach and build awareness around these growing risks, you can opt for User Awareness and Security Training. Offering two separate courses, we make sure both your workforce and senior stakeholders are prepared when dealing with adversaries.