An analysis of the most pressing concerns based on insights from 1,000 UK business leaders.
Author: Johnty Mongan
In this article, we explore:
- How AI is supercharging common cyber-attacks.
- AI-generated content and the tell-tale signs of a deepfake.
- AI as a tool to enhance cybersecurity.
- Strategies to mitigate AI-powered threats.
AI as the hacker’s sidekick
Hackers are always keen to add more sophisticated weapons to their armoury, and AI tools are sharpening their techniques. Here are the key types of attack where AI is making life easier for cybercriminals.
Ransomware attacks: Cybercriminals can use AI to automate the identification of vulnerable systems, select potential targets, and optimise the encryption process — significantly increasing the scale and efficiency of their attacks. The use of AI is also making cloud infrastructure more vulnerable to threats like data exfiltration. Furthermore, AI algorithms can analyse victims’ behaviour and tailor ransom demands accordingly.
Social engineering: AI algorithms can analyse large amounts of data to make phishing emails, chatbots, and other message-based communication seem more authentic than ever before. Messages, responses, and tactics can be tailored to each target, increasing the chances of successful exploitation.
Credential stuffing: Credential stuffing is a cyber-attack in which credentials stolen from one data breach are tried against other services, exploiting the fact that many people reuse passwords. AI algorithms automate the process of testing these stolen username and password combinations across multiple platforms, allowing cybercriminals to quickly identify valid credentials and gain unauthorised access to user accounts.
Malware: AI algorithms can analyse security systems, identify vulnerabilities, and modify malware code to evade detection. This allows cybercriminals to launch sophisticated attacks that can bypass traditional antivirus software and intrusion detection systems, making malware harder to detect and defend against.
Distributed denial of service (DDoS) attacks: AI-powered botnets can orchestrate large-scale DDoS attacks. These botnets leverage AI algorithms to identify and exploit vulnerabilities in target systems, coordinating a massive influx of traffic to overwhelm servers and disrupt services. AI enables attackers to dynamically adjust attack patterns, making it harder for defenders to mitigate the impact. Businesses should prioritise investing in AI detection software and threat intelligence updates to stay ahead of emerging threats.
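The credential-stuffing pattern described above — one source rapidly trying stolen username and password pairs against many accounts — also has a recognisable signature defenders can look for. The sketch below is illustrative only (the log entries, function name, and threshold are hypothetical), showing how failed logins spanning many distinct accounts from a single IP can be flagged:

```python
from collections import defaultdict

# Hypothetical login log: (source_ip, username, login_succeeded)
LOGIN_EVENTS = [
    ("203.0.113.5", "alice", False),
    ("203.0.113.5", "bob", False),
    ("203.0.113.5", "carol", False),
    ("203.0.113.5", "dave", False),
    ("198.51.100.7", "alice", True),
]

def flag_credential_stuffing(events, threshold=3):
    """Flag source IPs whose failed logins span many distinct accounts --
    a classic credential-stuffing signature (one IP, many usernames)."""
    failed_accounts = defaultdict(set)
    for ip, user, succeeded in events:
        if not succeeded:
            failed_accounts[ip].add(user)
    return {ip for ip, users in failed_accounts.items() if len(users) >= threshold}

print(flag_credential_stuffing(LOGIN_EVENTS))  # {'203.0.113.5'}
```

Real-world detection tooling combines signals like this with rate limiting, geolocation checks, and breached-credential lists rather than relying on any single rule.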
AI-generated content: spotting the deepfakes
AI-generated fake content — whether text, audio, or video — is a growing problem for organisations, with the main concerns including misinformation, disinformation, and impersonation of executives.
Impersonation will often start by creating an audio deepfake of a respected individual within the company. The perpetrator, posing as this person, initiates contact through web conferencing or voicemail and proceeds to employ various social engineering tactics like business email compromise or dynamic voice manipulation. By creating a sense of urgency, they can coerce employees into divulging funds or sensitive information.
Similarly, deepfake videos are becoming more sophisticated and believable, taking this risk to a new level. In one example, a finance worker at a multinational firm believed a video conference to be legitimate because the CFO and everyone else in attendance looked and sounded like known colleagues. He was duped into making a $25 million fraudulent payment.¹
Organisations must urge employees to be alert when receiving urgent calls or messages and seek clarification for payment or data requests if in doubt.
How to Spot a Deepfake

| Audio | Video |
|---|---|
| Flat or unnatural intonation and pacing | Unnatural blinking or stiff eye movement |
| Odd pauses and inconsistent background noise | Lip movements out of sync with speech |
| Glitches on names or unusual words | Inconsistent lighting or blurring around the edges of the face |
AI-powered cybersecurity and tools
On a positive note, AI can be harnessed to improve cybersecurity by continuously learning from threats and updating cyber threat intelligence.
It can assist in tasks including writing patch code and providing insights on common vulnerabilities and exposures. AI-enhanced cybersecurity tools, such as intrusion detection and prevention software, network security, user behaviour analytics, and phishing protection, are all assets businesses should consider in the fight against cybercrime.
By automating the detection of and defence against AI-driven attacks, AI can also help alleviate skills gaps and talent shortages in cybersecurity.
Mitigating organisational threats from AI
One of the first things we say to our clients is to treat AI like a stranger to your business. As useful as the tool may be, it can bring unforeseen threats, so it is vital to ensure the organisation has the correct usage and policy frameworks in place.
It is important to ensure cloud networks are configured to the highest security standards and Multi-Factor Authentication (MFA) is used across the organisation, along with regular threat intelligence updates. AI training for end users and IT staff is a must, covering topics like fake content, phishing, and utilising AI securely and appropriately.
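For readers curious what MFA actually checks behind the scenes: the six-digit codes produced by most authenticator apps follow the TOTP standard (RFC 6238), deriving a short-lived code from a shared secret and the current time. A minimal Python sketch, using the standard library and the RFC's published test secret (illustrative only, not production code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    secret_b32 -- the shared secret, base32-encoded (as in QR-code setup keys)
    now        -- Unix timestamp; defaults to the current time
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and never travels with the password, a stolen credential alone is not enough — which is exactly why MFA blunts the credential-stuffing attacks described earlier.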
AI is a fast-evolving area of cybersecurity and can seem daunting. One way in which Gallagher is helping organisations strengthen their cybersecurity is through Gallagher’s Cyber Defence Centre, a suite of services including vulnerability scanning, threat intelligence webinars, access to a virtual CISO, and more. This is an ongoing package of support and is available to explore as a one-month free trial*.
We can also conduct an open-source intelligence search to double-check what is currently known about your organisation’s network and potential vulnerabilities. Please contact us for details.