The introduction of artificial intelligence (AI) to cyberspace has brought remarkable transformation but has also given cybercriminals a major advantage. Leveraging AI, cybercriminals can now craft personalised and convincing communications, facilitating the execution of sophisticated phishing and social engineering attacks.

In recent years, a surge in AI-driven phishing and social engineering attacks has threatened individuals and organisations. This trend is backed up by a Gallagher report, which revealed that 75% of security professionals saw an uptick in attacks, with 85% attributing the rise to bad actors using generative AI.

The mechanics of AI-driven attacks

Traditional phishing attempts often relied on a scattergun approach, whereby generic messaging was sent to a broad audience, hoping to deceive just a fraction of recipients. By comparison, AI-driven phishing attacks are more sophisticated and tailored to various situations. Such phishing attacks leverage AI and deep learning algorithms to analyse social media activity, online conversations and personal data to make the attack more targeted and convincing.

By gaining access to private information, attackers can create tailored messages that mimic the language, tone and style of a trusted contact or reputable organisation. In addition, AI enables integration with chatbots that simulate human interaction convincingly enough to deceive even the most vigilant staff members. According to the Phishing Threat Trends Report by Egress, nearly 71% of AI detectors fail to identify phishing emails generated by AI chatbot software1.

Cybercriminals can also produce deepfake audio and video content impersonating executives or other trusted figures to manipulate victims into revealing sensitive information or authorising fraudulent transactions. These sophisticated business email compromises can be convincing because they employ emotional manipulation strategies that significantly increase their chances of success.

AI-driven phishing attacks can take several sophisticated forms, including:

  • Vishing (voice phishing): Attackers use deep learning technology to create realistic voice clones to impersonate trusted individuals over the phone.
  • Spear phishing: Spear phishing is highly targeted. Attackers gather detailed personal and professional information to craft convincing emails that appear legitimate. AI significantly enhances the effectiveness of spear phishing by automating personalisation at scale.
  • Deepfake attacks: This form of phishing employs deepfake audio and video to impersonate real individuals. Cybercriminals typically use AI-generated deepfake videos or voice recordings to manipulate victims into complying with fraudulent requests.

Challenges in detecting and preventing AI-driven phishing attacks

  • Personalisation: AI-driven attacks bypass traditional security filters that rely on generic threat signatures by tailoring messages to individual characteristics and contexts.
  • False positives and negatives: AI systems may generate false positives, incorrectly flagging legitimate activity as a threat, or false negatives, allowing attacks to go undetected.
  • Volume and speed: AI enables attackers to automate the creation and distribution of phishing content on a massive scale, overwhelming conventional defence mechanisms.
  • Evasion techniques: Advanced AI can adapt to security protocols, learning to avoid detection by mimicking legitimate communication patterns.

Impact on organisations

According to a recent report by the UK National Cyber Security Centre (NCSC), AI-driven phishing attacks have increased by 30% over the past year2. Organisations across a wide range of sectors face severe repercussions from AI-driven phishing and social engineering attacks. These include:

  • Financial losses: Successful attacks can result in direct monetary losses through fraudulent transactions, or the costs associated with mitigating breaches. In the UK alone, businesses are estimated to lose over £1 billion annually due to phishing attacks2, with AI-driven methods contributing significantly to this figure.
  • Reputational damage: Breaches erode trust among customers, partners, and stakeholders, potentially leading to long-term revenue declines.
  • Operational disruptions: Attacks can compromise critical systems, leading to downtime and reduced productivity.

With AI-driven phishing attacks becoming more prevalent, organisations must now adopt proactive measures to safeguard their assets and data.

Defensive strategies and future outlook

To combat the growing threat of AI-driven cyber-attacks, organisations must implement robust cybersecurity strategies, such as:

  • Implementing zero-trust architecture: This approach ensures that all users, devices and applications are continuously verified before access is granted.
  • Enabling multi-factor authentication (MFA): Requiring multiple forms of verification reduces the chances of unauthorised access, even if credentials are compromised.
  • Enhancing threat intelligence: Using AI to analyse and predict potential threats can help organisations stay ahead of attackers.
  • User education and awareness: Regular training programs can equip employees and individuals with the awareness and knowledge to recognise and respond to phishing attempts, particularly as attacks become more sophisticated in nature.
  • Collaboration and information sharing: Partnering with industry peers, law enforcement, risk management experts and cybersecurity organisations enhances collective defence capabilities.
  • Cyber insurance: Investing in cyber insurance is essential to a comprehensive cybersecurity strategy. It provides financial protection and assistance in the event of a cyber incident, helping organisations recover more quickly and mitigate potential losses.

Fortify the future with Gallagher's Cyber Defence Centre

As the role of AI in business operations grows, so does the likelihood of threats arising from AI-driven phishing attacks. A specialist risk management and insurance advisor can help you navigate this constantly evolving landscape.

At Gallagher, we are committed to providing the insights and tools organisations need to guard against emerging AI threats. Gallagher's Cyber Defence Centre offers expertise in identifying and mitigating cyber threats and supporting cyber insurance solutions. By providing comprehensive cybersecurity services and risk management strategies, our team tailors solutions that help protect businesses from the financial impact of cyber incidents. We also equip businesses with the capabilities to anticipate and counteract sophisticated attacks through comprehensive risk assessments, advanced threat detection technologies and ongoing training programs.

The rise of AI-driven phishing and social engineering attacks means businesses must take proactive measures to defend themselves. By acknowledging the challenges in detection and prevention, staying informed and prioritising proactive measures, organisations can mitigate the risks associated with these attacks.

With Gallagher's support, organisations can build a resilient defence against current and emerging threats, ensuring businesses remain secure in the complex digital world.


Disclaimer

The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.