With a more volatile geopolitical risk landscape emerging at home and abroad, firms must prepare for multiple potential sources of disruption in 2025.

Authors: Laura Hawkes, Marlon Pinto


In 2025, old threats and emerging issues will converge to create a complex security environment.

The frontline for major conflicts is becoming increasingly difficult to define. Alongside battles fought with conventional methods, hybrid warfare blends irregular tactics with state-sponsored cyber-attacks.

While businesses may not be the direct target, such conflicts can spill over into everyday life. In addition to soldiers and tanks, security threats now manifest through hackers, disinformation campaigns and even sabotage.

During 2024, we saw Russian-backed cyber groups continue to launch attacks against Western critical infrastructure. Additionally, the Israeli-Palestinian crisis has catalysed politically motivated sabotage and cyber-attacks against businesses perceived as taking a partisan stance.

In his October 2024 annual briefing, MI5 Director General Ken McCallum warned that Russia’s GRU was “on a sustained mission to generate mayhem on British and European streets”.

State actors will continue their efforts to destabilise security over the coming year. The UK is ramping up its defence mechanisms and coordinated partnerships to counter these threats. Organisations, meanwhile, must enhance their resilience and align with expert guidance to navigate this multifaceted landscape effectively.

The evolving threat landscape means companies must learn to mitigate the risks associated with these rapid changes, while also managing those posed by traditional physical security threats such as sabotage, arson and civil unrest.

Rise in extremism amplified by AI

Political polarisation continues to challenge societal stability in the UK and beyond, exacerbated during 2024 by the influence of AI-fuelled disinformation campaigns on social media.

According to one survey, 85% of Britons say that British society is more divided these days, with particular concern over the rise of religious and right-wing extremism¹.

The growth in extremist activity has been a defining feature of the past 12 months, as AI has provided new and powerful tools for extremist individuals and groups, enabling them to expand their reach, disseminate propaganda, and carry out attacks with unprecedented speed and scale.
Laura Hawkes, Head of Intelligence, AnotherDay

Technological threats amplify risks

Emerging technologies are presenting new challenges for security services. The potential to manufacture weapons through 3D printing or utilise drones for surveillance and attacks highlights the adaptive nature of modern threats.

Advances in AI have amplified the capabilities of violent non-state actors, enabling extremists to use tools such as generative AI for multilingual propaganda, disinformation and attack planning, significantly expanding their reach. The rapid pace of technological innovation, which continues to outpace regulation, further complicates efforts to maintain safety and security.

Civil unrest and activism

During 2024, civil unrest became a more present concern for UK businesses, fuelled by economic hardship and sociopolitical movements. Events in August 2024 demonstrated how quickly false information spread on social media can spark a wave of protests and riots across the country.

For businesses caught in the crossfire, the direct and indirect impact of such disorder can be substantial. In the wake of riots across England and Northern Ireland in summer 2024, footfall on UK high streets declined by an average of 4.8% as shoppers stayed at home².

Looking ahead to 2025, triggers for civil unrest could include populist nationalism, social inequality, radicalism and disinformation.
Laura Hawkes, Head of Intelligence, AnotherDay

Strikes, protests and activism campaigns targeting specific industries remain a concern. While mass protests have largely been peaceful, there has been an uptick in aggressive incidents targeting certain institutions.

A key challenge is that civil unrest and other forms of political violence are dynamic, human perils, and events can be difficult to anticipate. As such, demand for intelligence is growing. Companies should rehearse how they would respond to incidents and maintain up-to-date crisis management plans to keep people and property safe.

While property-all-risks policies may include cover for strikes, riots and civil commotion, broader balance sheet protection is available from the standalone terrorism and political violence market.

Crime: Fraudsters up the ante

Fraud currently costs the UK more than £200 billion each year³. In 2024, we saw large corporations lose increasing sums to fraud, often attributed to the current economic climate or to inadequate due diligence during mergers and acquisitions.

While technology and digitisation have enabled firms to improve their fraud prevention controls, these can be circumvented by executives with privileged access to payment systems. Meanwhile, external threats are becoming harder to detect.

Among the trends to watch out for in 2025 are the growing sophistication of AI-leveraged social engineering attacks and the continued rise of cargo theft and supply chain fraud.

Deepfakes present a growing challenge

Research tells us that whilst two in five people say they have seen at least one deepfake in the last six months, only one in ten are confident in their ability to spot them⁴. The rise of convincing AI-generated audio and video is likely to pose an increasing challenge for organisations and senior executives moving into 2025.

AI is a hot topic; everyone's talking about it, but companies need to be aware of how AI can be used to identify and exploit vulnerabilities.
Marlon Pinto, Director of Investigations, AnotherDay

Fraudsters can use deepfake videos or audio recordings to impersonate executives, either to bypass biometric verifications or convince other staff members to make what they think are genuine payments authorised by more senior individuals.

While we've seen AI used in scams against the general public, such as voice imitation to trick individuals into making payments, we haven't yet seen this technology widely used in corporate crime. It's only a matter of time.

It's crucial for companies to understand that AI can be exploited against them. Most discussions focus on how AI can grow the business, improve profitability and streamline work, but as we look forward, we also need to consider how it can be used against us.

Author Information

Laura Hawkes, Head of Intelligence, AnotherDay

Marlon Pinto, Director of Investigations, AnotherDay


Sources

1. Shrimpton, Hannah, and Gideon Skinner. "85% Say Britain Is Divided as Concern About Extremism Rises." Ipsos, 22 Aug 2024.
2. Kollewe, Julia. "Riots Contribute to 4.8% Drop in Footfall on UK High Streets." The Guardian, 9 Aug 2024.
3. Lawson, Annie. "Fraud Costs the UK Economy Billions Each Year." Financial Accountant, 23 Oct 2024.
4. "A Deep Dive into Deepfakes That Demean, Defraud and Disinform." Ofcom, 23 Jul 2024.


Disclaimer

The sole purpose of this article is to provide guidance on the issues covered. This article is not intended to give legal advice, and, accordingly, it should not be relied upon. It should not be regarded as a comprehensive statement of the law and/or market practice in this area. We make no claims as to the completeness or accuracy of the information contained herein or in the links which were live at the date of publication. You should not act upon (or should refrain from acting upon) information in this publication without first seeking specific legal and/or specialist advice. Arthur J. Gallagher Insurance Brokers Limited accepts no liability for any inaccuracy, omission or mistake in this publication, nor will we be responsible for any loss which may be suffered as a result of any person relying on the information contained herein.