
Advances in artificial intelligence (AI) and deep-learning technologies are making synthetic media, known as deepfakes, more convincing and harder to distinguish from genuine content. Deepfakes take the form of images, text, audio or videos altered or generated to make it appear that people did or said something they never actually did or said. The same AI and deep-learning techniques can also manipulate real media to produce synthetic versions of it.
Deepfake technology has been available since at least 2017. Since then, the technical quality of deepfakes has improved rapidly, and the tools to access and create them have become easier to obtain. In 2023, popular generative AI platforms such as Midjourney 5.1 and OpenAI's DALL-E 2 emerged as widely available tools that threat actors can use to conduct deepfake campaigns.1
How cyber criminals exploit deepfake technology
As this technology has evolved, so too have the criminal tactics that exploit it. Threat actors are using it to create synthetic media for a variety of destructive purposes, creating a new and frightening reality in the 2023 cyber threat landscape.
Key deepfake technologies include:
- face replacement, also called face swap, copies a facial image and places it on another body
- face generation creates facial images that don't exist in reality
- speech synthesis uses AI to create realistic human speech
- generative adversarial networks (GANs) use deep-learning methods to learn patterns in data, such as the content of real videos, and then use those patterns to create new content.
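To make the GAN idea above concrete, here is a minimal toy sketch in Python using only NumPy. A one-line generator learns to mimic samples drawn from a target distribution by competing against a simple discriminator: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it. The distribution, parameter names and learning settings below are illustrative choices; real deepfake models use deep neural networks and vastly more data, but the adversarial training loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: samples from a normal distribution centred on 4.0.
    return rng.normal(4.0, 1.0, n)

# Generator: G(z) = w_g * z + b_g, mapping random noise to a sample.
w_g, b_g = 0.1, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), estimating P(x is real).
w_d, b_d = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    g_real = d_real - 1.0   # cross-entropy gradient w.r.t. logit, label "real"
    g_fake = d_fake         # cross-entropy gradient w.r.t. logit, label "fake"
    w_d -= lr * np.mean(g_real * real + g_fake * fake)
    b_d -= lr * np.mean(g_real + g_fake)

    # Generator update: push D(fake) towards 1 so fakes pass as real.
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    g_out = (d_fake - 1.0) * w_d   # backpropagate through D into G's output
    w_g -= lr * np.mean(g_out * z)
    b_g -= lr * np.mean(g_out)

# After training, generated samples should cluster near the real mean of 4.0.
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean: {samples.mean():.2f} (real mean: 4.0)")
```

The key point the sketch illustrates is that neither network ever sees an explicit description of the target distribution; the generator improves only through the discriminator's feedback, which is what lets full-scale GANs produce faces that "don't exist in reality".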
Common types of deepfakes and the motivations behind them
Various deepfake creation communities exist online, connecting deepfake experts with people who want to create synthetic media. Here are some common types of deepfakes and how threat actors use them.
Pornographic deepfakes
Deepfake pornography accounts for the vast majority of deepfake videos. Victims are typically women from a range of professions. Once published, non-consensual deepfake pornography can circulate indefinitely.
Political deepfakes
An individual or group with a particular political ideology could seek to disrupt an election by using deepfake video or audio to attack an opposing party.
Political leaders around the globe have already been targeted, and the threat goes beyond elections. Impersonations of political leaders and high-ranking military personnel could lead to geo-political conflict.
Deepfakes for financial crimes
For several years hackers have convinced victims to transfer funds to false accounts, typically by using emails impersonating CEOs and other business leaders.
We now have evidence that hackers have progressed to using synthetic audio to execute the same crime. Criminals could expand on this method by impersonating business leaders to manipulate stock prices, for example by having a fake CEO announce false information.
Deepfakes for extortion and harassment
Individuals with grudges could attack others with deepfake technology in both personal and business environments. The outcomes of divorce proceedings, job applications and vendor bidding competitions could all be affected.
What can be done about deepfakes?
No single person, entity or technology solution can control the creation and distribution of digital content from end to end. The content lifecycle is facilitated by a combination of people, hardware and software, and it plays out in cyberspace, which is designed for sharing information, including deepfake videos and audio, easily and quickly. Once content is shared on the internet, it can be extremely difficult, if not impossible, to remove.
The Federal Government's eSafety website notes that while deepfake technology is advancing rapidly, some signs can help identify fake photos and videos.2
These include:
- blurring, cropped effects or pixelation (small box-like shapes), particularly around the mouth, eyes and neck
- skin inconsistency or discolouration
- inconsistency across a video, such as glitches, sections of lower quality and changes in the lighting or background
- badly synced sound
- blinking or movement that seems unnatural or irregular
- gaps in the storyline or speech.
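Some of these cues lend themselves to automated checks. The toy sketch below, run against synthetic data rather than a real video, targets the "changes in the lighting" cue by flagging abrupt jumps in average frame brightness between consecutive frames. The function name, threshold and frame data are illustrative assumptions, not a standard detection method; real deepfake detectors use trained models rather than hand-set thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a video: 100 "frames", each an 8x8 grayscale image
# with pixel values in [0, 1].
frames = rng.uniform(0.4, 0.6, (100, 8, 8))
frames[60:] += 0.3  # simulate an abrupt lighting change at frame 60

def lighting_jumps(frames, threshold=0.1):
    """Return indices of frames whose mean brightness jumps sharply
    relative to the previous frame."""
    brightness = frames.mean(axis=(1, 2))     # average brightness per frame
    jumps = np.abs(np.diff(brightness))       # frame-to-frame change
    return np.where(jumps > threshold)[0] + 1  # +1: index the later frame

print("suspicious frames:", lighting_jumps(frames))  # flags frame 60
```

A human reviewer would still need to judge whether a flagged jump is a cut, a camera flash or a splice, which is why the contextual questions below matter as much as the technical signs.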
If in doubt, question the context. Ask yourself if it's what you'd expect that person to say or do, in that place, at that time.
Transferring the deepfake risk
The cyber insurance industry is evolving as new cyber threats surface. The most comprehensive policies pay for data breach crisis management, including lawyers, IT forensics investigators, credit monitoring services and public relations experts. They may also reimburse their clients for defending and settling lawsuits.
However, many policies require specific conditions to trigger coverage, and damage caused by impersonation in a deepfake video or audio may not be covered. In view of the latest deepfake threats, there are three potential losses to consider when negotiating insurance cover.
- Lost funds. A deepfake social engineering scam resulting in unauthorised funds transfer can lead to immediate and significant financial harm.
- Business interruption and other costs. Your focus on addressing a deepfake impersonation and attempting to manage the crisis could lead to financial loss and unexpected costs.
- Reputational harm. Impersonation may lead to both near-term and long-term reputational harm to your brand and ultimately impact your bottom line.
Read your cyber insurance policy carefully, explore other policies and consult your broker for advice on managing the deepfake threat. In addition to cyber insurance protection, Gallagher offers expertise, advice and resources for building business resilience to withstand cyber security incidents.