Deepfakes were once treated as internet tricks or political stunts. Today, they pose a serious cyber security threat. Organisations are increasingly seeing deepfakes used in fraud, impersonation and social engineering attacks that are far more convincing than traditional phishing.

For a long time, seeing was believing. Deepfakes challenge that assumption by making it possible to convincingly fake faces, voices and actions at scale.

These attacks don't rely on malicious links or obvious warning signs. They rely on trust, and that is what makes them effective. Deepfake scams are now being grouped with phishing and ransomware as one of the fastest-growing threats to business security.

From phishing emails to AI impersonation

Phishing is not new. Most organisations have been dealing with it for years. What has changed is the delivery.

Generative AI has made it easier for cybercriminals to impersonate real people convincingly. A short voice clip from a podcast, earnings call, or social media video can now be enough to clone someone’s voice. Video impersonation, once expensive and difficult, can now be created using cloud-based tools with minimal skill.

Instead of poorly written emails or suspicious links, attacks now look like:

    • A voice message that sounds like a chief executive asking for an urgent transfer
    • A video call that appears legitimate and familiar
    • A request that fits neatly into a busy workday

This shift is turning cyber threats from a technical problem into a human one.

What deepfakes are and why they work

Deepfakes use artificial intelligence to create realistic audio, images, or video of people saying or doing things they never actually did. Modern AI models analyse real recordings and learn how a person speaks, moves, and reacts.

Attackers no longer need specialist hardware or large datasets:

    • Seconds of audio can be enough to clone a voice
    • A small number of images can generate a convincing likeness
    • Cloud platforms handle the processing

This accessibility is why deepfake scams have spread quickly and why they are no longer limited to high-value or high-profile targets.

The scale of the problem

Deepfakes are already having a measurable impact on organisations.

In a 2025 analysis of weaponised AI, TechRadar reported that a single financial institution recorded more than 8,000 verified deepfake-enabled fraud attempts, contributing to over US $347 million in global losses in one year. This reflects how quickly AI impersonation has moved from experimentation to large-scale abuse.

At the same time, the volume of synthetic media continues to grow rapidly, making deepfake tools easier to access and harder to control. As availability increases, so does the likelihood of these techniques being used against everyday business processes.

Real incidents driving the risk home

This threat is no longer theoretical.

An investigation by The Guardian revealed how fake investment schemes using deepfake videos of public figures defrauded more than 6,000 victims of approximately US $35 million before being uncovered. The scams worked because the videos appeared credible and trustworthy to those targeted.

Similar techniques are now being used inside organisations. Voice cloning and video impersonation have been reported in corporate environments, where employees believed they were responding to legitimate requests from senior leaders or colleagues. In several cases, this resulted in significant financial transfers before the deception was detected.

These attacks succeed because they feel authentic and because they bypass many traditional security controls.

Why spotting deepfakes is getting harder

For years, advice on spotting deepfakes focused on looking for obvious mistakes. Poor lip syncing, strange lighting or unnatural facial movement were often enough to raise suspicion. That guidance is becoming less useful as the technology improves.

Early deepfakes often had visible flaws. Modern deepfakes usually don't.

Advances in AI models mean today’s synthetic audio and video are far more consistent and believable. Facial expressions look natural. Voices sound familiar. Video quality no longer degrades in the way it once did. In many cases, there is nothing obviously “wrong” with the content itself.

This creates a real challenge for organisations. People are being asked to make decisions quickly, often under pressure, based on signals they have learned to trust. A recognisable face on a video call. A familiar voice leaving a voicemail. A request that sounds reasonable and urgent.

Some warning signs still exist, particularly in lower-quality attacks. There may be subtle delays between speech and movement, an unnatural tone to the voice, or requests that bypass normal approval processes. But these cues are easy to miss in the moment, especially when the request appears to come from someone senior.

The reality is that human judgement alone is no longer a reliable defence. As deepfakes become more convincing, organisations cannot rely on people to spot deception by sight or sound alone. Detection now depends just as much on strong processes, verification steps and cultural permission to pause and question, even when everything appears legitimate.

What this means for organisations

Deepfakes change the nature of cyber risk. They don't exploit software vulnerabilities first. They exploit people, processes, and assumptions about trust. When a voice sounds right or a face looks familiar, existing controls are often bypassed without hesitation.

This is why deepfakes introduce risks that go well beyond financial loss:

    • Fraud through executive impersonation
    • Data exposure through manipulated access requests
    • Loss of trust in digital communication
    • Legal and compliance challenges where audio or video evidence is involved

When any call, message, or video could be synthetic, organisations need stronger ways to establish and verify trust.

This is something The Missing Link increasingly sees when working with organisations reviewing their security awareness and verification processes. Traditional controls alone are no longer enough.

How to reduce deepfake risk

As deepfakes become harder to spot, relying on people to “notice something off” is no longer enough. Effective defence starts with accepting that sight and sound can be convincingly faked, and that technology alone will not solve the problem.

Reducing deepfake risk requires a combination of clear processes, informed people and the right supporting controls. Organisations that do this well focus less on perfect detection and more on verification, awareness and culture.

In practice, that means four core measures:

    • Strong verification processes: Sensitive requests, particularly those involving payments or access changes, should always be confirmed through a separate channel (see the sketch after this list).

    • Awareness that reflects modern threats: Security awareness training must cover AI impersonation and voice cloning, not just email phishing.

    • Technology as a supporting control: Detection tools and monitoring can help flag unusual behaviour, but they work best when paired with strong processes.

    • A culture of questioning: People should feel comfortable challenging unexpected requests, even when they appear to come from leadership.
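To make the first of these measures concrete, the sketch below shows one way an out-of-band check could be expressed in a simple internal approvals script. It is illustrative only: the SensitiveRequest fields, the lookup_verified_contact directory helper and the callback step are assumptions made for the example, not a description of any particular tool.

```python
# Minimal sketch only: the class names, the lookup_verified_contact helper
# and the approval flow below are illustrative assumptions, not a real
# product or API.

from dataclasses import dataclass
from enum import Enum


class RequestStatus(Enum):
    PENDING_VERIFICATION = "pending_verification"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class SensitiveRequest:
    requester_name: str   # who the caller or sender claims to be
    channel: str          # e.g. "voice", "video", "email"
    action: str           # e.g. "change payee bank details"
    amount: float
    status: RequestStatus = RequestStatus.PENDING_VERIFICATION


def lookup_verified_contact(name: str) -> str:
    """Return a contact number from an internal directory, never from the
    request itself. Placeholder: a real system would query HR or IT records."""
    directory = {"Jane (CFO)": "+61 3 9000 0000"}  # illustrative entry
    return directory[name]


def verify_out_of_band(request: SensitiveRequest,
                       confirmed_by_callback: bool) -> SensitiveRequest:
    """Approve only after a call-back on an independently sourced number.

    The channel the request arrived on (voice, video or email) is never
    treated as proof of identity on its own.
    """
    callback_number = lookup_verified_contact(request.requester_name)
    print(f"Request via {request.channel}: call {request.requester_name} "
          f"back on {callback_number} before acting.")

    request.status = (RequestStatus.APPROVED if confirmed_by_callback
                      else RequestStatus.REJECTED)
    return request


# Example: an urgent 'CFO' voice message asking to change payee bank details
request = SensitiveRequest("Jane (CFO)", "voice", "change payee bank details", 250_000)
request = verify_out_of_band(request, confirmed_by_callback=False)
print(request.status)  # stays rejected until independently confirmed
```

The key design choice is that the callback number comes from an internal directory rather than from the request itself, so a convincing voice or video cannot supply its own "verification" contact.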

Taking the next step

Together, these measures help reduce the likelihood of deepfake attacks succeeding. But they also highlight an uncomfortable reality: many organisations are unsure where their biggest human-risk gaps actually are.

Deepfake and impersonation attacks thrive in that uncertainty.

A Workforce Security Assessment provides a practical way to understand how exposed your organisation is to trust-based attacks, including social engineering and impersonation. It helps identify where behaviours, awareness and processes may be putting your business at risk, before those weaknesses are exploited.


Author

David Bingham

David Bingham is Security Sales Manager for The Missing Link’s Southern Region, where he leads with energy, empathy and a love of complex problem-solving. Known for blending strategic thinking with a passion for people, David creates space for his team—and clients—to thrive. He’s all about building trust, tackling cyber security challenges head-on, and keeping the conversation real (and fun). Whether he’s in a high-rise talking strategy or behind the decks as Melbourne techno DJ Obsessive Behaviour, David brings the same sharp focus, infectious energy and creative spark to everything he does.