What used to be an internet oddity has developed into a widely destructive political and social force: deepfakes. They have become one of the most dangerous forms of phishing in use today.

As early as 2018, 83% of businesses had experienced a phishing attack. And with the rise of artificial intelligence (AI), the tactics employed by cyber criminals have evolved further.

Deepfakes use AI to create realistic-looking photos and videos of people saying and doing things they never actually said or did. Most of this footage is produced on high-end desktops with powerful graphics cards, or with computing power rented in the cloud.

It all started with a PhD research project

To create a deepfake, the hacker must have access to video or voice recordings of the individual they are trying to impersonate. By processing these recordings through an AI algorithm, they can produce a convincing visual or vocal imitation.

The core technology that makes deepfakes possible is a branch of deep learning known as generative adversarial networks (GANs), introduced in 2014 by Ian Goodfellow, then a PhD student at the University of Montreal, one of the world's leading AI research institutes. GANs give neural networks the power not just to perceive, but to create. Three years later, the term "deepfake", a blend of "deep learning" and "fake", emerged on the Internet.

A GAN pits two artificial intelligence algorithms against each other. The first algorithm, known as the generator, is fed random noise and turns it into an image. This synthetic image is then mixed into a stream of real images that are fed to the second algorithm, known as the discriminator, which tries to tell real from fake. By repeating this process many times, with each network learning from the other's feedback, both 'opponents' improve their performance. Eventually, the generator starts producing highly realistic faces of completely nonexistent people.
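The adversarial loop described above can be sketched in a toy one-dimensional example. This is an illustrative sketch only, not how real image deepfakes are trained (those use deep convolutional networks): here the "real data" is just numbers drawn around 4.0, the generator is a single affine map, the discriminator is a logistic classifier, and the gradients are derived by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "real images": numbers drawn from N(4, 0.5).
def real_samples(n):
    return rng.normal(4.0, 0.5, n)

# Generator: turns noise z into a sample, g(z) = wg*z + bg.
wg, bg = 1.0, 0.0
# Discriminator: scores a sample, D(x) = sigmoid(wd*x + bd).
wd, bd = 0.1, 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

lr = 0.01
initial_gap = abs(bg - 4.0)  # generator starts centred at 0, real data at 4

for step in range(3000):
    z = rng.normal(0.0, 1.0, 32)
    fake = wg * z + bg
    real = real_samples(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    ds_real = sigmoid(wd * real + bd) - 1.0  # dLoss/ds on the real batch
    ds_fake = sigmoid(wd * fake + bd)        # dLoss/ds on the fake batch
    wd -= lr * (ds_real @ real + ds_fake @ fake) / 32
    bd -= lr * (ds_real.sum() + ds_fake.sum()) / 32

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    ds = sigmoid(wd * fake + bd) - 1.0       # non-saturating generator loss
    dx = ds * wd                             # chain rule back through D
    wg -= lr * (dx @ z) / 32
    bg -= lr * dx.sum() / 32

final_gap = abs(bg - 4.0)
print(f"generator mean moved from 0.0 to {bg:.2f} (real mean is 4.0)")
```

With each loop iteration the generator's output distribution drifts toward the real one, which is the feedback dynamic the paragraph describes.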

Another way to make deepfakes is the face-swap video. Hackers run thousands of face shots of the two people they want to swap through an AI algorithm called an encoder. The encoder finds and learns the similarities between the two faces and reduces them to their shared common features, compressing the images in the process. A second algorithm, the decoder, then recovers the faces from the compressed images. Because the faces are different, the hacker trains one decoder to recover the first person's face and another decoder to recover the second person's face. To perform the swap, they feed encoded images into the "wrong" decoder: for example, a compressed image of person A's face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A.
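The shared-encoder, two-decoder wiring can be shown structurally in a few lines. This is a skeleton only: the dimensions are toy values, the linear maps stand in for trained convolutional networks, and no training is performed, so the output is not an actual face; the point is the data flow of the swap.

```python
import numpy as np

rng = np.random.default_rng(1)

FACE_DIM, LATENT_DIM = 64, 8  # toy sizes; real systems work on image tensors

# One shared encoder: compresses any face into a small latent vector.
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) / np.sqrt(FACE_DIM)

# Two person-specific decoders (in practice each is trained only on that
# person's face shots; untrained random weights stand in here).
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(face):
    return W_enc @ face

def decode_b(latent):
    return W_dec_b @ latent

# The swap: a frame of person A goes through the shared encoder, but is
# reconstructed by person B's decoder. After training, this yields B's
# face wearing A's expression and head orientation.
frame_of_a = rng.normal(size=FACE_DIM)
latent = encode(frame_of_a)        # shared features: expression, pose
swapped = decode_b(latent)         # rendered as person B

print(latent.shape, swapped.shape)  # (8,) (64,)
```

The key design point is that the encoder is shared: because it must serve both decoders, it is forced to learn features common to both faces (expression, pose, lighting), which is exactly what makes the cross-wired reconstruction work.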

How to spot deepfakes

The amount of deepfake content online is growing rapidly. Although creating a deepfake takes some expertise, there are plenty of tools available to help. Worse still, some companies have made a business out of it.

At the beginning of 2019, there were 7,964 deepfake videos online, according to a report from the startup Deeptrace; just nine months later, that figure had jumped to 14,678, 96% of which were pornographic.

Also in 2019, the first case of AI-based voice fraud was reported. A UK-based subsidiary of a German energy company paid nearly US$243,000 into a Hungarian bank account allegedly belonging to a supplier, after being phoned by a fraudster who mimicked the voice of the German parent company's CEO.

Spotting a deepfake gets harder as the technology improves; however, poor-quality deepfakes can give themselves away through:

  • Bad lip-syncing
  • A patchy skin tone
  • Flickering around the edges of transposed faces
  • Badly rendered hair, especially on the fringe
  • Badly rendered jewellery and teeth
  • Strange lighting effects, such as inconsistent illumination and reflections on the iris

From pornography to a widely destructive political force

Because the technology is so widely accessible, and literally anyone with a computer and internet access can create such faked footage, the use of deepfakes has begun to spread from the dark corners of the web to the business world, society at large, and the political arena.

In April 2020, a political group in Belgium released a deepfake video of the Belgian prime minister giving a speech that linked the COVID-19 outbreak to environmental damage and called for drastic action on climate change.

The Brookings Institution summed up the range of political and social dangers that deepfakes pose: "distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office."

Deepfakes could also mean trouble for the courts, and they pose a personal security risk: by mimicking biometric data, they can trick systems that rely on face, voice, vein or gait recognition.

In summary, it does not take much to imagine the worst-case scenario: a society in which no one trusts anyone anymore.


If you liked this article, you may also like:

Privileged access in the new world

Red Teaming and the origins of anonymous hacking

What do you do after a data breach