Impersonation is a problem for marketplaces and financial institutions alike. It undermines people’s trust that their money and sensitive information are going only where they intend. And now a new type of media, powered by artificial intelligence, is making it even more challenging to tell if someone is really who they claim to be. It’s called a “deepfake.”
So what is a deepfake? How are deepfakes made? And what potential abuses of the technology are governments, businesses, and other organizations worried about? We’ll discuss the answers to these questions below.
A deepfake is a piece of media created or manipulated by artificial intelligence to make a person depicted by the media seem as if they are someone else. It can involve manipulating an image, an audio track, a video, or any combination of those. “Deepfake” is a mash-up of “deep learning” and “fake.”
Tools for editing images, sounds, or videos aren’t new. What’s new about deepfakes, though – and what gives the term its meaning – is the use of machine learning techniques to make or modify media precisely enough that it’s incredibly tough to distinguish from something legitimate.
Current cutting-edge deepfake AI is powered by two machine learning models working against each other. The “generator” algorithm is trained on sample imagery, audio, and/or video to create a new piece of media – or manipulate an existing one – so that it resembles the samples as closely as possible.
The “discriminator” algorithm, meanwhile, is trained to recognize distinctive features in the samples, and point out where the “generator” misses them so it can go back and correct those inconsistencies.
This is known as a generative adversarial network, or GAN. Basically, it works like this: the generator produces a candidate fake, the discriminator compares it against the real samples and flags the inconsistencies it finds, and the generator uses that feedback to improve its next attempt. The cycle then repeats.
This process allows the generator to eventually create or manipulate media so accurately that neither artificial intelligence nor human intelligence can easily tell the difference between a deepfake and the genuine media it’s based on.
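To make the generator-versus-discriminator loop concrete, here is a deliberately toy sketch in Python. It is not a real GAN – actual deepfake models are neural networks trained with gradient descent – but it mirrors the same adversarial structure: the discriminator learns a distinctive feature of the real samples (here, just their average value), and the generator uses the discriminator’s critique to correct its fakes. All names (`Discriminator`, `Generator`, `REAL_MEAN`) are illustrative, not from any real library.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # stands in for the "genuine media" being imitated

def real_sample():
    """A sample of genuine data for the discriminator to study."""
    return random.gauss(REAL_MEAN, 0.1)

class Discriminator:
    """Learns a distinctive feature of real samples (here, a running
    mean) and points out how far a given fake misses it."""
    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def train_on_real(self, x):
        self.count += 1
        self.estimate += (x - self.estimate) / self.count  # running mean

    def critique(self, fake):
        # Signed error: the "inconsistency" the generator must correct.
        return fake - self.estimate

class Generator:
    """Produces fakes from its current parameter and adjusts that
    parameter using the discriminator's feedback."""
    def __init__(self):
        self.mu = 0.0  # starts far from the real data

    def generate(self):
        return random.gauss(self.mu, 0.1)

    def learn(self, error, lr=0.5):
        self.mu -= lr * error  # correct toward what fooled detection

disc, gen = Discriminator(), Generator()
for _ in range(200):
    disc.train_on_real(real_sample())  # discriminator studies real samples
    fake = gen.generate()              # generator attempts a forgery
    gen.learn(disc.critique(fake))     # generator corrects inconsistencies

# After training, the generator's parameter sits near the real data,
# so its fakes are hard to tell apart from genuine samples.
print(gen.mu)
```

In a real GAN both sides are deep networks and the discriminator outputs a real-vs-fake judgment rather than a signed error, but the dynamic is the same: each round of critique leaves the generator’s output a little harder to distinguish from the genuine article.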
Deepfake technology has garnered plenty of controversy for its ability to facilitate potentially abusive – and even illegal – activities. Here are some deepfake use cases that governments, businesses, and other organizations are on guard against.
Given that fraud inherently involves deceiving others, deepfakes are potent new tools for fraudsters. They allow criminals to manipulate pictures, audio tracks, and even videos to present themselves as other people convincingly. This opens up possibilities for all sorts of fraudulent and otherwise unlawful activities.
For example, if a criminal steals a person’s sensitive information and gets good enough samples of what they look and sound like, they can use deepfakes to create phony ID credentials that are very difficult to identify as counterfeit.
This is especially worrisome, because deepfakes can modify videos and audio tracks – not just static pictures – to make a person in them seem as if they are someone else. So deepfakes may allow criminals to fool supposedly higher-security forms of identity verification, such as biometrics and liveness detection.
A deepfake video or voice deepfake can also be used as part of a social engineering scam. A criminal can convincingly appear and/or sound like an authority figure or trusted individual, then instruct the victim to hand over money or other sensitive information. This actually happened to a British energy company’s CEO in 2019, costing the business roughly $243,000.
A popular use of deepfakes is to deepfake a celebrity – to make a regular person look and/or sound as if they’re someone famous. Sometimes this is done as a playful art form, like this TikTok user who creates deepfakes of Tom Cruise. Other times, it can be used maliciously.
Either way, a celebrity deepfake can be problematic for a number of reasons. It can leverage the celebrity’s social influence to build support for causes that the celebrity themselves does not actually endorse.
Similarly, it could use the celebrity’s cultural authority to propagate hoaxes or other misinformation. It could also simply be used to damage the celebrity’s reputation by making it seem as if they are doing or saying things out of character.
One widespread malicious use of celebrity deepfakes is manipulating explicit images, audio, and/or video to make it look like a celebrity is engaging in a sexual act. This can cause embarrassment and reputational damage, and is considered a crime in many places.
When it comes to how deepfakes are made, there’s an important distinction between training an artificial intelligence model to make deepfakes and using an already-trained deepfake maker to actually create one.
Most deepfake apps are self-contained software programs that don’t need a lot of sample data to create or modify a piece of media, so a deepfake generator can produce a deepfake in under 30 seconds. Training a machine learning model (or pair of models, in the case of GANs) to make deepfakes that quickly and with so little input, on the other hand, is much more complicated. It requires a huge number of samples of a person’s likeness and/or voice – hours upon hours of footage, in the case of video – to capture the essential details.
For example, what are the fundamental features on a person’s face? How does someone look from multiple angles, or under certain lighting conditions? What is the cadence, tone, or accent of a person’s voice?
Artificial intelligence algorithms used for deepfakes need to learn how to take all of these facets into account in order to accurately replicate what a given person looks and/or sounds like.
And since most current deepfakes aren’t made using GANs, deepfake app makers often need to manually adjust the underlying algorithms to avoid producing deepfakes with obvious signs that they’re forgeries.
The technology behind deepfakes is projected to get even more advanced as time goes on. And unfortunately, that will make it even easier for fraudsters to commit crimes behind synthetic or stolen identities – even when KYC checks involving images, audio, and/or video are performed.
This highlights why it’s more important than ever to have a multi-pronged anti-fraud strategy. The fraud team needs to be able to monitor for the many different kinds of suspicious signs that abuse or financial crime could be happening at a financial institution or marketplace.
Try a demo of Unit21’s platform today to see how we can give you comprehensive coverage.