Deepfakes Explained: The AI That's Making Fake Videos Too Convincing (2024)

Seeing is believing. At least, that was the case before we realized that people could easily and convincingly doctor videos to spread hoaxes and rewrite history. While we've found ways to debunk most hoax images, one technological development continues to gain pace, making it ever harder to tell what's real and what's fake.

Deepfakes change what we thought was possible with doctored video. Here's what deepfakes are, as well as the risks this technology brings…

What Is a Deepfake?

The term "deepfake" combines "deep learning" and "fake", because artificial intelligence software trained in image and video synthesis creates these videos. A deepfake, then, is a fake video or image generated using AI to replace the face and voice of the person portrayed. The AI can superimpose the face of one subject (the source) onto a video of another (the target). More advanced forms of deepfake technology can synthesize an entirely new model of a person, combining the facial gestures of a source performer with images or video of the person they wish to impersonate.

The technology can build a facial model from limited visual data, even a single image. However, the more data the AI has to work with, the more realistic the result.

This is why politicians and celebrities are such easy targets for deepfakes, since there is so much visual data available online that the software can use. Since deepfake software is available on open-source platforms, people on the internet are continually refining and building upon the work of others.
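The article doesn't cover the internals, but the early open-source face-swap tools were commonly built around a simple idea: one shared encoder learns pose, expression, and lighting from both people, while a separate decoder is trained per identity; swapping means decoding person A's encoded frame with person B's decoder. The PyTorch sketch below is purely illustrative of that shared-encoder, two-decoder autoencoder concept; the architecture, sizes, and names are assumptions, not any specific tool's code.

```python
# Minimal, illustrative sketch of the shared-encoder / two-decoder autoencoder
# idea behind early face-swap tools. Sizes and names are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """faces_a, faces_b: batches of face crops, shape (N, 3, 64, 64), values in [0, 1]."""
    opt.zero_grad()
    # Each decoder reconstructs its own person, so the shared encoder learns
    # identity-agnostic features (pose, expression, lighting).
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(faces_a):
    """The 'swap': encode person A's frames, decode with person B's decoder,
    yielding B's face wearing A's pose and expression."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

This is why more training footage of the target makes swaps more convincing: the decoder sees the person's face under more poses and lighting conditions.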

Where Do Deepfakes Come From?

The technology behind deepfakes was developed for a multitude of purposes. Much like Photoshop, the software has professional, entertainment, and hobbyist uses. And just like Photoshop, even though its creators had no malicious intent, that hasn't stopped people from misusing it.

Face-swapping technology was first used mainly in the movie industry. One of the most famous instances is the 2016 film Rogue One: A Star Wars Story, in which filmmakers used face-swapping and video synthesis technology to recreate the character Grand Moff Tarkin.

A younger version of Princess Leia was also created in the film. In both instances, models of the original actors' faces were superimposed onto stand-in actors.

Apps like Snapchat also use face-swapping technology to create fun filters for users. The developers behind these apps continually refine face detection and tracking to apply these filters more effectively.

Others have developed video synthesis tools to create holograms for educational purposes. For example, one project developed video and facial synthesis software so that the testimony of Holocaust survivors could be presented as interactive holograms at a museum.

Deepfake Technology Advances Quickly

Machine learning makes life easier, but in this case, it makes fakery significantly easier. Just like AI-generated art is changing the art industry, deepfakes have changed the nature of online video. But some of these changes are negative.

Firstly, deepfake software is widely and freely available. FakeApp, for example, is a popular choice for creating deepfakes. You don't need advanced skills to apply a face swap; the software does it for you.

But because AI and deep learning drive deepfakes, the technology also improves and becomes more convincing at an alarming rate. Sometimes these edits aren't visible to the naked eye, making it difficult to tell what's real and what isn't. There are still some telltale signs that can help you spot a deepfake, such as clipped edges or faces that distort at certain angles.
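For illustration only, here's a toy Python heuristic (using OpenCV, assumed installed) that captures the "temporal inconsistency" intuition: the main face's bounding box shouldn't jump or resize implausibly between consecutive frames. Real detectors like the ones discussed later rely on trained models; the threshold and function names here are hypothetical, not a calibrated method.

```python
# Toy illustration of the "temporal inconsistency" intuition.
# This is NOT a real deepfake detector.
import cv2

# OpenCV ships this Haar cascade for frontal-face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def suspicious_frames(video_path, jump_threshold=0.35):
    """Return indices of frames where the main face box moves or resizes sharply.

    jump_threshold is an arbitrary fraction of the previous box width,
    chosen purely for illustration.
    """
    cap = cv2.VideoCapture(video_path)
    prev_box = None
    flagged = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Treat the largest detected face as the main subject.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            if prev_box is not None:
                px, py, pw, ph = prev_box
                # Relative change in position and size between consecutive frames.
                jump = (abs(x - px) + abs(y - py) + abs(w - pw) + abs(h - ph)) / pw
                if jump > jump_threshold:
                    flagged.append(frame_idx)
            prev_box = (x, y, w, h)
        frame_idx += 1
    cap.release()
    return flagged

# Example usage: print(suspicious_frames("clip.mp4"))
```

A crude check like this misses well-made fakes entirely, which is why the serious tools described below train AI models on large datasets of real and manipulated footage instead.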

In a world rife with fake news, convincing deepfakes could prove to be a chaotic force against what we believe to be true. The rise of deepfakes is also taking place at a time when AI voice synthesis and AI voice generators are advancing too.

Not only can AI generate fake videos, but it can also generate voice models for people. This means that you wouldn't need an impersonator to make it sound like a politician is making an outrageous statement. You can train AI to mimic their voice instead.

The Consequences of Deepfakes

While you can use deepfakes to make entertaining videos, many people also use the technology for malicious purposes. People have used FakeApp to create fake videos of celebrities in compromising scenarios.

Furthermore, in most countries, no laws deal with this kind of content yet, making it difficult to regulate. While we're still some way away from the dystopia ruled by misinformation and false video evidence that we see in movies like The Running Man, we are already all too familiar with the effects of fake news.

The consequences of deepfakes used for political purposes are twofold. Firstly, they make fake news much easier to spread. Video is more likely than text or images to convince people that something fictitious actually happened. People already believe headlines from fake websites with no evidence.

One example of a deepfake used to mislead people is the 2022 video that appeared to show Ukrainian President Volodymyr Zelenskyy telling his troops to surrender. It was proven fake, but the video still circulated on social media.

On the other hand, deepfakes could also help politicians dodge accountability: they can simply claim that a genuine audio or video recording is a deepfake.

How Are We Fighting Deepfakes?

It can be difficult to protect yourself from deepfake videos, but companies are working to make them easier to spot. A variety of tools aim to fight malicious fake videos, and many of them use AI to detect tampering.

The AI Foundation created a browser plugin called Reality Defender to help detect deepfake content online. Another plugin, SurfSafe, also performs similar checks. Both these tools aim to help internet users discern fact from fiction.

Reputable fact-checking websites have also expanded to calling out doctored videos. Meanwhile, Microsoft has also launched technology to help combat deepfake videos.

Even the US Department of Defense invested in software to detect deepfakes. After all, what would happen if a convincing video of a world leader appeared online, declaring war or a missile launch against another country? Governments need tools to quickly verify the legitimacy of a video.

Machine Learning's Unintended Consequences

There's no doubt that AI technology and deep machine learning improve our lives in many ways. But the technology also has unintended consequences. It's difficult to predict how people may use certain technology for malicious purposes, and deepfakes are one example of this.
