
Face Off: Researchers Battle AI-Generated Deep Fake Videos

Convincing Face-Swapping Clips Easy to Create With Gaming Laptops and Free Tools
Symantec's Vijay Thaware, left, and Niranjan Agnihotri discuss defenses for spotting deep-fake videos at Black Hat Europe.

Security researchers are facing off against deep-fake videos over fears that they might be used for nation-state disinformation campaigns or to ruin someone's reputation or social standing.


Deep fake is a portmanteau of "deep learning" and "fake" that refers to using advanced imaging technology and machine learning to convincingly superimpose one person's face onto another's in video footage.

Of course, so-called artificial intelligence - by which most people really mean machine learning - can be used for good, including building better information security defenses.

But AI or machine learning techniques can also be used for less savory purposes. Notably, the concept of deep fakes became popular when fake pornographic videos of celebrities began circulating online. And free tools have continued to fuel the fervor.

Given the easy availability of such technology, Vijay Thaware and Niranjan Agnihotri, India-based researchers at Symantec, worried that deep fakes might soon be used to influence elections or destabilize society, or by abusers to target individuals.

So as a side project to their day jobs, the pair set out to see whether they could build a tool that would help them reliably identify deep fakes.

On Thursday, in a Black Hat Europe briefing titled "AI Gone Rogue: Exterminating Deep Fakes Before They Cause Menace," the pair detailed their efforts to build a tool that can be used to spot deep fakes (see: 14 Hot Sessions at Black Hat Europe 2018).

Source: Vijay Thaware and Niranjan Agnihotri's Black Hat Europe presentation, drawn from research published by Alan Zucconi

"What does it take to make a deep fake? All it takes is a gaming laptop, an internet connection, some passion, some patience, of course, and maybe some rudimentary knowledge of neural networks," Thaware said.

Spotting deep fakes, however, proves more difficult. The researchers used 26 deep fake videos that they found online, plus others that they created themselves, to build and refine a detection tool based on Google's FaceNet face-recognition model.
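The presentation doesn't include the researchers' code, but the sketch below illustrates one plausible way FaceNet-style face embeddings could be used for such a check: sample frames from a clip, embed the detected face, and measure how far each embedding falls from known-genuine embeddings of the same person. It assumes the open-source facenet-pytorch package and OpenCV; the function names, sampling interval and distance scoring are illustrative assumptions, not the presenters' implementation.

```python
# Illustrative sketch only: flag frames whose face embedding drifts far from
# known-genuine embeddings of the subject. Not the Symantec researchers' tool.
import cv2
import numpy as np
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                                # face detector/aligner
embedder = InceptionResnetV1(pretrained='vggface2').eval()   # FaceNet-style embedder

def frame_embedding(frame_bgr):
    """Return a 512-d embedding for the largest face in a frame, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    face = mtcnn(rgb)                                        # aligned face tensor or None
    if face is None:
        return None
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0).numpy()

def suspicion_score(video_path, reference_embeddings, sample_every=15):
    """Mean distance of sampled frames from the subject's genuine embeddings."""
    cap = cv2.VideoCapture(video_path)
    distances, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            emb = frame_embedding(frame)
            if emb is not None:
                # distance to the closest known-genuine embedding
                distances.append(min(np.linalg.norm(emb - ref)
                                     for ref in reference_embeddings))
        idx += 1
    cap.release()
    return float(np.mean(distances)) if distances else None
```

In such a scheme, the reference embeddings would be built by running the same embedding step over verified footage of the person, and a clip would be flagged when its score exceeds a threshold calibrated on that genuine material.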

Nicolas Cage to the Rescue

The researchers trained the tool using clips of Nicolas Cage, Donald Trump and Vladimir Putin, and then used it to analyze other known-good and known-fake videos of those individuals. They claim they've had good results in doing so.

Source: Vijay Thaware and Niranjan Agnihotri

Their goal is to design a small, simple, fast and scalable tool that YouTube, Facebook and others could use to review videos at upload time and assess whether they appear to be fake.

"We believe that there's a huge scope of improvement for our model, and we're constantly working on improving our performance," Niranjan said.

The researchers' work draws on a substantial body of research into deep fakes, which they say continues to grow. They note that these four areas also offer promise for spotting face-swapping deep fakes:

  • Face liveness detection: Apple's Face ID uses this technique to ensure that someone isn't trying to unlock an iPhone, via facial recognition, by presenting a photograph of the registered owner's face.
  • Contextual intelligence mechanisms: "This technique ... will allow us to understand a video semantically by giving us information about what is going on in the background and giving us more metadata," Agnihotri said. In other words, attempts to face-swap or otherwise alter videos might leave tell-tale signs, provided researchers know what they're looking for.
  • Texture investigation: "When deep fake videos are created using image manipulation, or in our case, face swapping, the forging leaves behind certain patterns," Agnihotri said.
  • User interaction: What does "normal" look like? "We have an intention to create a data set which involves videos which contain normal head rotation and smiling and the normal actions of humans," he said. "It will be able to identify if the actions in a video are actual or not actual."

Don't Blink

Analyzing eye blinking in a video could also help reveal fakes.

Previous research into combatting deep fakes, cited by the pair, has found the following average blink rates:

  • Resting: 17 blinks/minute
  • Conversing: 26 blinks/minute
  • Reading: 4.5 blinks/minute

The researchers say most face-swapping videos eschew eye blinking. "Generally the actors in the deep fake videos are not seen blinking," Thaware said, because otherwise their blinking patterns tend to look weird.
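The talk doesn't spell out how blink rates would be measured, but a common open-source approach is the eye aspect ratio (EAR): track eye landmarks per frame, treat a sharp dip in the ratio as a closed eye, and count reopenings as blinks, then compare the per-minute rate against the averages above. The sketch below uses dlib's 68-point landmark model and OpenCV; the model path, threshold and blink-counting logic are assumptions for illustration, not the presenters' code.

```python
# Illustrative blink-rate sketch (EAR method), not the researchers' implementation.
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
# Assumed path to the standard dlib 68-landmark model file.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = list(range(42, 48))   # 68-landmark indices for one eye
RIGHT_EYE = list(range(36, 42))  # and the other
EAR_THRESHOLD = 0.21             # eye treated as closed below this ratio (assumed)

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes."""
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def blinks_per_minute(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        if ear < EAR_THRESHOLD:
            eye_closed = True
        elif eye_closed:        # eye reopened: count one blink
            blinks += 1
            eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

A clip of someone talking for a minute that yields a rate far below the roughly 17-26 blinks per minute cited above would, under this heuristic, be worth closer inspection.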

Beyond Face-Swapping

From left: Symantec's Vijay Thaware and Niranjan Agnihotri

The two researchers stress that their tool can only be used to help detect face-swapping deep fakes, and that spotting fakes that are less than one minute in length, or which don't clearly show the speaker's face, is difficult.

They note that the battle may not just be against face-swapping deep fakes. Powerful systems might eventually get used, Hollywood-style, to create or derive a model of someone's face that could be used to generate fully artificial yet convincingly real-looking videos.

"As artificial intelligence matures everyday, it gets better at producing genuinely fake content," Thaware said.

As an example of a convincing video that doesn't involve face swapping, he referenced AI-powered work by actor Jordan Peele, who earlier this year produced a video clip in which former U.S. President Barack Obama appears to be spouting gibberish. "This is how the future might look, and it's very scary," Thaware said.

Potential for New Legislation

Just as revenge porn has become the subject of new laws that aim to give victims a way to fight back, Thaware said that legal protection against deep fakes may be required. "Currently there are no laws or legal protection for individuals who have been victimized by deep fake videos," he said.

Agnihotri added: "Everything that gets invented for science can be abused as well."

"Imagine if a deep-fake video is created of a politician speaking utter rubbish or passing obscene comments just before voting day, and then this video is circulated and uploaded to social media," Thaware said. Another potential threat might be a disinformation campaign that spread massive quantities of deep fake videos to raise questions about whether any online videos were real.

Blockchain Potential

One conference attendee asked: "Have you ever thought about securing video hashes in a blockchain?" In other words: What if legitimate, verified video creators stored a hash of each video in a blockchain so viewers could check whether it had been tampered with?

So far, this approach apparently remains unproven. "There is a discussion going on, on the internet, about using blockchain to solve this, but I haven't come across any research paper that looks into that," Agnihotri said.
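Setting the blockchain anchoring aside, the verification half of the idea is straightforward: re-hash the downloaded file and compare it with the hash the creator published. A minimal sketch of that step, assuming the viewer already has the published hash, is below; note that it only catches bit-level tampering, since any re-encoding or recompression by a platform would also change the hash.

```python
# Minimal sketch of hash-based video verification; the step of publishing or
# looking up the hash (e.g., in a blockchain transaction) is out of scope here.
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file in 1 MB chunks so large videos don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path, published_hex):
    """True only if the local file is bit-for-bit identical to the original."""
    return sha256_of_file(path) == published_hex.lower()
```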


About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and oversees European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.




