Detecting Deep Fakes

Updated: Sep 6, 2019

As research institutions and government agencies work to detect deep fakes (manipulated videos containing false information), producers are likely to counter those advances with more sophisticated fakes. Deep fakes are created by applying machine learning algorithms to still images of an individual, synthesizing facial movements into video form. Currently, video that lacks natural human movement offers clues for detecting a deep fake; however, rapidly evolving technology may soon erase these indicators.

  • In June 2018, researchers at the State University of New York applied these movement clues and artificial intelligence (AI) techniques to track eye blinking in videos, achieving a 95 percent detection rate for deep fake videos (a simplified blink-counting sketch follows this list). However, shortly after this method was published, the synthesis techniques used to create deep fakes were altered to eliminate the flaw.

  • Researchers at the University of California, Berkeley, and the University of Southern California built an AI system that detects deep fakes by using biometric models to determine whether “real” facial and head movements have been altered. The developers openly acknowledge that deep fake creators will likely adapt to this form of detection eventually, but they have decided not to release the code behind the method in order to slow that adaptation.
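
To illustrate the blink-rate idea, the sketch below counts blinks using the widely known eye-aspect-ratio (EAR) heuristic with dlib's 68-point facial landmarks. This is a simplified stand-in, not the SUNY researchers' published model; the landmark model file name, the EAR threshold, and the use of a low blink count as the flag are assumptions made for the example.

    # A simplified blink counter, assuming dlib, OpenCV, and SciPy are
    # installed and the standard 68-point landmark model file is available.
    import cv2
    import dlib
    from scipy.spatial import distance

    EAR_THRESHOLD = 0.21  # eye treated as closed below this (assumed value)
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def eye_aspect_ratio(eye):
        # EAR compares vertical to horizontal eye-landmark distances;
        # it drops toward zero when the eyelid closes.
        a = distance.euclidean(eye[1], eye[5])
        b = distance.euclidean(eye[2], eye[4])
        c = distance.euclidean(eye[0], eye[3])
        return (a + b) / (2.0 * c)

    def count_blinks(video_path):
        cap = cv2.VideoCapture(video_path)
        blinks, eyes_closed = 0, False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for face in detector(gray):
                shape = predictor(gray, face)
                # landmarks 36-41 outline one eye, 42-47 the other
                pts = [(shape.part(i).x, shape.part(i).y) for i in range(36, 48)]
                ear = (eye_aspect_ratio(pts[:6]) + eye_aspect_ratio(pts[6:])) / 2.0
                if ear < EAR_THRESHOLD and not eyes_closed:
                    blinks, eyes_closed = blinks + 1, True
                elif ear >= EAR_THRESHOLD:
                    eyes_closed = False
        cap.release()
        return blinks

People typically blink roughly 15 to 20 times per minute, so a clip whose count falls far below that rate would merit closer review; as noted above, newer synthesis tools insert blinks precisely to defeat this kind of check.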

Government Efforts


The US Defense Advanced Research Projects Agency (DARPA) established the Media Forensics (MediFor) program to develop tools to detect deep fakes. DARPA is bringing researchers together and providing funding to mitigate the risk posed by deep fakes. Experts are examining heat mapping and light levels in altered videos and searching for the absence of physiological actions, such as blinking and breathing, with the goal of improving future detection and mitigation strategies for deep fake videos. One such lighting cue is sketched below.
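
As one illustration of a lighting cue, the sketch below compares the average brightness of the detected face against its immediate surroundings across frames. It is a minimal stand-in, not MediFor tooling; the Haar cascade face detector and the use of ratio variability as the flag are assumptions made for the example.

    # A minimal lighting-consistency check, assuming OpenCV and NumPy.
    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def lighting_ratio_instability(video_path):
        cap = cv2.VideoCapture(video_path)
        ratios = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5)[:1]:
                face = gray[y:y+h, x:x+w]
                # widen the box so the crop takes in surrounding background
                pad = w // 2
                y0, y1 = max(0, y - pad), min(gray.shape[0], y + h + pad)
                x0, x1 = max(0, x - pad), min(gray.shape[1], x + w + pad)
                surround = gray[y0:y1, x0:x1]
                ratios.append(face.mean() / (surround.mean() + 1e-6))
        cap.release()
        # On genuine footage the face/background brightness ratio drifts
        # slowly; a face lit differently from the scene yields an unstable
        # or shifted ratio.
        return float(np.std(ratios)) if ratios else 0.0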

In 2019, DARPA announced the launch of a second program, Semantic Forensics, designed to spot the semantic errors, such as mismatched earrings, that automated media-manipulation systems make when processing large amounts of data.


Blurred facial areas in poorly made deep fake (Source: YouTube)

It is important to be able to identify poorly made deep fakes: videos produced with weakly configured algorithms or too few source photographs reveal easily identifiable flaws. Flickering facial areas, blurred facial or body features, and boxes around the face may be the clues a viewer needs to spot a deep fake.
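
The flicker cue can also be screened for programmatically. The sketch below, a minimal example rather than an established tool, measures frame-to-frame pixel change inside the detected face box; the Haar cascade detector, the 128-pixel crop size, and the use of difference-signal variability as the score are assumptions made for the example.

    # A minimal flicker check, assuming OpenCV and NumPy.
    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_flicker_score(video_path):
        cap = cv2.VideoCapture(video_path)
        prev, diffs = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)
            if len(faces) == 0:
                prev = None
                continue
            x, y, w, h = faces[0]
            crop = cv2.resize(gray[y:y+h, x:x+w], (128, 128))
            if prev is not None:
                # mean absolute pixel change between consecutive face crops
                diffs.append(float(np.mean(cv2.absdiff(crop, prev))))
            prev = crop
        cap.release()
        # Natural motion changes smoothly; synthesis flicker shows up as a
        # jittery difference signal, so report its variability.
        return float(np.std(diffs)) if diffs else 0.0

Scores well above those of comparable genuine footage would prompt a closer look; any threshold would need to be calibrated against known-authentic video from the same source.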


This is the NTIC’s third product in a series focused on deep fakes at the UNCLASSIFIED level. The first product provided an overview of deep fake technology and the second focused on how it can be used to spread disinformation.



The NTIC is governed by a privacy, civil rights, and civil liberties protection policy to promote conduct that complies with applicable federal, state, and local laws. The NTIC does not seek or retain any information about individuals or organizations solely on the basis of their religious, political, or social views or activities; their participation in a particular noncriminal organization or lawful event; or their races, ethnicities, citizenships, places of origin, ages, disabilities, genders, or sexual orientations. No information is gathered or collected by the NTIC in violation of federal or state laws or regulations.