Cybersecurity faces an emerging threat generally known as deepfakes.
Malicious use of AI-generated synthetic media, potentially the most
powerful cyber-weapon in history, is just around the corner, and the
cybersecurity industry has only a short time to get ahead of it before
it challenges public trust in reality. Underscoring the danger, Hany
Farid, the "father" of digital image forensics, told The Washington Post,
"Increasingly accessible tools for creating convincing fake videos
are a deadly virus. However, the number of people working on the
video-synthesis side, as opposed to the detector side, is 100 to 1."
Nation-states and Hollywood VFX artists have been able to manipulate media since the medium's earliest days, but because the cost of producing synthetic media has fallen dramatically, anyone can now download deepfake software and create convincing fake videos in their spare time. It will soon be as easy to fabricate a video as it is to apply an Instagram filter. Celebrities and politicians are the primary targets of this weaponized technology: deepfakes swap celebrities' faces into porn videos and put words in politicians' mouths, but they could do far worse, and it may only be a matter of time before the general public is at risk as well.
Deepfakes are such a threat to the United States that the Defense Department is launching a project to repel "large-scale, automated disinformation attacks". The Pentagon's Joint Artificial Intelligence Center recently declared that deepfakes pose a very real threat to national security. Rep. Adam B. Schiff (D-Calif.), who chairs the U.S. House Intelligence Committee, said, "I don't think we're well prepared at all. And I don't think the public is aware of what's coming." At the state level, Texas this month became the first to criminalize deepfakes.
From a cybersecurity perspective, we have to address all known forgery methods with the highest accuracy possible and develop generalizable artifact-detection methods for "zero-day deepfakes". The science of detecting deepfakes is, however, effectively an arms race: those who develop deepfake technology are acutely aware of its tremendous power for abuse, and identifying previously unseen tampered content is technically challenging, which is why detection research must keep pace with, or stay ahead of, generation techniques. Given that urgency, the field needs solid detection algorithms in place before we find ourselves in the eye of the storm.
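To make "artifact detection" concrete, here is a minimal, illustrative sketch of one frequency-domain heuristic; it is a toy example, not our engine or any specific published detector. GAN upsampling layers are known to leave anomalous high-frequency fingerprints in generated frames, so an unusually large share of spectral energy at high frequencies can flag a frame for closer inspection. The function names, the 0.25 radial cutoff, and the 0.6 decision threshold are all hypothetical values chosen for illustration.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of a frame's spectral energy above a radial frequency cutoff.

    Toy heuristic: GAN upsampling often leaves anomalous high-frequency
    fingerprints, so an outlying ratio can mark a frame for closer review.
    """
    # 2-D power spectrum of the grayscale frame, DC shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance of each frequency bin from the center, normalized so
    # the shorter image axis maps to [0, 1].
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    high = spectrum[radius > cutoff].sum()
    return float(high / spectrum.sum())

def looks_suspicious(image: np.ndarray, threshold: float = 0.6) -> bool:
    """Flag frames whose high-frequency energy share exceeds a threshold."""
    return high_freq_energy_ratio(image) > threshold

# Usage sketch: a smooth gradient concentrates energy near DC, while
# white noise (standing in for heavy synthesis artifacts) spreads it out.
smooth = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
noisy = np.random.default_rng(0).standard_normal((64, 64))
print(looks_suspicious(smooth), looks_suspicious(noisy))
```

A production system would of course combine many such signals (spatial, spectral, temporal, physiological) inside a learned model rather than rely on a single hand-set threshold; the point here is only to show what one "artifact" signal looks like.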
Our deepfake detection technology was designed to detect deepfake videos and, more broadly, any fake content in visual and audio communication. After a year of research in synthetic media detection, we have built a multi-layered neural engine that spots deepfake content. When a platform integrates our technology, it automatically warns you when you are watching, reading, or hearing fake content. This will enable governments, social media platforms, instant messaging apps, and news and media organizations to detect AI-made forgery in digital content before it can cause social harm.