
Deepfake Detection Poses Problematic Technology Race

Researchers are increasingly investigating the origins and distribution of deepfakes: fake content generated with deep neural networks to appear authentic. With the US presidential election approaching, academic and industry researchers are working to prevent the spread of misinformation through deepfakes. Detection efforts have found success by focusing on unnatural behavior, such as irregular eye blinking. However, the creators of these deepfakes refine their output using a variety of techniques, including testing the content against the very programs intended to detect manipulated media.
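The blink-based detection mentioned above can be illustrated with a minimal sketch. This is a hypothetical heuristic, not any specific research system: production detectors use neural networks over eye regions, whereas here we assume per-frame eye-aspect-ratio (EAR) values have already been extracted, and simply flag videos whose blink rate is implausibly low. The names `count_blinks` and `looks_suspicious` and all thresholds are illustrative assumptions.

```python
def count_blinks(ear_values, threshold=0.2):
    """Count blinks: runs of consecutive frames where the eye-aspect
    ratio (EAR) drops below the closed-eye threshold."""
    blinks, in_blink = 0, False
    for ear in ear_values:
        if ear < threshold and not in_blink:
            blinks += 1          # eye just closed: start of a new blink
            in_blink = True
        elif ear >= threshold:
            in_blink = False     # eye reopened
    return blinks

def looks_suspicious(ear_values, fps=30, min_blinks_per_minute=4):
    """Humans blink roughly 15-20 times per minute; a far lower rate
    is one possible sign of synthesized footage (threshold assumed)."""
    minutes = len(ear_values) / fps / 60
    rate = count_blinks(ear_values) / minutes if minutes > 0 else 0
    return rate < min_blinks_per_minute
```

For example, a minute of footage with a flat EAR signal (no blinks at all) would be flagged, while footage with a normal blink cadence would pass.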

The feedback loop used to refine doctored videos is strikingly similar to "fully undetectable" (FUD) services that automatically scramble malware until it evades signature-based detection. Although spotting unnatural behavior helps researchers uncover misinformation campaigns, perpetrators can easily fold new detection programs into the feedback loop, ultimately rendering the tactic ineffective.
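The feedback loop described above can be sketched abstractly. This is a toy illustration, not real tooling: integers stand in for media files, a "detector" is any predicate that flags content, and `evade` and `perturb` are hypothetical names. The point is the structure of the loop: perturb, re-test against every known detector, repeat until nothing fires.

```python
def evade(content, detectors, perturb, max_rounds=1000):
    """Iteratively perturb `content` until no detector flags it,
    mirroring the FUD-style feedback loop: test, tweak, retest."""
    for _ in range(max_rounds):
        if all(not detect(content) for detect in detectors):
            return content       # slipped past every known detector
        content = perturb(content)
    return None                  # gave up: the detectors held

# Demo: a "detector" that flags even numbers; perturbation increments.
flags_even = lambda x: x % 2 == 0
evaded = evade(4, [flags_even], perturb=lambda x: x + 1)
```

Note how adding a newly published detector is just one more entry in the `detectors` list, which is exactly why the brief argues that behavior-based tells lose their value once they become known.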

Read More: Deepfake Detection Poses Problematic Technology Race

OODA Analyst

OODA comprises a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.