
DARPA Is Taking On the Deepfake Problem

The United States Defense Advanced Research Projects Agency (DARPA) is launching an initiative to battle the growing threat of deepfakes: images or videos doctored by artificial intelligence (AI) to show individuals saying or doing things they never said or did. According to the agency, deepfakes are increasingly used as part of “large-scale, automated disinformation attacks.”

The goal of DARPA’s Semantic Forensics (SemaFor) program is to develop tools that use common sense and logical reasoning to recognize deepfakes. DARPA warns that as countries like Russia deploy increasingly sophisticated deepfakes in influence campaigns, existing tools for identifying manipulated media “are quickly becoming insufficient.” To improve detection, the agency wants to train algorithms to spot “semantic errors” that are common in deepfakes, such as mismatched earrings on a person.
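SemaFor’s methods are not public, but the idea of a semantic-consistency check can be illustrated with a toy sketch: compare feature vectors extracted from two image regions that should match (say, crops of the left and right earring) and flag the image when they diverge. The function names, feature vectors, and threshold below are all hypothetical, standing in for whatever embeddings a real detector would compute.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_semantic_mismatch(feat_a, feat_b, threshold=0.9):
    """Flag a semantic inconsistency (e.g. mismatched earrings) when
    features of two regions that should match diverge too far.
    Threshold is illustrative, not tuned on real data."""
    return cosine_similarity(feat_a, feat_b) < threshold

# Toy vectors standing in for learned embeddings of two earring crops.
left_earring = [0.9, 0.1, 0.2]
matching_right = [0.88, 0.12, 0.19]
mismatched_right = [0.1, 0.9, 0.3]

print(flag_semantic_mismatch(left_earring, matching_right))    # False: consistent
print(flag_semantic_mismatch(left_earring, mismatched_right))  # True: possible manipulation
```

A real system would extract these features with a trained vision model and combine many such checks (lighting direction, reflections, anatomy) rather than relying on a single similarity score.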

Read more: DARPA Is Taking On the Deepfake Problem

OODA Analyst

OODA comprises a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.