CyberNews Briefs

Facebook, Microsoft Challenge Industry to Detect, Prevent ‘Deepfakes’

Facebook, Microsoft and several universities have launched a joint contest to boost efforts to fight the spread of deepfakes: audio or visual content doctored by artificial intelligence (AI). Deepfakes allow threat actors to spread disinformation and influence public opinion by making it appear that influential individuals, including government, corporate and military leaders, candidates in democratic elections, scientists and celebrities, said or did things they never actually said or did.

Facebook CTO Mike Schroepfer announced that the contest, called the Deepfake Detection Challenge (DFDC), aims to “catalyze more research and development” in the area of deepfakes, and to “ensure that there are better open-source tools to detect deepfakes.” Participants can win grants and awards for developing technology to prevent and detect deepfake videos.

Deepfakes are a growing threat to governments, corporations and citizens alike. Just last week, in the first known example of a successful deepfake voice scam, a UK CEO was tricked into transferring $243,000 to threat actors.

Read more: Facebook, Microsoft Challenge Industry to Detect, Prevent ‘Deepfakes’

OODA Analyst

OODA comprises a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.