08 Oct 2019

Future Proof Conference Announced

OODA is pleased to announce that our Future Proof conference will be held on March 19, 2020 in Tysons Corner, Virginia.

The Future Proof conference brings together hackers, thinkers, strategists, disruptors, leaders, technologists, and creators with one foot in the future to discuss the most pressing issues of the day and provide insight into the ways technology is evolving. Future Proof is not just about understanding the future, but about developing the resiliency to survive and thrive in an age of exponential disruption.

Read More
01 Oct 2019

China’s new 500-megapixel ‘super camera’ can instantly recognize you in a crowd

China is bound to intensify its already staggering facial recognition efforts now that researchers from two Chinese universities have developed a 500-megapixel facial recognition camera that can capture “thousands of faces at a stadium in perfect detail and generate their facial data for the cloud while locating a particular

Read More
27 Sep 2019

Google’s war on deepfakes: As election looms, it shares a ton of AI-faked videos

In an effort to boost research and development in the context of deepfake detection, Google has shared a database containing 3,000 deepfake videos with the new FaceForensics benchmark, a research project by researchers at the Technical University of Munich and the University Federico II of Naples. Deepfakes are audio or

Read More
18 Sep 2019

Guidance on Federal AI Regulations Coming Shortly, Federal CTO Says

US Federal Chief Technology Officer Michael Kratsios on Tuesday said the government will soon publish a first set of regulations governing the development of artificial intelligence (AI) technologies. According to Kratsios, the document will have legal force and will “set the tone globally on the way that we can be

Read More
09 Sep 2019

Facebook, Microsoft Challenge Industry to Detect, Prevent ‘Deepfakes’

Facebook, Microsoft and various universities have launched a joint contest to boost efforts to fight the spread of deepfakes, which are audio or visual content doctored by artificial intelligence (AI). Deepfakes allow threat actors to spread disinformation and influence public opinion by making it seem like influential individuals including government,

Read More
04 Sep 2019

A Harbinger Of Our Future: Reports Indicate Voice Deepfake Was Used To Scam A CEO Out Of $243,000

Adversaries in search of financial gain will innovate. We all need to accept that observation as a fact of life, meaning we should all stay agile and prepare for surprise. When it comes to deepfake video and audio, what is surprising is not that adversaries are using this technology,

Read More
03 Sep 2019

China’s Red-Hot Face-Swapping App Provokes Privacy Concern

Once again, a popular mobile app for generating deepfakes, i.e. images or videos doctored by artificial intelligence (AI), has prompted a major privacy backlash. Since last weekend, Chinese face-swap app Zao has taken China’s iOS store by storm. The app can generate deepfakes of scenes from popular movies and TV shows

Read More
02 Sep 2019

US Unleashes Military to Fight Fake News, Disinformation

The US Defense Advanced Research Projects Agency (DARPA) aims to develop software that can spot fake news stories as well as fake audio, images and video (deepfakes) in order to combat “large-scale, automated disinformation attacks.” Over a period of 48 months the DARPA initiative, called the Semantic Forensics (SemaFor) program,

Read More
30 Aug 2019

The Implementation Of Facial Recognition Can Be Risky. Here’s Why.

While facial recognition holds great promise for the security industry as a more secure authentication mechanism than passwords or PINS, the technology does come with various risks according to Allerin CEO Naveen Joshi. The risks stem from the fact that facial recognition algorithms, like other technologies leveraging artificial intelligence (AI),

Read More
22 Aug 2019

Amazon, Microsoft May Be Putting World at Risk of Killer AI, Says Report

A new report by Dutch NGO Pax suggests that Microsoft, Amazon and various other major tech companies may be putting the world at risk by developing or planning to develop autonomous weapons. The study scrutinized 50 major tech companies to see if they were involved in military artificial intelligence (AI)

Read More