12 Jul 2019

69% of organizations believe they can’t respond to critical threats without AI

Nearly 7 in 10 (69%) organizations need artificial intelligence (AI) solutions to respond to cyberattacks, while 61% can’t detect critical threats without AI, a new Capgemini Research Institute report indicates. A majority of executives (56%) state that their cybersecurity analysts cannot effectively detect and prevent

Read More
12 Jul 2019

Google defends letting human workers listen to Assistant voice conversations

In response to a report by Belgian public broadcaster VRT NWS showing that Google lets human workers listen to audio captured by Google Assistant software, the tech giant acknowledges that its language experts review 0.2 percent of all audio snippets recorded by its virtual assistant. Google justifies this practice by

Read More
10 Jul 2019

OODA Network Interview: Dan Wachtler

This post is based on an interview with Dan Wachtler. It is part of our series of interviews of OODA Network members. Our objective with these interviews is to provide actionable information of interest to the community, including insights that can help with your own career progression. We also really like highlighting some of the great people that make our continued research and reporting possible. 

Read More
08 Jul 2019

AI for Fraud Detection to Triple by 2021

The Association of Certified Fraud Examiners (ACFE) projects that the use of artificial intelligence (AI) for fraud detection will triple by 2021. At the moment a mere 13% of firms use AI for detecting fraud, but an additional 25% plan to introduce this technology over the next two years. The

Read More
04 Jul 2019

Cybersecurity Should Be Handled by Law Enforcement and Government, Report Says

A new Palo Alto Networks survey among respondents from Europe, the Middle East and Africa shows that one in four people (25%) believe that law enforcement agencies should be responsible for cybersecurity, while 28% point to the government. In addition, 26% of respondents said they would rather have artificial intelligence (AI)

Read More
24 Jun 2019

Researchers develop a technique to vaccinate algorithms against adversarial attacks

Researchers with CSIRO’s Data61 have developed a method to employ machine learning in order to “vaccinate” systems against adversarial attacks, which are attempts to tamper with machine learning models by feeding them malicious data. For instance, by distorting images in various ways, threat actors may be able to bypass surveillance

Read More
17 Jun 2019

US Lawmakers Hear Testimony on Concerns of Deepfakes

Last week, four intelligence experts warned the US House Intelligence Committee in a testimony about the growing risks resulting from the development of increasingly sophisticated deepfakes, which are images or videos doctored by artificial intelligence (AI) that show individuals saying and doing things they never said or did. Committee chairman

Read More
14 Jun 2019

Experts: Spy used AI-generated face to connect with targets

Research by the Associated Press has uncovered what seems to have been a state-run espionage campaign centering on a fake LinkedIn profile that managed to connect with various influential people in Washington. Moreover, experts believe that the people behind the campaign used artificial intelligence (AI) to generate the profile picture

Read More
13 Jun 2019

Preventing AI From Going Wrong – An OODA Network Webinar

OODA Network Members can watch the on-demand version of our webinar on mitigating risks due to AI. During this webinar, the OODA Loop team discussed salient points of key research for our members, including an overview of key problem areas in AI, followed by a solutions framework. Challenge areas

Read More
10 Jun 2019

New user keystroke impersonation attack uses AI to evade detection

Security researchers at the Ben-Gurion University of the Negev (BGU) have developed a new attack technique that uses artificial intelligence (AI) to let compromised USB keyboards generate malicious keystrokes that match legitimate user behavior. Malboard, as the researchers have dubbed the attack, could enable threat actors to avoid detection by

Read More