The Association of Certified Fraud Examiners (ACFE) projects that the use of artificial intelligence (AI) for fraud detection will triple by 2021. At the moment a mere 13% of firms use AI for detecting fraud, but an additional 25% plan to introduce this technology over the next two years.
A new Palo Alto Networks survey among respondents from Europe, the Middle East and Africa shows that one in four people (25%) believe that law enforcement agencies should be responsible for cybersecurity, while 28% point to the government. In addition, 26% of respondents said they would rather have artificial intelligence (AI)
Researchers with CSIRO’s Data61 have developed a method to employ machine learning in order to “vaccinate” systems against adversarial attacks, which are attempts to tamper with machine learning models by feeding them malicious data. For instance, by distorting images in various ways, threat actors may be able to bypass surveillance
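The "vaccination" idea is broadly the same as adversarial training: generate weakly distorted versions of the training inputs and retrain on them so the model learns to tolerate the kinds of perturbations an attacker would apply. The sketch below is a generic, minimal illustration of that loop using a toy linear classifier and a fast-gradient-sign (FGSM-style) perturbation; it is not CSIRO Data61's actual method, and all names in it are ours.

```python
import numpy as np

# Generic adversarial-training ("vaccination") sketch, NOT the Data61 method.
rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def fgsm(X, y, w, b, eps=0.3):
    """Nudge each input in the direction that increases the loss, bounded by eps."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = np.outer(p - y, w)  # dLoss/dX for logistic loss on a linear model
    return X + eps * np.sign(grad)

# 1) Train a baseline model, 2) craft perturbed copies of the training data,
# 3) "vaccinate" by retraining on clean + perturbed examples together.
w, b = train_logistic(X, y)
X_adv = fgsm(X, y, w, b)
X_aug, y_aug = np.vstack([X, X_adv]), np.concatenate([y, y])
w2, b2 = train_logistic(X_aug, y_aug)

def acc(X, y, w, b):
    return float(np.mean(((X @ w + b) > 0).astype(int) == y))

adv_acc_baseline = acc(fgsm(X, y, w, b), y, w, b)
adv_acc_hardened = acc(fgsm(X, y, w2, b2), y, w2, b2)
```

In practice the hardened model typically degrades less on perturbed inputs than the baseline, at some cost in clean accuracy; the same loop applies to image models, where the distortions described in the article play the role of `fgsm` here.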
Last week, four intelligence experts warned the US House Intelligence Committee in a testimony about the growing risks resulting from the development of increasingly sophisticated deepfakes, which are images or videos doctored by artificial intelligence (AI) that show individuals saying and doing things they never said or did. Committee chairman
Research by the Associated Press has uncovered what seems to have been a state-run espionage campaign centering on a fake LinkedIn profile that managed to connect with various influential people in Washington. Moreover, experts believe that the people behind the campaign used artificial intelligence (AI) to generate the profile picture
OODA Network Members can watch the on demand version of our webinar on mitigating risks due to AI. During this webinar, the OODA Loop team discussed salient points of key research for our members, including an overview of key problem areas in AI, followed by a solutions framework. Challenge areas
Security researchers at the Ben-Gurion University of the Negev (BGU) have developed a new attack technique that uses artificial intelligence (AI) to let compromised USB keyboards generate malicious keystrokes that match legitimate user behavior. Malboard, as the researchers have dubbed the attack, could enable threat actors to avoid detection by
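The evasion concept behind an attack like Malboard can be stated simply: keystroke-dynamics defenses profile a user's typing rhythm (e.g. inter-key delays), so injected keystrokes whose timing is sampled from the user's own observed distribution are much harder to flag. The following is a hypothetical, minimal sketch of that timing-matching idea only; it is not the BGU researchers' code, and the function names are ours.

```python
import random
import statistics

# Hypothetical sketch of timing-matched keystroke injection (not Malboard itself):
# fit a simple profile to the user's observed inter-key delays, then sample
# injection delays from that profile so they blend into the user's rhythm.

def fit_timing_profile(observed_delays_ms):
    """Fit a simple Gaussian profile (mean, stdev) to observed inter-key delays."""
    return (statistics.mean(observed_delays_ms),
            statistics.stdev(observed_delays_ms))

def sample_injection_delays(profile, n, rng=random):
    """Sample n plausible delays from the learned profile, clamped to >= 10 ms."""
    mu, sigma = profile
    return [max(10.0, rng.gauss(mu, sigma)) for _ in range(n)]

# Example: inter-key delays observed from the legitimate user, in milliseconds.
user_delays = [112, 98, 130, 105, 121, 140, 99, 118]
profile = fit_timing_profile(user_delays)
injected = sample_injection_delays(profile, 5)
```

Defenses that rely on richer behavioral features (key-hold times, digraph patterns, typing errors) raise the bar correspondingly, which is why the article frames this as an arms race between AI-generated input and behavioral detection.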
A new study by Senseon indicates that the vast majority of small and medium businesses (SMBs) think AI solutions could boost their cybersecurity (81%) and improve their workflow (76%). However, only 4% of small and mid-sized firms are actually using AI. The main reasons that SMBs are hesitant to implement
A new AI algorithm developed by a team of researchers is capable of generating fake and misleading news stories that seem more plausible than those created by humans, a new academic paper [PDF] shows. Moreover, the AI only needs to be fed a headline in order to write a full article.
5G telecommunication networks are bound to revolutionize the world, but the revolutionary aspects of this technology also result in major security challenges for companies and governments alike, security experts warn. While current telecom networks, such as 4G, are hardware-centric constructions, 5G networks will be built on a more software-centric architecture.