NIST Makes Available the Voluntary Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the AI RMF Playbook
The National Institute of Standards and Technology (NIST) has released the Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI technologies. The AI RMF was produced in close collaboration with the private and public sectors. NIST has also released the AI RMF Playbook, which suggests ways to navigate and use the AI RMF to incorporate trustworthiness considerations into the design, development, deployment, and use of AI systems – including suggested actions, references, and documentation guidance to achieve the outcomes for the four functions in the AI RMF: Govern, Map, Measure, and Manage.
Expert and novice cybercriminals have already started to use OpenAI’s chatbot ChatGPT in a bid to build hacking tools, security analysts have said. In one documented example, the Israeli security company Check Point spotted a thread on a popular underground hacking forum by a hacker who
The age of continuous crisis has placed incredible demands on intelligence analysts. Automation has not kept up in helping analysts meet the ever-expanding demands for quality intelligence. Enterprise search tools are supposed to help intelligence analysts, but almost all are broken and underperform, especially over unstructured text.
A new artificial intelligence bot that has quickly become very popular could be utilized by cybercriminals for nefarious purposes, including learning how to craft attacks and write ransomware. ChatGPT was released last month and has already surpassed one million users on the platform. The chatbot leverages vast volumes of data
Infosecurity Europe’s community of cybersecurity leaders predict that the global political unrest from this year will continue to plague 2023 and cause serious issues for the security industry. However, the leaders believe that stricter regulations and new developments in areas such as artificial intelligence, machine learning, and more will mean
The Second International Counter Ransomware Initiative (CRI) Summit held recently at the White House turned the spotlight on the need to counter cybercriminal and other threat actors’ efforts to use the cryptocurrency ecosystem to garner payments and mask illicit activity. Now more than ever, financial investigators need to use open-source intelligence
Open the Pod Bay Door – Resetting the Clock on Artificial Intelligence
Panel Description: Artificial intelligence is like a great basketball head-fake. We look towards AI while we pass the ball to machine learning. But, that reality is quickly changing. This panel taps AI and machine learning experts to level-set our current capabilities in the field and define the roadmap over the next five years.
We have integrated Center for Security and Emerging Technology (CSET) research into our OODA Loop research and analysis on topics ranging from artificial intelligence, dis- and misinformation, and information disorder (what we characterize as a crucial strategic need for National Cognitive Infrastructure Protection) to technology talent retention and the CHIPS Act.
The recent CSET report “China’s Advanced AI Research: Monitoring China’s Paths to ‘General’ Artificial Intelligence” examines what paths to general AI are available in principle, as a prelude to describing work underway in China to realize that capability. The report authors also “preview a pilot program…as a starting point for China-focused indications and a warning watchboard…that will track China’s progress and provide timely alerts.”
Payments giant Mastercard today is launching Crypto Secure, a new software product designed to help banks and other card issuers identify and block suspicious transactions from crypto exchanges, according to a CNBC report. A similar system is already in place for Mastercard’s fiat transactions, with the technology now expanding to
The MIT AI Policy Forum (AIPF) is a global initiative at the MIT Schwarzman College of Computing, which was launched in 2018. Blackstone Group Chairman Stephen A. Schwarzman donated $350 million of the $1.1 billion in funding committed to the school, the “single largest investment in computing and AI by an American academic institution.” What sets the AIPF apart from other organizations dedicated to AI research and policy is its commitment to global collaboration in moving from AI principles to AI practice. The leadership at the AIPF is committed to making a tactical impact.
Simply put: It is time for a “Decide and Act” phase after the collective “Observe, Orient” analysis phase which has been applied to certain aspects of mission-critical social and ethical issues such as privacy, fairness, bias, transparency, and accountability. To echo the AIPF: “Now, it is time to take the next step.”