OODA's research and reporting on AI is unlike what you will find elsewhere. Our research team is informed by real-world implementation issues and the need to deliver results.

What is the value of an informed decision? At OODA Loop, we seek to surface decision intelligence that provides meaningful perspective for leaders and analysts looking to make the most informed decisions possible. Subscribing will give you access to research reports designed to improve your competitiveness and help you operate in a VUCA (volatile, uncertain, complex, and ambiguous) world, identify and respond to Gray Rhino risks, and find opportunities in advancements across emerging technology domains. Subscribers also receive access to our special, continuously updated OODA C-Suite Report, which is designed to provide busy decision-makers with a quick overview of the most critical topics for informed strategic planning.

OODA members can find existing research and reporting in a number of ways, including by visiting our OODA Member Resources Page or our Sensemaking Series, or by using the search function on our site.

As an OODA Network Member, you help support our continued research into topics designed to optimize decision-making and inform your strategy. In return, you receive full access to all specialized reports on the site, including insights into Cybersecurity, Artificial Intelligence, COVID-19, Space, Quantum Computing, Global Supply Chains, Global Risk and Geopolitics, Advanced Technology, Corporate Governance, Due Diligence, and Federal Markets.

This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process.


NATO and US DoD AI Strategies Align with over 80 International Declarations on AI Ethics

NATO's first-ever strategy for artificial intelligence, released in October, is primarily concerned with the impact AI will have on the NATO core commitments of collective defense, crisis management, and cooperative security. Worth a deeper dive is a framework within the overall NATO AI Strategy that mirrors the DoD Joint Artificial Intelligence Center's (JAIC) efforts to establish norms around AI: "NATO establishes standards of responsible use of AI technologies, in accordance with international law and NATO's values." At the center of the NATO AI strategy are the following six principles: Lawfulness; Responsibility and Accountability; Explainability and Traceability; Reliability; Governability; and Bias Mitigation.


“AI Accidents” framework from the Georgetown University CSET

The Center for Security and Emerging Technology (CSET), in a July 2021 policy brief, "AI Accidents: An Emerging Threat – What Could Happen and What to Do," makes a noteworthy contribution to current efforts by governmental entities, industry, AI think tanks, and academia to "name and frame" the critical issues surrounding AI risk probability and impact. For the current enterprise, as we pointed out as early as 2019 in Securing AI – Four Areas to Focus on Right Now, the fact remains that "having a robust AI security strategy is a precursor that positions the enterprise to address these critical AI issues." In addition, enterprises that have adopted and deployed AI systems need to commit to the systematic logging and analysis of AI-related accidents and incidents.


DHS Science and Technology Directorate (S&T) Releases Artificial Intelligence (AI) and Machine Learning (ML) Strategic Plan Amidst Flurry of USG-Wide AI/ML RFIs

An artificial intelligence security strategy (see "Securing AI – Four Areas to Focus on Right Now") should be the cornerstone of any AI and machine learning (ML) efforts within your enterprise. We also recently outlined the need for enterprises to further operationalize the logging and analysis of AI-related accidents and incidents based on the "AI Accidents" framework from the Georgetown University CSET. The best analysis in this space is a sophisticated body of work on AI-related issues of morality, ethics, fairness, explainable and interpretable AI, bias, privacy, adversarial behaviors, trust, evaluation, testing, and compliance.


AI-Based Ambient Intelligence Innovation in Healthcare and the Future of Public Safety

Disaster conditions will clearly become more severe and more frequent due to climate change. The domestic terrorism threat in the United States is becoming a constant, while the impact and frequency of growing domestic U.S. political instability and public safety incidents remain to be determined.

We will need systems that monitor these temporal, ephemeral ecosystems and provide insights and recommendations for real-time decision support and situational awareness. What can AI-Based Ambient Intelligence Innovation in Healthcare teach us?


The Future of War, Information, AI Systems and Intelligence Analysis

The U.S. is in a struggle to maintain its dominance in air, land, sea, space, and cyberspace over countries whose capabilities are increasingly on par with its own across all domains. In addition, information (in all its forms) is the center of gravity of a broad set of challenges faced by the United States. Information, then, is the clear strategic vector of value creation for the emergence of applied technologies that enable operational innovation. For the U.S., the desired outcome is continued dominance for another American Century. For the Chinese, these military capabilities would usher in the dawn of a new technological superiority and, as a result, geopolitical and military dominance on the world stage.


Percipient.ai CEO Balan Ayyar on the real-world application of AI to critical missions

Balan Ayyar is the Founder and CEO of Percipient.ai, a Silicon Valley-based artificial intelligence firm focused on delivering products and solutions for the most pressing intelligence and national security challenges.

Percipient.ai is headquartered in Santa Clara, CA, with offices in Reston, VA.

In this OODAcon discussion, Balan describes his approach to applying AI to some of the nation's most significant issues.

OODA Loop Analysis

AI Security: Four Things to Focus on Right Now – This is the only security framework we have seen that helps prevent AI issues before they develop

A Decision-Maker's Guide to Artificial Intelligence – This plain-English overview will give you the insights you need to drive corporate decisions

When Artificial Intelligence Goes Wrong – By studying issues, we can help mitigate them

Artificial Intelligence for Business Advantage – The reason we use AI in business is to accomplish goals. Here are best practices for doing just that

The Future of AI Policy is Largely Unwritten – Congressman Will Hurd provides insight on the emerging technologies of AI and Machine Learning.

AI Will Test American Values In The Battlefield – How will military leaders deal with AI that may treat troops as expendable assets to win the "game"?

The AI Capabilities DoD Says They Need The Most – Savvy businesses will pay attention to what this major customer wants.

Insights from AI World on the State of AI in America – Based on our interactions during this yearly summit
