
The Executive’s Guide To Artificial Intelligence: What you need to know about what really works and what comes next


The megatrend of Artificial Intelligence is transforming the algorithms of business in exciting ways. This reference, aimed at the business decision-maker, will help you make the most of AI in your organization. It provides clear articulations of fundamental concepts, succinct examples of highly impactful use cases, and tips you can put in place to ensure your AI projects stay on track to deliver value. We keep this online reference updated so you will always have access to the best of our thoughts.

This reference is part of a series. Follow it with our special report on When AI Goes Wrong and our report on Artificial Intelligence for Business Advantage.


What Is Artificial Intelligence?

The father of AI, John McCarthy, defined AI this way:

“Artificial Intelligence is the science and engineering of making intelligent machines”

This is very good from a science and technology perspective. But from a business and enterprise technology perspective we need to turn that around a bit and focus on the end result. OODA defines AI as:

“Artificial Intelligence is the application of thinking machines to real world problems.”

This definition is important since it emphasizes application to real world business and mission needs.

How do you apply thinking machines to real world problems? This is as much art as science, which means leaders must track both technical and non-technical elements of AI.

Technical components of AI solutions include:

  • Analytic Algorithms (including Machine Learning, Deep Learning)
  • Natural Language Processing
  • Sensors
  • Robotics for non-computational actions
  • Data Management
  • Hardware architectures
  • Technical security measures

Non-technical components of AI solutions include:

  • New business strategies
  • Cybersecurity and risk policies
  • Ethics
  • Legal and regulatory regimes
  • Training and Testing
  • Operation and maintenance

We will dive deeper into these technical and non-technical domains in this special research report, focusing on the “so-what” and “what’s next” of the critical topic of Artificial Intelligence.

The History of Artificial Intelligence

AI decisions in organizations are about today and tomorrow, not the past.  This sentiment almost led us to leave out this entire section. You can find plenty of other references to the history of AI. In the end we decided to go light on the history, but to capture the lessons from AI’s history that can inform your decisions today.

The writings of the ancients in Greece and China both point to a long-running human dream of creating intelligent beings. Perhaps the earliest stories of what we would now recognize as AI were the Greek myths of Talos, a thinking robot built to defend Crete. The first science fiction book, Frankenstein, turned dreams of creation into a genre of modern entertainment, and human-created artificial intelligence is now a plot component of most science fiction. There is good and bad in this element of the history of AI. It is great that human imagination and creativity are being applied here. But the bad news is that this leads some to believe that wild visions of super-smart, human-like beings living in our computers are a near-term reality.

When it comes to scientific research into computer-based AI, Alan Turing began writing about concepts of reasoning machines in 1936 and by 1950 was writing and lecturing on “Computing Machinery and Intelligence.” His ideas continue to shape research. The term Artificial Intelligence was first used by researchers led by Marvin Minsky and John McCarthy in 1956. Since then there have been several periods of exciting developments followed by flame-outs and disillusioned users. These periods where progress slowed to nothing are known in the community as “AI Winters” because there were no conceptual or scientific advancements and no real AI capabilities being utilized.

This gets to the important lessons from the history of AI.

  1. Human imagination is a driver of creativity. Embrace it! We are nowhere near the fantastic AI of science fiction, but dreamers in this field will help us imagine solutions that will greatly enhance business and society. Just remember that wild visions of what AI can do must be tempered with reality.
  2. Previous failures of AI came about for many reasons, including few working algorithms, not enough computing power, and not enough data. Today computing power is sufficient, the algorithms are improving every day, and data is being generated and stored for analysis at an exponential rate.
  3. Since business has proven that AI can deliver results, we should not expect another AI Winter. Expect an AI Cambrian Explosion of capabilities.

AI Use Cases

There are more AI use cases than we could ever capture in a single document. We can, however, capture broad categories of AI solutions that are working today. We selected the use cases below because they are already impactfully deployed and each can help you conceptualize what is realistic for your own AI based solutions:

  • Health Care: AI based tools are helping medical professionals analyze images (including X-rays, CT scans, PET scans). Solutions have been fielded in hospitals that help physicians diagnose and monitor patient care. There have also been some noticeable failures in health care related AI projects (those we know about were massive “moon shot” projects that seemed to aim to do far more than was possible at the time).
  • Financial Analysis: AI capabilities are being used to help firms understand their cash flow in ways that can translate to optimized spending and reduction of waste. Technical analysis of publicly traded firms is also aided by modern AI. Many financial services firms are leveraging expert systems and newer forms of AI to assess credit. Algorithmic trading programs continue to advance AI concepts and technologies like sentiment analysis to improve the performance of their host investment funds.
  • News and Media: With the overwhelming amount of information in the modern age, news sources are using AI enabled search and discovery capabilities to find and vet the right information. In one extreme case, a news station in China has produced an AI enabled reporter. Several firms now offer AI enabled news gisting services that extract key facts from multiple sources to create an original factual story.
  • Legal services: Law firms are able to use new AI enabled tools to rapidly search large digital archives for court cases. This field of “e-discovery” has caused changes to law firm staffing profiles.
  • Direction Finding: The proliferation of mobile devices with GPS and maps provided a good foundation for innovation around AI, which can now help businesses and individuals do optimal route planning, including planning around traffic.
  • Shopping and Retail: The powerhouse Amazon uses AI to improve search results but also to display items you may be interested in. Traditional retail establishments are increasingly using AI to better control and pre-position inventory, improve purchasing, and optimize store locations.
  • Know Your Customer: Any business which has customers (which should be all of them) can now leverage large quantities of internal data and appropriate external data to form comprehensive views on current and future customer needs. Developing detailed profiles of existing customers also allows machine learning algorithms to “find” new customers that match your increasingly granular and dynamic customer profile.
  • Insurance: The application of AI to insurance helps firms better assess risk, evaluate rates, and better prepare potential clients for disaster response.
  • Marketing: AI has been proven to differentiate firms in marketing, enabling more optimized targeting of marketing messages via the right channel to get to the right buyer.
  • Law Enforcement: Predictive policing sounds like science fiction. But it is really the smart application of AI to enable proactive police decisions, which may translate into prioritized patrol areas or other pre-crime actions.
  • Military: Smart use of AI holds the promise of reducing civilian casualties in war and improving the ability of democracies to defend themselves. The more complete view is that AI also enables adversaries to improve their military capabilities. The AI arms race is on.
  • Fraud Detection: Fraud comes in many forms, and since criminals are innovative and creative there will always be new methods being developed. Any firm or individual can be a target. AI is enabling fraud detection methods that look for anomalous actions and alert prior to damage being done.
  • Cybersecurity: Cyber attacks are as relentless as fraud, and fraud often manifests through cyber attacks that involve automatically propagating malicious code that can act fast. AI enabled security software seeks to learn what is normal and to alert on, or even stop, activity that is not.
  • Robotics: The ability to smartly move in 3D space takes a variety of AI techniques. Even the simplest robot or drone leverages advanced algorithms. The more complex ones require a wide range of methods to safely accomplish their tasks.
  • Information Analysis: This is such a broad category. We saved it for last because it is such an open use case and can apply to any industry anywhere. The most widely known and examined information analysis use cases are those dealing with search, like the algorithms behind Google’s ability to index and retrieve data on all websites. But AI capabilities in information analysis are also upgrading what was once traditional “business intelligence” solutions.

The Limits of AI

The use cases above are really just the tip of the iceberg. The capabilities of AI are being applied to just about every industry and every sector of government.

But there are limits. Many firms have kicked off large scale projects leveraging AI that have failed miserably. In most cases these failures were due to moonshot-type approaches designed to do things that had never been done before. In other cases, organizations tried to get AI to do things best done by humans.

In general, the best uses of AI today will involve a good mix of human creativity and imagination and simple, specific implementations of AI. There have been examples of AI being used to create art and even replicate music, but these are guided by creative humans and are really just novelties at this point. AI comes nowhere near the ability of humans to create and imagine. When original ideas are needed, the human is key.

Humans are also key to fields where interacting with others with compassion is needed. We will return to this point later in this reference.

The Key Technologies of Artificial Intelligence

Every use case described above requires software working over data. By understanding the key types of approaches (the software and how it is programmed) you can better understand the ways these use cases are accomplished and consider new ones more relevant for your organization.

Here are the key methods:

  • Machine Learning: The automated training and fitting of models to data. The most widely used AI related technology, either as a stand alone solution or the front end of a more complex solution. This is a broad technique with many methods and is at the core of most AI. Methods commonly taught and applied in ML solutions all have different strengths and weaknesses and part of the art of ML is knowing which applies to the need at hand.
  • Neural Networks: Considered a more complex form of Machine Learning, this approach uses data flow mappings similar to artificial “neurons” to weigh inputs and relate them to outputs. This advanced parallel processing approach to AI is the technique that most closely mimics the human brain to date.
  • Deep Learning: Highly evolved neural networks with many layers of variables and features. Important to most modern image and voice recognition and for extracting meaning from text. Deep learning models use a technique called “backpropagation” to optimize the models that predict or classify outputs, which adds to the complexity of the end model. The end model may have so many thousands of variables that no human can really understand how the model functions or how a conclusion was reached.
  • Natural Language Processing: Analyzes and understands human speech and text. Used in modern applications of speech recognition including chatbots and intelligent agents. NLP also requires training data; in this case the output is knowledge about how language relates, often referred to as a “knowledge graph” for a particular domain. This topic is critically important to modern business and is the subject of our report on What Leaders Need to Know About the State of Natural Language Processing.
  • Rule-based expert systems: Sets of logical rules derived from the way people actually work. Used in many processes where sets can be clearly defined. This was the dominant form of AI in the past and is still around today, but is really just complex programming. Imagine a large number of “if-then” statements in a program, but in this case the rules were built by domain experts.
  • Robots and Robotics: Automation of physical tasks. Primarily used in factory and warehouse tasks but growing use in healthcare, small businesses, and homes. Training data for robots is also critically important, but in this case the training data may include location for movement or a wide variety of expected changes in the environment.
  • Robotic Process Automation: Automation of structured digital tasks in the enterprise or factory settings. This is a highly evolved form of scripting actions. It is a combination of software and workflows built to help automate business processes. RPA is at its best when it provides users with the benefits of other AI capabilities like Machine Learning.
  • Computer Vision: This has been a field of study since the formation of the discipline of Artificial Intelligence. Big breakthroughs came about when large amounts of data and processing power at Google enabled researchers to field algorithms that can work at scale to identify what is in an image. Computer vision algorithms and libraries are now widely deployed. They are also easily deceived, unfortunately.
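To make the rule-based expert system concept above concrete, here is a minimal sketch in Python. The loan-screening rules and thresholds are hypothetical, invented purely for illustration of how a large number of “if-then” statements built by domain experts can be organized:

```python
# A minimal sketch of a rule-based expert system. Each rule is a
# (condition, conclusion) pair, as a domain expert might specify it.
# All field names and thresholds here are hypothetical.

RULES = [
    (lambda a: a["credit_score"] < 580, "decline"),
    (lambda a: a["debt_to_income"] > 0.45, "refer to analyst"),
    (lambda a: a["credit_score"] >= 720 and a["debt_to_income"] <= 0.30, "approve"),
]

def evaluate(applicant: dict) -> str:
    """Fire the first rule whose condition matches; otherwise refer to a human."""
    for condition, conclusion in RULES:
        if condition(applicant):
            return conclusion
    return "refer to analyst"

print(evaluate({"credit_score": 750, "debt_to_income": 0.25}))  # approve
```

The key design point: the logic lives in an editable rule set rather than being scattered through the code, which is what made such systems maintainable by domain experts rather than only by programmers.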

Related Terms and Concepts

  • Supervised Learning: The most common type of training for AI models. Data is labeled by humans so the algorithm can be taught based on what was established by humans. This is very similar to older techniques of statistics like regression analysis. Once a model has been developed using supervised learning, it can be used with new data to provide predictions. This is called “scoring”. Training models on labeled data generally takes large quantities of data that have known outcomes, and in many use cases the outcome that is being sought is actually a rare occurrence (this is called a “class imbalance”).
  • Unsupervised Learning: This is the development of AI models in ways that detect patterns in data that are not labeled and results are not known.
  • Training Data: The data used for the development of the model. This is often validated using another subset of data for which the outcome to be predicted is known.
  • AI Engineering: Mission-critical systems, such as those requiring extreme accuracy, extreme safety, or rapid military decisions, demand special care to engineer end-to-end systems designed to always work. This type of system requires engineering, and this new discipline is known as AI Engineering.
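The supervised learning and “scoring” ideas above can be sketched with a deliberately tiny from-scratch example: a one-nearest-neighbor classifier trained on human-labeled data, then used to score new, unlabeled data. The fruit measurements below are invented for illustration; real solutions would use a library and far more data:

```python
# A minimal sketch of supervised learning and "scoring": a
# one-nearest-neighbor classifier built from scratch on invented data.
import math

# Labeled training data: (weight_g, diameter_cm) -> label, assigned by humans.
training_data = [
    ((150.0, 7.0), "apple"),
    ((160.0, 7.5), "apple"),
    ((120.0, 6.0), "orange"),
    ((110.0, 5.8), "orange"),
]

def score(sample):
    """Predict a label for a new, unlabeled sample by finding the
    closest labeled training example (Euclidean distance)."""
    _, label = min(
        training_data,
        key=lambda pair: math.dist(pair[0], sample),
    )
    return label

print(score((155.0, 7.2)))  # apple
```

Even this toy shows why large quantities of labeled data matter: with only four examples, any new sample far from both clusters would still be forced into one of the two labels, a failure mode that class imbalance makes worse.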

Data and Artificial Intelligence

Every AI approach has something in common. All require data. In most cases, the more data the better. The one caveat here is that the quality and the integrity of the data are also incredibly important as low-quality or untrusted data could result in unexpected or even negative outcomes.

As previously noted, a key reason why AI is here to stay is that the world is producing so much data. Without AI there would really be no way to keep up and derive value from the massive quantities of data being produced.

Regardless of your business or industry, your approach to AI requires a good approach to how you manage your data. Some best practices include:

  • Understanding your internal data holdings
  • Leveraging best practices to protect your data from compromise or manipulation
  • Providing governance and policy for management of your data
  • Ensuring good backup strategies for your data
  • Preparing data for analysis through use of capabilities like an “Enterprise Data Hub” or data store where new capabilities can be run over all data holdings. For large organizations this can be a daunting task, but at a minimum, data should be cataloged and understood.
  • Smart acquisition of other data relevant to your use cases and needs.
  • Methods for verifying and validating data
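The last practice, verifying and validating data, can be sketched in a few lines. This is a minimal illustration of checking records against a simple schema of required fields and plausible ranges before they feed an AI pipeline; the field names and limits are hypothetical:

```python
# A minimal sketch of data validation before analysis: check each record
# for required fields and plausible value ranges. Hypothetical schema.
def validate(record: dict) -> list:
    """Return a list of problems found in one record (empty if clean)."""
    problems = []
    for field in ("customer_id", "purchase_amount"):
        if record.get(field) is None:
            problems.append(f"missing {field}")
    amount = record.get("purchase_amount")
    if isinstance(amount, (int, float)) and not (0 <= amount <= 1_000_000):
        problems.append("purchase_amount out of range")
    return problems

print(validate({"customer_id": "C-17", "purchase_amount": -5}))
# ['purchase_amount out of range']
```

In practice such checks run at ingest into the data hub, so that low-quality or manipulated records are quarantined before any model trains on them.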

The Hardware of AI

The physical computer used by AI solutions should be optimized for what you are trying to do. Some AI solutions can run on the chip in your mobile devices. These can be very helpful. Others require the high end power of a modern desktop while some implementations need tens of thousands of servers.

The right solution is going to depend on the algorithms and data needed to achieve an expected result.

Solutions that require extensive movement of data in and out of memory and back and forth to the processor will frequently leverage a specialized type of processor called a GPU (Graphics Processing Unit). These are the components that make images and videos render quickly and realistically in games. They work well for images and video because they can move large quantities of data directly from memory. This capability has been leveraged in most new breakthrough AI capabilities, including most deep learning and image recognition solutions.

Key takeaway here: Ensure your AI solutions are leveraging the right hardware architecture.

Conclusions and Recommendations:

The essence of what you need to know about AI is that success requires leadership, and leaders should maintain a fluency in the key concepts articulated here.

We capture more recommendations in our two follow-on reports, When AI Goes Wrong and Artificial Intelligence for Business Advantage.

Related Reading:

Explore OODA Research and Analysis

Use OODA Loop to improve your decision making in any competitive endeavor. Explore OODA Loop

Decision Intelligence

The greatest determinant of your success will be the quality of your decisions. We examine frameworks for understanding and reducing risk while enabling opportunities. Topics include Black Swans, Gray Rhinos, Foresight, Strategy, Stratigames, Business Intelligence and Intelligent Enterprises. Leadership in the modern age is also a key topic in this domain. Explore Decision Intelligence

Disruptive/Exponential Technology

We track the rapidly changing world of technology with a focus on what leaders need to know to improve decision-making. The future of tech is being created now and we provide insights that enable optimized action based on the future of tech. We provide deep insights into Artificial Intelligence, Machine Learning, Cloud Computing, Quantum Computing, Security Technology, Space Technology. Explore Disruptive/Exponential Tech

Security and Resiliency

Security and resiliency topics include geopolitical and cyber risk, cyber conflict, cyber diplomacy, cybersecurity, nation state conflict, non-nation state conflict, global health, international crime, supply chain and terrorism. Explore Security and Resiliency


The OODA community includes a broad group of decision-makers, analysts, entrepreneurs, government leaders and tech creators. Interact with and learn from your peers via online monthly meetings, OODA Salons, the OODAcast, in-person conferences and an online forum. For the most sensitive discussions interact with executive leaders via a closed Wickr channel. The community also has access to a member only video library. Explore The OODA Community

Bob Gourley

Bob Gourley is the co-founder and Chief Technology Officer (CTO) of OODA LLC, a technology research and advisory firm with a focus on artificial intelligence and cybersecurity. Bob is the co-host of the popular podcast The OODAcast. He has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency. Find Bob on Defcon.Social