Since authoring the seminal Cyberwar is Coming! in 1993, Dr. John Arquilla has been at the forefront of thinking about the digital domain and the conflicts now occurring there on a daily basis. His expertise on the subjects of “netwar” and “swarming” tactics has been revolutionary; he has served as a military consultant and now teaches courses on national security and defense analysis. In Cyberwar is Coming!, Dr. Arquilla understood that the digital world and the information world were inherently tied together, a relationship that would only intertwine and strengthen as the technology became more advanced. Indeed, the multiple and diverse influence operations that transpired during the 2016 U.S. presidential election proved a testament to his thinking, showing how cyber-enabled information campaigns could “disrupt, damage, or modify what a target population ‘knows’ or thinks it knows about itself and the world around it.” These remarks were prescient indeed, considering they were written more than 20 years before the U.S. fell victim to such campaigns by Russian and other foreign interests.
DHS Science and Technology Directorate (S&T) releases Artificial Intelligence (AI) and Machine Learning (ML) Strategic Plan Amidst Flurry of USG-wide AI/ML RFIs
An artificial intelligence security strategy (see “Securing AI – Four Areas to Focus on Right Now”) should be the cornerstone of any AI and machine learning (ML) efforts within your enterprise. We also recently outlined the need for enterprises to further operationalize the logging and analysis of artificial intelligence (AI) related accidents and incidents based on an “AI Accidents” framework from the Georgetown University CSET. That framework is the best analysis within a sophisticated body of work on AI-related issues of morality, ethics, explainable and interpretable AI, bias, privacy, adversarial behaviors, trust, fairness, evaluation, testing and compliance.
As for governmental contributions, what should be encouraging to industry players is the fact that AI/ML strategy is now very actionable at the policy, research and development, and strategic partnership levels across the USG.
In this OODAcast we examine lessons learned as a startup founder and insights into the future of technology with Amr Awadallah. Amr Awadallah is widely known as a founder of Cloudera. Prior to that he worked on extreme-scale data solutions at Yahoo. Most recently he was VP for Developer Relations at Google Cloud. Amr has a BS in EE and an MS in Computer Engineering from Cairo University, and a PhD in EE from Stanford University. His experience in tech and company leadership puts him in the perfect position to bring actionable insights to decision-makers today.
The Center for Security and Emerging Technology (CSET), in a July 2021 policy brief, “AI Accidents: An Emerging Threat – What Could Happen and What to Do,” makes a noteworthy contribution to current efforts by governmental entities, industry, AI think tanks and academia to “name and frame” the critical issues surrounding AI risk probability and impact. For the current enterprise, as we pointed out as early as 2019 in Securing AI – Four Areas to Focus on Right Now, the fact remains that “having a robust AI security strategy is a precursor that positions the enterprise to address these critical AI issues.” In addition, enterprises which have adopted and deployed AI systems also need to commit to the systematic logging and analysis of AI-related accidents and incidents.
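To make "systematic logging and analysis" concrete, the sketch below shows one minimal way an enterprise might structure an internal AI incident log. It is an illustration only: the failure categories, severity scale, and class names (`AIIncident`, `IncidentLog`) are hypothetical choices for this sketch, not terms defined by the CSET brief or any standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical failure categories for this sketch; an enterprise would
# align these with whatever AI-risk taxonomy it has adopted.
CATEGORIES = {"robustness", "specification", "assurance"}


@dataclass
class AIIncident:
    """One logged AI-related accident or near-miss."""
    system: str        # internal name of the AI/ML system involved
    category: str      # one of CATEGORIES
    severity: int      # 1 (near-miss) .. 5 (severe harm) -- illustrative scale
    description: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be in 1..5")


class IncidentLog:
    """In-memory log; a real deployment would persist to durable storage."""

    def __init__(self):
        self._records = []

    def record(self, incident: AIIncident) -> None:
        self._records.append(incident)

    def to_jsonl(self) -> str:
        # One JSON object per line, a common format for downstream analysis.
        return "\n".join(json.dumps(asdict(r)) for r in self._records)

    def count_by_category(self) -> dict:
        counts = {}
        for r in self._records:
            counts[r.category] = counts.get(r.category, 0) + 1
        return counts
```

The point of even a toy structure like this is that trend analysis (e.g., `count_by_category`) only becomes possible once incidents are captured in a consistent, machine-readable form rather than in ad hoc emails or tickets.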
The Black Hat and Def Con cybersecurity events are the most highly anticipated of the year. Each had an in-person component this year, and OODA CEO Matt Devost provides his observations from both.
The recent exposure of NSO, the Israeli company that developed the Pegasus mobile phone spyware, has again brought to the forefront private companies that develop and sell their technology “only” to governments and licensed law enforcement entities for the purposes of spying on and surveilling targets of interest. While ostensibly Pegasus can be used against criminal and terrorist elements, recent revelations show how such technology can be bent to the will of its operators. In this instance, the spyware was sold to authoritarian regimes to target human rights activists, journalists, religious figures, academics, and attorneys, among others, with approximately 50,000 individuals being targeted by the spyware since 2016, according to a data leak. Per one report, Pegasus malware targeted as many as 14 heads of state as well, implying a cyber espionage angle to the malware’s use. An exposé on NSO, the manufacturer of Pegasus, revealed that the company cited “cyberwarfare” as its business model. There seems little doubt as to the intent of Pegasus and how it has been marketed to potential clients.
If you have not discovered The Everyday Astronaut (a website and YouTube channel), it should be on your sensemaking radar. Gone are the days of dependence on traditional media outlets to watch seminal space launches or returns at a fixed time in an exclusively nation-state based space race. Everyday Astronaut is an example of the evolution of “space fan culture,” a case study in how far democratized media has come in its ability to educate and inform on even the most complex of scientific and technical topics, and a window into the democratization of space travel and how commercial space efforts will be covered by the media in 2021 and beyond.
Elon Musk is clearly a fan, recently spending over two hours with Everyday Astronaut host Tim Dodd at the SpaceX Starbase facility in Boca Chica, Texas. An especially interesting takeaway from the tour is Musk sharing the five steps of his engineering process, noting that “what I am trying to do is have everyone implement rigorously the five-step process.”
Sir David Omand is one of the most respected intelligence professionals in the world and author of the book How Spies Think: Ten Lessons in Intelligence. His career in intelligence began shortly after graduating from Cambridge in 1969, when he joined the UK’s GCHQ (Government Communications Headquarters). He would later become the director of GCHQ. He also served as the first UK Security and Intelligence Coordinator, the most senior intelligence, counter-terror and homeland security position in the UK.
In this OODAcast we discuss lessons in leadership from his time in the intelligence service and his views on the current threat environment, including threats to nations, corporations and citizens of the free world. We also examine how his time in intelligence informed his own models for understanding and analyzing complex situations and how this motivated him to write How Spies Think.
A critical component of scenario planning is strategic communication. Interestingly, according to the National Oceanic and Atmospheric Administration (NOAA) National Integrated Drought Information System (NIDIS): “Drought communication is important not only for informing people about current drought conditions, but also for providing education and encouraging people to take adaptation action.” While drought response is traditionally the domain of government agencies such as emergency and disaster management, public safety, and public health, private sector companies should now look at drought conditions as a function of the strategic challenges brought on by climate change.
They’re not cybersecurity experts, but they did stay at a Holiday Inn Express last night. Because we have no common body of knowledge from which to explore and learn from prior art, you can predict, like the seasons, when another cohort of professionals from other disciplines will attempt to tell