We have integrated Center for Security and Emerging Technology (CSET) research into our OODA Loop research and analysis on topics ranging from artificial intelligence to disinformation, misinformation, and information disorder (what we characterize as a crucial strategic need for National Cognitive Infrastructure Protection) to technology talent retention and the CHIPS Act, including the following OODA Loop research and analysis:
- “AI Accidents” framework from the Georgetown University CSET: CSET’s July 2021 policy brief, “AI Accidents: An Emerging Threat – What Could Happen and What to Do,” makes a noteworthy contribution to current efforts by governmental entities, industry, AI think tanks, and academia to “name and frame” the critical issues surrounding AI risk probability and impact. For the enterprise, as we pointed out as early as 2019 in Securing AI – Four Areas to Focus on Right Now, the fact remains that “having a robust AI security strategy is a precursor that positions the enterprise to address these critical AI issues.” In addition, enterprises that have adopted and deployed AI systems need to commit to the systematic logging and analysis of AI-related accidents and incidents.
- CSET Introduces a “Disinformation Kill Chain”: The CSET offers this sobering reality: “Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information.” AI and disinformation are the timely subjects of a new series of policy briefs from the CSET. Part I of the series, AI and the Future of Disinformation Campaigns, Part 1: The RICHDATA Framework, was just released. Disinformation is not new, of course, but the scale and severity seem to have reached a zenith, broadsiding contemporary politics, public health policy, and many other domains. You name it, disinformation is in the mix scrambling truth and reality.
- Commissioners Krebs, Hurd et al. Deliver Commission on Information Disorder Final Report: CSET’s “AI and the Future of Disinformation Campaigns” examines how artificial intelligence and machine learning can influence disinformation campaigns.
- CSET Releases Part II of Series: AI and the Future of Disinformation Campaigns: Part II of the CSET series “examines how AI/ML technologies may shape future disinformation campaigns and offers recommendations for how to mitigate them.” We offered an analysis of Part I of the series (CSET Introduces a “Disinformation Kill Chain”) earlier this month.
- Opportunities for Advantage: Maintaining a Strong “Stay Rate” in the U.S. for International STEM Ph.D. Graduates: Amidst coverage of exponential technologies and cognitive infrastructure, it is easy to take a purely technology-based perspective and neglect the human factor – the role of trained talent and future innovators in building the technology and platforms that solve the most pressing problems and address these ongoing threats. The human factor is also a national security issue, as access to and the training of a future generation of STEM talent is a centerpiece of our analysis of the Russian and Chinese use of human targeting to achieve security advantage in key emerging technologies by 2030 and of Taiwan’s Five-year Quantum Computing and Talent Initiative. A recent CSET Issue Brief reminds us of the national security issues surrounding the creation of innovative courses of study and the retention of STEM talent in the U.S.: “One of the United States’ greatest advantages in attracting STEM talent is the strength of its higher education system. U.S. universities remain a top destination for students around the world, particularly at the graduate level. International students accounted for more than 40 percent of the roughly 500,000 doctoral degrees awarded by U.S. universities between 2000 and 2019. Those who stay in the country after receiving their degrees strengthen the domestic STEM workforce and make valuable contributions to the economy and society.”
- What Next? Dr. Melissa Flagg and Dr. Jennifer Buss on the Chips and Science Act of 2022 (Part 1 of 2): CSET’s Senior Advisor Dr. Melissa Flagg and Dr. Jennifer Buss (CEO of the Potomac Institute for Policy Studies) discussed the policy, procurement, and contract management implications of the CHIPS and Science Act after its passage into law.
- What Next? Dr. Melissa Flagg and Dr. Jennifer Buss on the Chips and Science Act of 2022 (Part 2 of 2): In Part II of this OODA Loop interview, Senior Advisor Melissa Flagg and Dr. Buss discussed the operational capabilities required to provide true foundational leadership in the semiconductor industry of the future, the talent pipeline challenge, and scenario planning after the passage of the CHIPS and Science Act.
A 2022 CSET Report – “China’s Advanced AI Research: Monitoring China’s Paths to ‘General’ Artificial Intelligence”
According to CSET researchers, “China is following a national strategy to lead the world in artificial intelligence by 2030, including by pursuing ‘general AI’ that can act autonomously in novel circumstances. Open-source research identifies 30 Chinese institutions engaged in one or more of this project’s aspects, including machine learning, brain-inspired AI, and brain-computer interfaces” and: (1)
…seeks to determine on the basis of publicly available information (“open sources”) who in China is taking what steps toward general artificial intelligence, as shown by overt expressions and other common measures. While typically conceived as “artificial general intelligence” or AGI, this paper rejects that ambiguous term, along with its usual association with human-level machine intelligence, in favor of an approach that recognizes diverse pathways to broadly capable AI that functions autonomously in novel circumstances.
The recent CSET report “China’s Advanced AI Research: Monitoring China’s Paths to ‘General’ Artificial Intelligence” “examines what paths to general AI are available in principle, as a prelude to describing work underway in China to realize that capability. Three broad areas of Chinese research are identified as potentially germane: machine learning, brain-inspired AI research, and brain-computer interfaces (BCI). Data on the persons, institutions, and research making up this ecosystem is given as a foundation for downstream studies…” (2)
Building a China AI “Watchboard”
The authors of the report also “preview a pilot program…as a starting point for a China-focused indications and warning watchboard…that will track China’s progress and provide timely alerts”:
The paper recounts the methodology used to build a database and prototype watch board that enable analysts to capture and potentially forecast China AI-related events. Data supporting this pilot is conditioned to accept accretions from follow-on research, done locally or with outside participants, on Chinese artificial intelligence, AI’s political uses, and other emerging Chinese technologies. The project aims to become or inspire a general foreign technology monitoring platform.
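The database-plus-watchboard pilot described above can be pictured with a minimal sketch. The schema, category names, and alert threshold below are our own illustrative assumptions, not the report’s actual database design:

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical event record: the report does not publish its actual schema.
@dataclass
class AIEvent:
    date: str          # ISO date the event was observed in open sources
    institution: str   # e.g., a lab or university named in the research
    category: str      # "machine_learning", "brain_inspired_ai", or "bci"
    summary: str

@dataclass
class Watchboard:
    events: list = field(default_factory=list)
    # Illustrative threshold: alert when a category accumulates this many events.
    threshold: int = 3

    def ingest(self, event: AIEvent) -> None:
        """Capture an open-source event into the database."""
        self.events.append(event)

    def alerts(self) -> list:
        """Return categories whose event counts have crossed the threshold."""
        counts = Counter(e.category for e in self.events)
        return [cat for cat, n in counts.items() if n >= self.threshold]
```

An analyst-facing layer would then surface `alerts()` as the timely warnings the report describes; the value of the design is that the event store “accepts accretions” from follow-on research without changing the alerting logic.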
To structure this watchboard effort, the CSET researchers realized “a working definition of ‘intelligence’ [was] needed.” Three definitions taken from the literature follow (with comments from this report’s authors in parentheses):
- “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” (Narrow AI is an oxymoron.)
- “Intelligence is the capacity of a system to adapt to its environment while operating with insufficient knowledge and resources.” (Intelligence is generative and transferable.)
- “This definition sees intelligence as efficient cross-domain optimization.” (A system that depends continuously on big data and over-specification is not intelligent.)
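The first definition in the list is Legg and Hutter’s, which they formalize in “Universal Intelligence: A Definition of Machine Intelligence” as an agent’s expected performance weighted across all computable environments:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$

where $\pi$ is the agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ the agent’s expected cumulative reward in $\mu$. Because simpler environments (low $K$) carry more weight but the sum ranges over all of them, a high score requires competence across many environments rather than mastery of one – which is the sense in which, as the report’s authors gloss it, “narrow AI is an oxymoron.”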
The CSET researchers go on to explore the taxonomies and schemas that will structure the database behind the China AI Watchboard pilot program:
(Box 1–3 image source: cset.georgetown.edu)
Compute-intensive “big data” approaches to advanced (broadly capable) AI are, both in China and globally, the main focus of resources and attention.
Brain-computer interfaces (BCIs) use AI to improve their operation – while opening a path to cognitive enhancement. Neuromorphic chips that imitate brain structure promise faster processing speeds for algorithms that support general intelligence and are being adapted in China for use in BCIs.
The report “makes a case for monitoring China’s AI development as a bellwether for AI risk, understood in terms of global safety and U.S. national security (both 安全 in Chinese). The authors’ efforts to build a relational database to support discovery and implement a scalable indications and warnings (I&W) watchboard are described” in the report. (2)
We applaud the CSET China AI Watchboard pilot program. OODA Loop is all about pattern recognition, sensemaking, risk awareness, and informing decision-making processes – and the pilot project has great potential for innovation.
We offer here a few future-forward insights, informed by our private sector business experience as well as years of managing public/private strategic partnerships between university researchers and industry practitioners. These insights are not designed to single out the CSET researchers and the CSET China AI pilot program – they are directed to the policy research community in general.
OODA Loop implications, considerations, and recommendations include:
- Take cues from a vital trend in venture capital, the financial sector, and the USG regarding innovation, collaboration, and funding at scale: Certain high-level stakeholders in government and industry are shaking off the historical albatross of the private sector’s lack of contribution to capital flows (‘skin in the game’) for deep tech and large-scale USG projects of any consequence – a pattern dating back to the industrial mobilization around WWII and, arguably, the private sector’s role in Eisenhower’s highway system build-out. Yes, there is the systems integrators’ historical relationship with the Department of Defense (DoD). Fair enough. But we are speaking of something entirely different here: a recent pattern of ambitious public/private strategic partnerships with actionable tactical goals and strategic foresight – launched only in the last two years. Based on our research, analysis, and feedback from the OODA Network membership, a new era of public/private partnership is afoot, represented by:
- The proactive national technology strategy in the Great Power Competition with China articulated in the Special Competitive Studies Project (SCSP) 189-page report, Mid-Decade Challenges for National Competitiveness;
- The work of the MIT AI Policy Forum (taking AI principles to AI practice at a Global Scale); and
- The structure of America’s Frontier Fund and the Quad Investor Network.
- Arguably, the bias here is toward industry practice over pure academic theory or the discovery of ‘high-level’ operational principles. Nevertheless, the opportunity exists for research policy organizations and small to mid-sized enterprises, as well as large companies, to replicate this public/private partnership approach, structured around specific projects (like the CSET China AI pilot program) or larger strategic initiatives.
- Policy Research Organizations like the CSET Should Not Reinvent the Wheel or Operate in a Product/Platform Design Silo: Time and time again, we have seen a university-affiliated project of great promise never leave the academic setting and sink into the “valley of death.” Strategic public/private partnerships with industry practitioners and the right companies, along with an all-of-government outreach program, feel like the logical next step for policy research organizations – leveraging product design and development and lean startup methodologies as a ‘common ground’ language to engage industry.
- CSET is on to something and already has an operational model in-house: the open-source taxonomy and collaborative partnerships designed into CSET research on AI Accidents and the Artificial Intelligence Incident Database are instructive. The question is: how do policy research organizations extend this open-source data taxonomy structure and collaborative approach to include industry practitioners, companies, and governmental organizations? Further: how can academia finally shake off its cultural bias against corporate/private sector sponsorship of research? In this era of simultaneous global crises and uncertainty, how can universities and policy research organizations shift to more of an “all hands on deck” approach to their outreach and collaboration efforts?
- We Need to Talk to Legal: University offices that manage sponsored research contracts and corporate legal departments need to commit to a new era of innovative contractual documents and strategic partnership – one that eliminates the tiresome “university legal department vs. corporate legal department” territorial standoffs which have significantly delayed or altogether killed many an attempt at innovative public/private partnership.
- Collective Intelligence and Community Building: The KPI (Key Performance Indicator) data visualization “community of practice” in the public and private sectors (with industry-leading companies like Tableau and Domo) has years of success at scale, proven best practices, and a seasoned, passionate community of “power users” across a variety of academic disciplines and industry sectors. What the CSET is calling a Watchboard is ostensibly the equivalent of the “data visualization dashboard” that is the lingua franca in corporate settings. Strategic partnerships with this community of practice during the pilot phase would yield interesting results – and create a context for iterative design input from a broad range of stakeholders early in the “Platform-as-a-Service” (PaaS) design process.
- Emerging Technologies: The artificial intelligence (AI) and machine learning (ML) startup space has interesting projects using large-scale social media and GPT-3 research datasets. Companies like Pulsar Platforms, Mergeflow, PrimerAI, Brandwatch (previously Crimson Hexagon), and Protagonist.io come to mind. Heuristics and innovative mental models are the design paradigm here, with network graph cluster visualizations and large-scale ML tracking datasets as the hybrid computational toolset. Blockchain ledgers and provenance designs are also promising in this arena of research.
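As a toy illustration of the network-graph clustering such tools perform at scale (the vendors’ actual pipelines are proprietary; this sketch links documents only by shared vocabulary and treats connected components of the resulting graph as clusters):

```python
from itertools import combinations

def cluster_documents(docs, min_shared=2):
    """Link documents sharing at least `min_shared` words, then return
    connected components of the graph as clusters (toy sketch)."""
    words = [set(d.lower().split()) for d in docs]
    # Build adjacency: an edge when two documents share enough vocabulary.
    adj = {i: set() for i in range(len(docs))}
    for i, j in combinations(range(len(docs)), 2):
        if len(words[i] & words[j]) >= min_shared:
            adj[i].add(j)
            adj[j].add(i)
    # Depth-first search to collect connected components.
    seen, clusters = set(), []
    for i in range(len(docs)):
        if i in seen:
            continue
        stack, component = [i], []
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            component.append(n)
            stack.extend(adj[n])
        clusters.append(sorted(component))
    return clusters
```

Production systems substitute embeddings or TF-IDF similarity for the shared-word edge test, but the shape of the output is the same: groups of related documents that a visualization layer can render as network-graph clusters.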
- Multi-sided Platforms and Strategic Partnership Ecosystems Enable Speed and Scale: Nurture a “community of practice” as a function of the iterative platform design process, creating network effects and opportunities for scalable growth.
- The Intelligence Community and Cybersecurity Practitioners are Logical Strategic Partners as Well: Realistically, specific to the CSET China AI Watchboard pilot, open-source collaboration is probably not an option for the data-sharing and platform architecture, given the national security implications of the classified information that may need to be accessed over time to grow the pilot into the robust intelligence tracking platform the researchers envision. As a result, focusing on the subject matter expertise and institutional knowledge of the IC and of cybersecurity private sector practitioners and companies is a logical next step.
- Ongoing CSET Research on China’s Advanced AI Research: We look forward to the next steps in this research project. The authors are preparing a follow-up report, “China’s Cognitive AI Research.”
If the end goal of the CSET China AI Watchboard product development cycle is to “inspire a general foreign technology monitoring platform,” that inspiration should be structured as a multi-sided platform and ecosystem of individuals, companies, and policy organizations. Inspire early and often – and in a fashion that optimizes the opportunity for a “technology monitoring platform” that is technology agnostic, built for open-source collaboration, [cyber]secure, and scalable.
OODA is here to help, leveraging our cybersecurity domain expertise and the ability to bring a wide spectrum of public/private stakeholders together to explore the potential of the CSET China AI Watchboard pilot program. Contact us here.
Representatives from some of the above-mentioned innovative companies (and, we hope, members of the CSET team) will be in attendance at OODAcon 2022 – The Future of Exponential Innovation & Disruption – where we will be discussing China’s AI Strategy, Next Generation Technology Requirements, Funding Opportunities for Innovation, and Digital Sovereignty, Blockchain, and AI.
OODAcon is about understanding the future and developing the resiliency to thrive and survive in an age of exponential disruption.
Society, technology, and institutions are confronting unprecedented change. The rapid acceleration of innovation, disruptive technologies and infrastructures, and new modes of network-enabled conflict require leaders to not only think outside the box but to think without the box.
The OODAcon conference series brings together the hackers, thinkers, strategists, disruptors, leaders, technologists, and creators with one foot in the future to discuss the most pressing issues of the day and provide insight into the ways technology is evolving.
OODAcon is the next-generation event for understanding the next generation of risks and opportunities.
OODA Network Members receive a 50% discount on ticket prices. For more on network benefits and to sign up see: Join OODA Loop
Please register to attend today and be a part of the conversation.