The revolution in Generative AI is one of many factors changing the cybersecurity equation. OODAcon 2023 examined this topic by bringing together leaders who have been making a difference in this domain for years.
While the field of cybersecurity is easily over 30 years old, it continues to be a nexus for significant risk in emerging technologies. This session explored how threat actors are adapting to new environments and adopting disruptive technologies to achieve their goals. OODA network members examining this topic at OODAcon included:
Kristin Del Rosso, Field CTO, Public Sector, Sophos
Juan Andres Guerrero-Saade, Sr Director of SentinelLabs, SentinelOne
Sarah Jones, Senior Threat Intelligence Analyst, Microsoft
Visi Stark, cofounder, The Vertex Project
“…threat actors’ evolution highlights the importance of staying vigilant and adapting defenses to counter their evolving tactics.”
The cyber threat landscape has evolved significantly over the last 30 years: Cybersecurity was not even considered its own industry vertical three decades ago, highlighting its rapid growth and importance. Trust relationships and the complexity of systems have played a crucial role in shaping the landscape – highlighting the importance of understanding and managing emerging risks in the cyber domain. There has been a progression from script kiddies to specialized ecosystems, with attackers adapting to new technologies quickly. Overall, the cyber threat landscape has witnessed a dynamic and complex interplay between attackers and defenders, with technologies favoring attackers in some cases.
Macro v. micro changes in the global cybersecurity ecosystem: Macro changes include the adoption of Anything as a Service (XaaS) models and generative AI, which have impacted the entire crime ecosystem. The use of AI has also led to an increase in the scale of attacks and to discussions about the future of cyber warfare. Additionally, the management of infrastructure – particularly in the cloud space – has forced significant changes. On the other hand, micro changes involve the evolution of threat actors and their tactics, such as the shift from open source tools to custom malware and the adoption of new technologies. There has been a continuous back-and-forth between script-kiddie tooling and custom exploits. Overall, the cybersecurity ecosystem is constantly evolving, with actors leveraging new technologies and defenders adapting to protect against emerging threats.
The evolution of threat intelligence is still in its infancy, with specialization in threat clustering and attribution: The approach to vulnerabilities has changed dramatically over time, shifting from shaming vendors to assuming that every system has vulnerabilities and focusing on making the environment where code lands more difficult to exploit. Actors have switched between custom exploits and open source tools, while niche actors provide infrastructure for state actors.
The specialization of threat clustering and attribution in cybersecurity has been on a Bell curve of development since the mid-1990s: The discussion touched upon the importance of context and trust in evaluating cybersecurity information, including threat clustering and attribution.
The Chinese threat actors started with poor phishing attempts but have since pivoted to custom tools and continue to evolve: The evolution of Chinese threat actors in cyber has seen a shift from being initially bad at phishing to becoming more agile and sophisticated. In the mid-2000s, they found a niche and started using custom tools, and their developer shops grew in size. Chinese threat actors have continued to evolve, switching to all custom malware and providing infrastructure to state actors. They are known for quickly adopting new technologies and have been observed using command and control (C2) servers. The Chinese threat actors’ evolution highlights the importance of staying vigilant and adapting defenses to counter their evolving tactics.
” Geopolitics and cybersecurity were interconnected, and cybersecurity always had a role to play in the geopolitical landscape.”
It was noted that Pop substrates, along with Pop service providers, form part of super specialized ecosystems within the cyber industry. Overall, the discussion recognized the significance of Pop substrates in shaping the cybersecurity landscape and the need for continuous adaptation to counter emerging threats.
Access as a service, both on offense and defense, was discussed in the context of managing emergent risks in the cyber world: It was noted that there has been good progress in discussing issues related to access as a service. Overall, the conversation emphasized the ongoing developments and challenges in the realm of access as a service in both offensive and defensive contexts.
There has been progress in how issues are discussed with a broader audience outside of subject matter experts within the cyber community: The Washington Post openly discussing its hack experience – “being owned” – was a healthy development, broadening the type of organization that openly addresses its hacking experience. The Post was a high-profile enough target that its compromise, and its candor about it, mattered.
The mid-1990s: Threat actors were adapting to new environments and adopting disruptive technologies to achieve their goals. Geopolitics and cybersecurity were interconnected, and cybersecurity always had a role to play in the geopolitical landscape. Specific developments and incidents in this period included Moonlight Maze (the server is now at the Spy Museum); it was also the “script kiddies” era of grabbing exploits from newsgroups, and .gov and .mil targets were ascendant. The activity behind Moonlight Maze was eventually linked to the Russian-born Turla APT.
Most recent 3 to 5 years in Cyber: It has been a period of a return to open source tools and a focus on maximizing ROI by getting into systems and scoping the lay of the land before an exploit. Despite significant funding, many cybersecurity professionals still rely on open-source tools. There were many mentions of “script kiddies” during this discussion – a term for low-skill individuals who use existing scripts and tools to conduct attacks. This practice has only evolved and is still in use today.
“The use of expert systems can enhance various areas such as coding and decision-making [with] concerns raised about the impact of these systems on the security ecosystem, including the potential for exploitation and fraud at scale.”
The XaaS business model is now an entire slice of the crime ecosystem: The role of XaaS (Anything as a Service) in the global crime ecosystem is significant. Criminal actors are leveraging XaaS, accelerating their activities. The use of generative AI, such as ChatGPT, provides benefits to actors in the crime ecosystem. XaaS is enabling criminals to adopt new technologies quickly and scale their attacks. Infrastructure, particularly API and LLM trust and safety, plays a crucial role and should be managed by hyperscalers. The accessibility of infrastructure lowers the barrier to entry, leading to a flood of basic but exploitative attacks. The accuracy of large language models (LLMs) in understanding structural truth and timing is viewed skeptically – yet the offense side of the ecosystem is experiencing an explosion precisely because offense has a reduced need for accuracy. Overall, XaaS is shaping the crime ecosystem globally by enabling agility, scalability, and the leveraging of advanced AI technologies.
The impact of AI on the scale of future attacks?: The impact of AI on the scale of future cyber attacks was a topic of discussion. Adversaries are adopting new technologies quickly, and AI can potentially enhance their capabilities. AI systems can be misused and become a threat, alongside the adversary’s use of AI, specifically adversarial machine learning. The cloud space is forcing changes in managing infrastructure and has implications for cyber attacks. The potential impacts of AI on cyber attacks include large-scale phishing, exploiting bugs, and enabling impactful exploits. However, skepticism about the accuracy of AI models should be weighed against the fact that these same models do not need to be accurate to be useful on the offense side. Overall, AI has the potential to significantly impact the scale and sophistication of future cyber attacks, urging the need for robust defenses and incident plans.
Is the future of cybersecurity the same as the future of cyber war?: The future of cyber security and cyber war are intertwined. It is evident that cyber security is crucial in countering cyber warfare. While the two concepts are not exactly the same, they are closely connected – as advancements in cyber security are necessary to mitigate the risks posed by cyber war. Overall, the future of cybersecurity and cyber war will continue to evolve in tandem, with the need for effective security measures playing a vital role in addressing the challenges of cyber warfare. Threat actors are constantly adapting to new technologies and environments, leveraging disruptive technologies to achieve their goals.
Expert systems in the context of trust and security, both in computer systems and societal systems: The importance of managing trust within organizations was highlighted, especially when dealing with misinformation and disinformation. The discussion also touched upon the need for resilience in manufacturing and supply chains. Furthermore, the potential risks and challenges of specialized computing, such as error correction in Quantum arrays, were mentioned. Overall, the discussions emphasized the need to address trust, security, and adaptability in the development and deployment of expert systems.
Expert systems in the context of enhancing human capabilities: The discussion highlighted the role of expert systems in enhancing human capabilities and the potential benefits they offer. The use of expert systems can enhance various areas such as coding and decision-making. However, there were also concerns raised about the impact of these systems on the security ecosystem, including the potential for exploitation and fraud at scale. The discussion emphasized the need for managing trust and safety, and the role of infrastructure and hyperscalers in ensuring responsible use of expert systems. Overall, the discussion emphasized the importance of expert systems in augmenting human capabilities while also highlighting the need for careful management and consideration of their impact.
“What draconian measures need to be addressed? One panelist quipped: Does the U.S. need a “Great Firewall of America”?”
Infrastructure as a barrier to entry was discussed in the context of basic but exploitative attacks (such as elder fraud at scale): These basic exploits, although not glamorous, can have a significant impact. The discussion emphasized the need for hyperscalers to manage API and LLM trust and safety to prevent a flood of such exploits. The potential for a proliferation of these exploits on the offensive side is fueled by the fact that the LLMs used to deploy them at unprecedented speed and scale have no need to be accurate. The management of infrastructure, particularly in the cloud space, was highlighted as a significant factor in addressing these challenges. Use cases – such as co-pilot coding – were discussed as a low-accuracy approach which could fuel these future exploits, although the fact remains that the safe space of code is smaller than that of human language. ChatGPT is noted for its high accuracy but has issues with structural truth and time relevance. For example, it can be used to target LinkedIn profiles and mimic speech patterns for email spoofing.
Microsoft (MSFT) has had some great wins for defenders: One of the panelists did not want to speak for the MSFT email team, but the impact of artificial intelligence on large-scale phishing and bug detection in various products has been felt at the company – also noting that the MSFT defense was equally robust. Overall, looking across all MSFT products and their impacts, there have been great wins for defenders.
Cybersecurity, the Cloud, and the Defense Production Act: The challenges faced in managing infrastructure in the cloud and the need to consider embracing regulation in cybersecurity were discussed – with an emphasis on the importance of addressing the “Cloud problem” and the U.S. policy problem in cybersecurity (including potential Defense Production Act-style moves directed at big tech companies). The thing really forcing changes is the management of infrastructure. The Cloud is a substrate that is vulnerable (with zero visibility). Should companies “bite the bullet” and go on the cloud? The question remains: How much are we really willing to enable/embrace regulation when even the .gov sector is wholesale on the cloud – and what role is lobbying playing on both sides of the argument?
A future attack on Taiwan was discussed in relation to cloud computing, U.S. policy, and the use of new technologies by attackers: There were concerns about the need for a shift in the ecosystem to address potential attacks. Developments that may be at play in the event of an attack of Taiwan:
- the impact of AI, particularly generative AI;
- the future of cyber warfare and the role of expert systems;
- infrastructure management and the XaaS model were highlighted as factors influencing the threat landscape.
Overall, the discussion was a glimpse into the various aspects of cybersecurity in the event of a future attack on Taiwan. What draconian measures need to be addressed? One panelist quipped: Does the U.S. need a “Great Firewall of America”?
“LLMs get modelled and trained – and the only thing that can attack them has to be modelled against that LLM.”
Trust plays a crucial role in discussions about the future of cybersecurity: As technological advancements and geopolitical uncertainties continue to shape the cybersecurity landscape, it is important for individuals and organizations to remain adaptable, innovative, and vigilant. Trust is necessary to foster a culture of learning, risk-taking, and problem-solving. Emphasizing the need for courage in embracing failure and encouraging risk-taking is vital for fostering innovation in cybersecurity. Additionally, trust is essential in building partnerships and collaborations within the cybersecurity community. As the future of cybersecurity involves increased specialization and the exploration of talent in unconventional places, trust becomes even more important in selecting and working with individuals and organizations outside of traditional sources. Therefore, trust is a foundational element in shaping a better future for cybersecurity. Is the trend towards organizations making “circle of trust” decisions? One panelist joked: “I can’t wait to see what is hosted in Frankfurt for no reason.”
Know Your Customer (KYC) for the cloud currently consists of a lot of simple sound bites and marketing efforts – none of which are deployable. Attackers are known to operate from U.S.-based cloud infrastructure – and it is clear that important “inside baseball” discussions are not too far off.
Are there ways to predict threats and analyze emerging threats for emerging technologies? Organizations can make circle of trust decisions and implement measures to address vulnerabilities. Microsoft, for example, has learned many lessons over nearly 50 years and actively works with red teams and network defenders to enhance security. This is not just bug bounty programs – it is an effort to think through many different scenarios before a product is developed. The field of cybersecurity continues to face significant risks in emerging technologies, and threat actors are adapting to new environments and adopting disruptive technologies. It is crucial to have a comprehensive understanding of the risks that lie ahead and to take action in guiding organizations through this transition. Things are not perfect, but they are getting better.
What is America’s defense posture? America’s defense posture encompasses various aspects such as trust, cloud vulnerability, predicting threats, and aligning defense and offense metrics. Metrics to measure defense versus offense have been misaligned for many years. The overall effort is to make defense solid state and to produce the right capabilities. Development, as a result, should be aimed at targets that “look like this,” not development based on abstract “Belfer Center”/think tank concepts alone. The effort should be to build up a new model for new capabilities. It involves organizations making decisions based on trust and implementing measures to secure cloud infrastructure; predicting threats requires analyzing emerging technologies and understanding attack surfaces. The defense posture also involves lessons learned over the years, collaboration with red teams and network defenders, and continuous improvement. The panelists agreed that there are vast “unknown unknowns”. ML was not a threat in December 2022 – then ChatGPT 3.5 hit and a whole new threat landscape emerged: accelerating cycles of chatbots – on both sides – going at each other head to head. LLMs get modelled and trained – and the only thing that can attack them has to be modelled against that LLM. The DARPA Cyber Grand Challenge was referenced during this discussion.
Attacks on datasets based on AI/ML training have been addressed through various measures: Developers and organizations are aware of the potential threats and pitfalls. Model classifiers are used to ensure the right security for the source dataset and mitigate insider threats. OpenAI, for instance, is conscious of these challenges and aims to evaluate and tackle them sooner. Additionally, the importance of regulations and enforcement to prevent biases and ensure diverse datasets is recognized. It is noted that adversaries are also harnessing disruptive technologies, including cyber-attacks and disinformation campaigns. One panelist chimed in: “It is also a ‘Doom and Gloom’ problem.” Overall, the evaluation of sticky problems is happening sooner – which is encouraging.
“…the increasing shift towards AI-generated content has reached a tipping point where the majority of content is now being created by AI rather than humans.”
We do not see the immediate costs of Cyber Espionage – which is a problem: The cost of cyber espionage was discussed in the context of evolving threats and the need for robust defenses. The discussion highlighted the rapid adoption of new technologies by attackers and the changing approach to treating vulnerabilities. Additionally, the impact of generative AI in enhancing security workstreams was noted. The discussion also touched upon the trustworthiness of information and the significance of context in evaluating trust. Overall, the discussion emphasized the complexity of the cyber threat landscape and the importance of addressing these challenges effectively. Cyber espionage is a part of this equation. A panelist noted that “2014-2015 was an inflection point for APTs coming for a bank (Thank you North Korea). It is kind of important if someone is siphoning your organization’s stuff – less so for LLMs.”
Security for AI or AI for Security?: This question explores the relationship between AI and security. Model drift and the increasing reliance on AI-generated content are also concerns. On the other hand, AI can be utilized for security purposes, such as in the context of managing emergent risks and identifying solutions to trust threats. Adversaries may also exploit AI-enabled robotic weapons, highlighting the need to prepare for such scenarios.
Organic data v. Synthetic data: The rise of AI-generated content and the challenge of directing bots to train on organic data are topics of concern. Directing bots to train only on organic data is damn near impossible already.
We are fast approaching a “tipping point” – when non-human-generated content will vastly outnumber human-generated content: We are currently in an era where there is an unprecedented amount of data available. This abundance of data has far exceeded the volume of data that humans can create organically. Generative data, created by AI systems, is becoming the dominant source of content creation.
Content generation has been addressed – content storage has not: As noted above, the increasing shift towards AI-generated content has reached a tipping point where the majority of content is now being created by AI rather than humans. However, there is a concern about the lack of attention given to content storage.
“In the tunnel” awareness?: “In the tunnel” awareness refers to understanding and being aware of activities or events that occur within a specific context or environment. It is related to the visibility and monitoring of data, content generation, and the centralization of resources and visibility. Additionally, discussions about space situational awareness and the need for better awareness of space debris and collisions are also relevant. In the context of military and defense missions, space situational awareness is an extension of this crucial “in the tunnel” awareness discussion.
Centralization does or does not promote visibility? Centralization does promote visibility, especially in the context of inter-organizational analytics. It allows for greater transparency and explainability in the use of AI-generated content. However, it is important to note that we are currently experiencing a paradigm shift in technology, and there is a lack of a roadmap for the future. Additionally, the impact of centralization on visibility may change with the increasing use of AI-generated content and the desire for machine learning to run “on device”. These outsized capabilities on devices that we cannot track – what does this look like in three years? While centralization can enhance visibility, it is crucial to consider the evolving landscape and how these untracked device capabilities may affect visibility in the coming years.
Explainability, Transparency and AI-generated watermarking: Explainability and transparency are crucial aspects in the context of AI-generated watermarking. However, the current state of AI raises concerns about the potential misuse of deep learning models, as well as the ability to track and control their outsized capabilities. While there is recognition of the need for explainable AI and the potential boost it can provide, there are also challenges in terms of trust, transparency, and the impact of government involvement. Furthermore, the increasing reliance on AI-generated content raises the issue of model drift, where a significant portion of content is already being generated by AI, leading to potential security risks. This further reinforces the importance of addressing explainability, transparency, and security in the realm of AI-generated watermarking, while also acknowledging the ongoing changes and uncertainties in this field. As one panelist noted: “We have great answers for what we can see, but we are in the midst of everything changing.”
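For readers unfamiliar with how statistical watermarking of AI-generated text can work, here is a minimal sketch of the published “green list” idea: a generator biases its sampling toward a keyed, pseudo-random subset of the vocabulary, and a detector checks whether a text contains improbably many of those tokens. The key, function names, and toy vocabulary below are illustrative assumptions, not any vendor's actual scheme.

```python
# Illustrative sketch of "green list" statistical watermarking for
# generated text. A generator nudges sampling toward a keyed
# pseudo-random half of the vocabulary; a detector computes a z-score
# for the observed green-token fraction. All names are hypothetical.
import hashlib
import math


def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Pseudo-randomly assign ~half the vocabulary to the 'green list',
    keyed on the previous token so the split shifts at each position."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction_z(tokens) -> float:
    """z-score of the observed green-token count against the 50%
    expected for unwatermarked text; a large z suggests a watermark."""
    hits = sum(is_green(tokens[i - 1], tokens[i]) for i in range(1, len(tokens)))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)


def demo_generate(n: int = 50, vocab=("alpha", "beta", "gamma", "delta")):
    """Toy 'watermarked generator': crudely prefer green-list tokens."""
    tokens = ["start"]
    for _ in range(n):
        for cand in vocab:
            if is_green(tokens[-1], cand):
                tokens.append(cand)
                break
        else:  # no green candidate available at this position
            tokens.append(vocab[0])
    return tokens
```

Unwatermarked text yields a z-score near zero, while even a few dozen tokens from the biased generator produce a large positive z. Real schemes must also survive paraphrasing and token substitution, which is one reason watermark detection remains an open problem.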
For the program notes for this conversation, see Cyber is the New Cyber – Managing Emergent Risks.
The full agenda for OODAcon 2023 can be found here – Welcome to OODAcon 2023: Final Agenda and Event Details – including a full description of each session, expanded speaker bios (with links to current projects and articles about the speakers) and additional OODA Loop resources on the theme of each panel.
OODAcon 2023: Event Summary and Imperatives For Action
Download a summary of OODAcon including useful observations to inform your strategic planning, product roadmap and drive informed customer conversations. This summary, based on the dialog during and after the event, also invites your continued input on these many dynamic trends. See: OODAcon 2023: Event Summary and Imperatives For Action.