As Baldur Bjarnason’s new book eloquently explains, the concept of “poisoning” AI models brings to light the rising challenges in AI ethics and security. The conversation has become both daunting and intriguing, stirring the murky waters of technological evolution and the ethical conundrums it raises.

The prospect of ChatGPT or any AI being compromised or “broken” may sound like the plot of a cyber thriller. But it’s essential to treat this not as a hypothetical Armageddon, but as a powerful call to action to refine and secure an evolving technology. I’d like to think of Tuesday’s statement by OpenAI and others that “mitigating the risk of extinction from AI should be a global priority” as a sign that the industry seeks to work with all stakeholders to make AI’s promise a reality – and that it recognizes we need to view AI in the same light as nuclear war and pandemics.

The crux of these conversations revolves around a core understanding: AI models, like any software, are susceptible to exploitation. The current predicament recalls the early days of the internet, when cyber vulnerabilities were rampant and security strategies were in their infancy.
Full opinion: We need to refine and secure AI, not turn our backs on the technology.