On one side, the rise of SaaS LLMs (ChatGPT, GPT-4, Bing with AI, Bard) makes this a third-party risk management problem for security teams. And that’s great news, because it’s rare that third parties lead to breaches … ahem. Hope you caught the sarcasm there. Security pros should expect their company to buy generalized models from big players such as Microsoft, Anthropic, and Google, or to see their existing vendors integrate with those models.

Short blog, problem solved, right? Well … no. While the hype certainly makes it seem like this is where all the action is, there’s another major problem for security leaders and their teams: fine-tuned models are where your sensitive and confidential data is most at risk. Your internal teams will build and customize fine-tuned models using corporate data that security teams are responsible and accountable for protecting. Unfortunately, the time horizon for this is not so much “soon” as it is “yesterday.” Forrester expects fine-tuned models to proliferate across enterprises, devices, and individuals, all of which will need protection.

You can’t read a blog about generative AI and large language models (LLMs) without a mention of the leaked Google document, so here’s an obligatory link to “We have no moat, and neither does OpenAI.”
Full commentary: How To Defend Your AI Models.