CIOs and CISOs have long grappled with the challenge of shadow IT: technology used within an enterprise without official sanction from the IT or security department. According to Gartner research, 41% of employees acquired, modified, or created technology outside of IT's visibility in 2022, and that number was expected to climb to 75% by 2027. Shadow IT can introduce a whole host of security risks, for one primary reason: you can't protect what you don't know about.

Not surprisingly, we are seeing a similar phenomenon with AI tools. Employees are increasingly experimenting with the likes of ChatGPT and Google Bard to do their jobs. And while that experimentation and creativity can be a good thing, the problem is that these tools are being used without IT or security's knowledge.

This leads to the challenge CISOs and other leaders face: how do you enable employees to use their preferred AI tools while also mitigating potential risks to the organization and ensuring they don't create cybersecurity nightmares?

It's little wonder that employees want to use generative AI, machine learning, and large language models. These technologies bring multiple benefits, including the potential to significantly improve process efficiencies, personal productivity, and even customer engagement relatively quickly.