For all the encouragement in the corporate world for business leaders to strategically leverage generative artificial intelligence, hesitation has been growing as well: some company leaders increasingly fear the drawbacks of generative AI, particularly the data security risks these tools pose. In light of these growing concerns, some leaders might decide to ban the use of these AI tools outright. But such a reactive approach is not necessarily the answer.

The fears some executives have around data security and generative AI are far from unfounded. These tools pose three main data security risks.

First, there is the issue of unintended data sharing with AI companies. In April 2023, Writer, an enterprise generative AI platform, surveyed 66 people in director-level positions or higher at organizations with more than a thousand employees. A key finding? Forty-six percent of respondents believed “someone in their company may have inadvertently shared corporate data with ChatGPT.” Information employees give AI tools, including customer data and intellectual property, could become the property of the companies behind those tools. For instance, according to legal intelligence website JD Supra, OpenAI’s terms of use “do not provide any protection for confidential information that may be input by a user into ChatGPT.” Moreover, if employees accidentally enter sensitive consumer data into ChatGPT, they may violate their employer’s privacy policy and trigger notification requirements under data breach laws such as the CCPA, or other data privacy laws such as the GDPR.
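One alternative to an outright ban is screening prompts for sensitive data before they leave the organization. The sketch below is purely illustrative and not from the commentary: the pattern names and regular expressions are simplified assumptions, and a real deployment would rely on a vetted data-loss-prevention tool rather than hand-rolled rules.

```python
import re

# Hypothetical, simplified patterns for common categories of sensitive data.
# A production system would use a maintained DLP library, not ad hoc regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the names of the sensitive-data categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Allow a prompt to be sent to an external AI tool only if no PII is detected."""
    return not find_pii(prompt)
```

Gating outbound prompts this way lets employees keep using generative AI tools while reducing the chance that customer data or intellectual property is shared inadvertently.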
Full commentary: Generative AI Poses Risks, But Outright Bans Aren’t The Best Solution.