Microsoft raced to put generative AI at the heart of its systems. Ask a question about an upcoming meeting and the company’s Copilot AI system can pull answers from your emails, Teams chats, and files—a potential productivity boon. But these exact processes can also be abused by hackers. Today at the Black Hat security conference in Las Vegas, researcher Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers, including using it to provide false references to files, exfiltrate some private data, and dodge Microsoft’s security protections. One of the most alarming displays, arguably, is Bargury’s ability to turn the AI into an automatic spear-phishing machine. Dubbed LOLCopilot, the red-teaming code Bargury created can—crucially, once a hacker has access to someone’s work email—use Copilot to see who you email regularly, draft a message mimicking your writing style (including emoji use), and send a personalized blast that can include a malicious link or attached malware. “I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf,” says Bargury, the cofounder and CTO of security company Zenity, who published his findings alongside videos showing how Copilot could be abused. “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.” That demonstration, as with other attacks created by Bargury, broadly works by using the large language model (LLM) as designed: typing written questions to access data the AI can retrieve. However, it can produce malicious results by including additional data or instructions to perform certain actions. The research highlights some of the challenges of connecting AI systems to corporate data and what can happen when “untrusted” outside data is thrown into the mix—particularly when the AI answers with what could look like legitimate results.
About OODA Analyst
OODA is comprised of a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.