It’s been an eventful week for AI startup Anthropic, creator of the Claude family of large language models (LLMs) and associated chatbots. The company says that on Monday, January 22nd, it became aware that a contractor had inadvertently sent a file containing non-sensitive customer information to a third party. The file detailed a “subset” of customer names, as well as open credit balances as of the end of 2023. “Our investigation shows this was an isolated incident caused by human error — not a breach of Anthropic systems,” an Anthropic spokesperson told VentureBeat. “We have notified affected customers and provided them with the relevant guidance.”

The disclosure came just before the Federal Trade Commission (FTC), the U.S. agency in charge of regulating market competition, announced it was investigating Anthropic’s strategic partnerships with Amazon and Google — as well as those of rival OpenAI with its backer Microsoft. Anthropic’s spokesperson emphasized that the leak is in no way related to the FTC probe, on which they declined to comment.

The PC-centric news outlet Windows Report recently obtained and published a screenshot of an email Anthropic sent to customers acknowledging the leak of their information by one of its third-party contractors. The information leaked included the “account name….accounts receivable information as of December 31, 2023” for customers.
Full story: Anthropic confirms it suffered a data leak due to human error, says it had nothing to do with the ongoing FTC probe.