Highlights:

  • The hacker, known as “Gloomer,” claimed responsibility for the breach and posted the stolen data on Breach Forums, a notorious hacking site the U.S. Federal Bureau of Investigation has repeatedly tried to shut down.
  • The exposed data includes links to files uploaded to OmniGPT’s servers, potentially containing sensitive information in PDF and document formats.

OmniGPT Inc., an artificial intelligence aggregator, has reportedly suffered a data breach: a malicious actor leaked more than 34 million lines of user conversations, along with 30,000 user email addresses and phone numbers, on a well-known hacking forum.

As an AI aggregator, OmniGPT serves as an intermediary, enabling users to access large language models from multiple providers, including OpenAI’s ChatGPT, Google LLC’s Gemini, and Anthropic PBC’s Claude. This aggregation model has gained popularity among users who want to experiment with different AI models without maintaining separate subscriptions.
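Architecturally, an aggregator of this kind is a routing layer that holds every user’s conversation history, plus its own upstream provider credentials, in one place, which is what makes a single breach so consequential. A minimal, purely hypothetical Python sketch of that routing pattern (the names, endpoints and auth details are illustrative placeholders, not OmniGPT’s actual code or the vendors’ real API contracts):

```python
import os

# Purely illustrative provider table: endpoints and auth headers are
# simplified placeholders, not the vendors' actual API contracts.
PROVIDERS = {
    "openai":    {"endpoint": "https://example.com/openai/chat",    "key_env": "OPENAI_API_KEY"},
    "gemini":    {"endpoint": "https://example.com/gemini/chat",    "key_env": "GEMINI_API_KEY"},
    "anthropic": {"endpoint": "https://example.com/anthropic/chat", "key_env": "ANTHROPIC_API_KEY"},
}

def route_request(provider_name: str, user_id: str, message: str) -> dict:
    """Build an upstream request for whichever model the user picked.

    A real aggregator would also persist the conversation server-side,
    which is precisely the kind of data exposed in a breach like this one.
    """
    provider = PROVIDERS[provider_name]
    return {
        "url": provider["endpoint"],
        "headers": {"Authorization": f"Bearer {os.environ.get(provider['key_env'], '')}"},
        "body": {"user": user_id, "message": message},
    }
```

The convenience of one subscription covering many models thus comes at a cost: the aggregator, not the individual model vendors, becomes the single store of users’ prompts, uploads and metadata.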

The hacker, known as “Gloomer,” claims responsibility for the breach and shared the stolen data on Breach Forums—a notorious hacking site that the U.S. Federal Bureau of Investigation has previously attempted to shut down, most recently in May 2024. Despite these efforts, the forum has resurfaced in various forms.

“This leak contains all messages between the users and the chatbot of this site, as well as all links to the files uploaded by users and also 30k user emails,” Gloomer wrote on the site. “You can find a lot of useful information in the messages, such as API keys and credentials. Many of the files uploaded to this site are very interesting because sometimes they contain credentials/billing information.”

The exact method of the breach remains undisclosed, but researchers at Hackread.com report that the leaked data includes user-chatbot conversations and links to uploaded files, some of which contain credentials, billing details, and API keys. Additionally, over 8,000 email addresses shared by users during chatbot interactions were found in the leak.
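Leaks like this are typically triaged by pattern-matching the transcripts against known secret formats. As a rough, hypothetical illustration (not a tool OmniGPT or Hackread.com has published), a Python sketch using publicly documented key prefixes might look like this:

```python
import re

# Hypothetical rule set based on publicly documented secret formats;
# real scanners such as gitleaks or truffleHog ship far larger ones.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]{2,}\b"),
}

def scan_transcript(text: str) -> dict[str, list[str]]:
    """Return every substring of a chat transcript that matches a known secret format."""
    hits: dict[str, list[str]] = {}
    for name, pattern in SECRET_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits

# Example: a single leaked message can expose both a key and an email address.
sample = "my key is sk-abc123def456ghi789jkl0 and you can reach me at jane@example.com"
print(scan_transcript(sample))
```

That both attackers and researchers can run this kind of scan over 34 million lines of chat in minutes is what makes pasted credentials in chatbot conversations so dangerous.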

The exposed data also includes file upload links to documents stored on OmniGPT’s servers, potentially containing sensitive information in PDF and document formats. More significantly, these links serve as strong evidence that the data was indeed stolen from OmniGPT. The company has yet to issue a statement on the breach.

“If confirmed, this OmniGPT hack demonstrates that even practitioners experimenting with bleeding edge technology like generative AI can still get penetrated and that industry best practices around application security assessment, attestation and verification should be followed,” said Andrew Bolster, senior research and development manager at application security provider Black Duck Software Inc. “But what’s potentially most harrowing to these users is the nature of the deeply private and personal ‘conversations’ they have with these chatbots; chatbots are regularly being used as ‘artificial-agony-aunts’ for intimate personal, psychological or financial questions that people are working through.”

Eric Schwake, director of cybersecurity strategy at API security company Salt Security Inc., warned of the risks involved, stating that “though the reported data leak involving OmniGPT awaits official confirmation, the possible exposure of user information and conversation logs — including sensitive items like API keys and credentials — highlights the urgent need for strong security measures in AI-powered platforms.”

“Should this be verified, the incident would bring to light the risks tied to the storage and processing of user data in AI interactions. Organizations creating and deploying AI chatbots must prioritize data protection throughout the entire lifecycle, ensuring secure storage, implementing access controls, utilizing strong encryption and conducting regular security evaluations,” he added.
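Schwake’s point about encryption is concrete: if conversation logs are encrypted before they reach disk, a raw database or file-store dump yields only ciphertext. A minimal sketch, assuming Python and the widely used cryptography library (OmniGPT’s actual stack is not public):

```python
from cryptography.fernet import Fernet

# In production the key would live in a secrets manager or KMS, never
# hard-coded or stored alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_message(plaintext: str) -> bytes:
    """Encrypt a chat message before persisting it, so a raw data dump
    yields only ciphertext."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def load_message(ciphertext: bytes) -> str:
    """Decrypt a stored message for an authorized reader."""
    return fernet.decrypt(ciphertext).decode("utf-8")

token = store_message("user: here is my billing address ...")
assert load_message(token) == "user: here is my billing address ..."
```

Even with encryption at rest, secrets pasted into prompts remain readable to anyone who can query the application, which is why the access controls and regular security evaluations Schwake mentions matter just as much.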