Firebase Misconfiguration Exposes 300M Messages From Chat & Ask AI Users
A recent data breach has exposed approximately 300 million private messages from over 25 million users of the Chat & Ask AI app, a popular platform that allows users to interact with various AI models, including OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Cause of the Breach
The breach traces back to a misconfigured Firebase database. Firebase is a Google backend service that apps use to store and sync data, and access to it is governed by developer-written security rules; in this case the rules were left open, so the database was readable by anyone, with no password or authentication required.
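For context, the difference between an exposed and a protected Realtime Database comes down to those rules. The sketch below is illustrative only, since Codeway's actual configuration has not been published: the first block shows the wide-open style of rules that produces exactly this kind of exposure, and the second a locked-down, per-user alternative (the "messages" path is a hypothetical data layout).

```json
// Wide-open rules: the entire database is world-readable and
// world-writable to anyone who discovers its URL.
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

```json
// Locked-down alternative: each authenticated user can read and write
// only their own subtree. The "messages" path is hypothetical; real
// rules depend on the app's data layout.
{
  "rules": {
    "messages": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

Projects created in Firebase's "test mode" start with rules that allow similarly unrestricted access for a limited period, which is one reason this class of exposure remains so common.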
Exposed Data
The exposed data included full chat histories and user names; many of the conversations were deeply personal, ranging from discussions of illegal activities to requests for suicide assistance.
The exposure was discovered by an independent researcher, who noted that the data was stored unencrypted and could be read by anyone with an internet connection.
Previous Breaches
This is not the first time an AI chat platform has faced a data exposure incident. Earlier, OmniGPT suffered a breach that exposed sensitive user information, highlighting the risks associated with deploying AI tools without proper backend safeguards.
Consequences and Response
The Chat & Ask AI breach is particularly concerning, as it affects a large number of users and involves highly personal conversations.
The researcher who discovered the breach, known as Harry, built a tool to scan other apps for the same weakness and found that 103 of the 200 iOS apps he tested shared it, together exposing tens of millions of files.
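The details of Harry's scanner are not public, but the core check such a tool performs is straightforward to sketch. A Firebase Realtime Database exposes a REST API, and an unauthenticated read of the root path reveals immediately whether the rules are open. The Python below is a minimal, hypothetical version of that check, not the researcher's actual tool; a real scanner would also need to extract candidate database hostnames from app binaries and handle Cloud Firestore, which uses a different API.

```python
import sys

import requests  # third-party: pip install requests


def rtdb_is_world_readable(db_host: str) -> bool:
    """Return True if a Firebase Realtime Database answers unauthenticated reads.

    db_host is a hostname like "example-app-default-rtdb.firebaseio.com"
    (hypothetical). The REST API serves any node as JSON at "<path>.json";
    "shallow=true" returns only top-level keys, keeping the probe small.
    An open database answers HTTP 200; a locked-down one answers 401
    with a "Permission denied" error.
    """
    url = f"https://{db_host}/.json"
    try:
        resp = requests.get(url, params={"shallow": "true"}, timeout=10)
    except requests.RequestException:
        return False  # unreachable or refused; not confirmed open
    return resp.status_code == 200


if __name__ == "__main__":
    host = sys.argv[1]
    verdict = "WORLD-READABLE" if rtdb_is_world_readable(host) else "locked down (or unreachable)"
    print(f"{host}: {verdict}")
```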
He also created a website where users can check if their apps are at risk and alerted the company behind Chat & Ask AI, Codeway, to the issue.
Codeway reportedly fixed the error within hours of being notified, though the database may have been exposed for a long period before it was secured.
According to James Wickett, CEO of DryRun Security, the breach highlights the risks that come with building AI into real products. “Prompt injection, data leakage, and insecure output handling stop being academic once AI systems are wired into real products, because at that point the model becomes just another untrusted actor in the system,” he said.
Protecting Yourself
To protect themselves, users are advised to avoid using their real names or sharing sensitive documents with chatbots and to stay logged out of social media while using these tools.
Conclusion
The breach serves as a reminder that private data is only as secure as a single developer’s checklist, and that traditional application security failures can have devastating consequences when combined with AI systems.
The incident has intensified concerns about the security of AI chat platforms and renewed calls for stricter safeguards around user data.
As AI is built into more products and services, the opportunities for this kind of exposure will only grow, and developers handling deeply personal conversations have a corresponding obligation to secure their backends before shipping.
