Scale AI is facing backlash after reports revealed serious security flaws in its use of public Google Docs for sensitive AI training data.
The issue started when Business Insider found thousands of confidential work documents left accessible via public links. These files exposed projects for Google, Meta, and Elon Musk’s xAI, including documents on Meta’s AI speech standards and fixes for Google’s Bard chatbot.
Soon after, it emerged that Scale also left contractor info publicly available—private emails, work quality ratings, cheating suspicions, and payment details were all exposed. Some docs could even be edited by anyone with the link.
The revelations come on the heels of Meta’s $14.3 billion investment in Scale AI. Yet, after these leaks surfaced, major clients including Google, OpenAI, and xAI paused work with the startup.
Contractors described the system as “incredibly janky,” saying public Google Docs were widely used to share work quickly, despite the cybersecurity risks.
Cybersecurity experts told BI this setup risks social engineering attacks and malware uploads.
Scale responded with this statement:
"We are conducting a thorough investigation and have disabled any user’s ability to publicly share documents from Scale-managed systems," a Scale AI spokesperson said.
"We remain committed to robust technical and policy safeguards to protect confidential information and are always working to strengthen our practices."
Meta declined to comment. Google and xAI did not respond.
The leak raises questions about whether Meta was aware of these risks before investing and whether Scale has done enough to secure its AI training pipeline amid growing scrutiny.