OpenAI is facing growing pains as it scales rapidly while staying secretive, according to a departing engineer’s insider account. Calvin French-Owen, who helped build the coding AI Codex, quit OpenAI three weeks ago and has just shared a detailed blog post on what it’s like inside the AI giant.
The company exploded from 1,000 to 3,000 employees in a year. OpenAI is the fastest-growing consumer product ever, with ChatGPT hitting over 500 million active users by March. That growth sparked chaos.
French-Owen laid it out bluntly:
“Everything breaks when you scale that quickly: how to communicate as a company, the reporting structures, how to ship product, how to manage and organize people, the hiring processes, etc.”
Teams duplicate work constantly, creating multiple versions of the same libraries. The codebase is messy, a “dumping ground” where performance problems and breakages are common. Skill levels vary widely, from Google-level pros to fresh PhDs. Managers are aware of the issues and pushing fixes.
Despite its scale, the company keeps a startup’s “launching spirit,” running almost entirely on Slack and moving fast with little red tape. French-Owen’s team built Codex in just seven sleepless weeks.
“I’ve never seen a product get so much immediate uptick just from appearing in a left-hand sidebar, but that’s the power of ChatGPT.”
OpenAI stays secretive due to heavy scrutiny and worries about leaks, but closely monitors viral posts on X (formerly Twitter). French-Owen says internal culture “runs on twitter vibes.”
He also knocked the biggest myth about the company:
“The biggest misconception about OpenAI is that it isn’t as concerned about safety as it should be.”
OpenAI focuses heavily on real-world safety risks: hate speech, abuse, political bias, bio-weapons, self-harm, and prompt injection. It’s not ignoring long-term AI dangers either; researchers are working on them. The stakes are high, with millions using the models daily.
French-Owen left to return to startup life, a reminder that despite its giant status, OpenAI still feels like a scrappy hypergrowth shop.
Read his full reflections here.