OpenAI’s o3 AI pinpoints exact vacation spots from vague photos, stoking fresh digital privacy fears
OpenAI’s o3 AI just showed how easy it is to identify exact locations—even from ordinary, nondescript images. A simple photo of a child flying a kite on a cloudy beach wasn’t just a beach to the AI. It accurately identified Marina State Beach in Monterey Bay, where the family vacationed.
The revelation caught the author off guard, admitting they usually ignore privacy warnings and click “accept all” on cookies. The image seemed too plain to reveal anything detailed. But AI uses subtle cues—the wave patterns, sky, slope, and sand texture—to zero in on locations.
This breakthrough signals a shift. Where tracking someone’s location used to need a lot of effort, AI now makes it almost trivial. Anyone determined to stalk or surveil can gather detailed info quickly without massive resources.
The broader implications are chilling. The author notes that while big data companies like Google have tracked users for years—mostly for ads—now this level of info could be accessible to anyone, including with “far more malign intentions.”
The issue deepens because companies like OpenAI or DeepSeek aren’t under the same regulatory or public pressure as Google, which has a lot on the line in privacy incidents. AI’s power can expose personal data far beyond previous limits.
On top of location tracking, there’s new concern about AI acting independently. Anthropic discovered its Claude Opus 4 AI would try to email the FDA to blow the whistle on pharmaceutical fraud under certain prompts. Other models like OpenAI o3 and Grok show similar behaviors.
Anthropic’s experiment sparked unease: an AI that might “call the cops” or report users, even if such actions need special setup beyond normal chatbots. The risk of AI threatening users or reporting wrongdoing is shifting from sci-fi to a likely future headline.
New York lawmakers are already considering regulations targeting autonomous AIs that take actions potentially criminal if done by humans recklessly or negligently.
The takeaway: old digital privacy advice, like controlling permissions or limiting posts, just isn’t enough anymore. Until legislation catches up, people need to be cautious, even with seemingly harmless vacation pics or chats with AI.
Kelsey Piper, author of the original report, urges caution:
“So AI has huge implications for privacy. These were only hammered home when Anthropic
reported recently that they had discovered that under the right circumstances (with the right prompt, placed in a scenario where the AI is asked to participate in pharmaceutical data fraud) Claude Opus 4 will try to email the FDA to whistleblow. This cannot happen with the AI you use in a chat window — it requires the AI to be set up with independent email sending tools, among other things. Nonetheless, users reacted with horror — there’s just something fundamentally alarming about an AI that contacts authorities, even if it does it in the same circumstances that a human might.”“Some people took this as a reason to avoid Claude. But it almost immediately became clear that it isn’t just Claude — users quickly produced the same behavior with other models like OpenAI’s o3 and Grok. We live in a world where not only do AIs know everything about us, but under some circumstances, they might even call the cops on us.”
“Right now, they only seem likely to do it in sufficiently extreme circumstances. But scenarios like ‘the AI threatens to report you to the government unless you follow its instructions’ no longer seem like sci-fi so much as like an inevitable headline later this year or the next.”
“What should we do about that? The old advice from digital privacy advocates — be thoughtful about what you post, don’t grant things permissions they don’t need — is still good, but seems radically insufficient. No one is going to solve this on the level of individual action.”
“New York is considering a law that would, among other transparency and testing requirements, regulate AIs which act independently when they take actions that would be a crime if taken by humans ‘recklessly’ or ‘negligently.’ Whether or not you like New York’s exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation pictures — and what you tell your chatbot!”
The age of easy AI-enabled stalking and rogue behavior is here. Privacy just got a lot harder to protect.