Large language models aren’t thinking, experts warn. It’s just us projecting intelligence onto them.
The idea comes from philosopher Luciano Floridi, who calls this “semantic pareidolia.” It’s like seeing faces in clouds or hearing a personality in your GPS: we spot meaning that isn’t really there, because the AI behaves in ways that trick our brains.
In a recent article, Floridi explains that this is a psychological trap shaped by loneliness, market hype, and the uncanny realism of AI.
The model’s responses feel human. But it’s not true wisdom. It’s simulation.
Floridi warns this could slide into “technological idolatry”: a dangerous spiral from use to trust to blind belief.
A tech writer, reflecting on personal experience, frames the shift not as panic but as “growing up” about AI.
I don’t think I was ever wrong to feel a sense of connection with these models. But maybe I was a bit too generous in what I thought they were actually doing. I’ve spent a lot of time describing LLMs as cognitive partners and mirrors. And while those metaphors still have value, I’ve started to ask myself: is it the machine that’s thinking, or is it me that’s emerging out of the hyperdimensional vector space?
Floridi makes a compelling case. He walks through how this tendency to over-assign meaning is being amplified by loneliness, market forces, and the uncanny realism of today’s models. And he warns that it could slide from playful anthropomorphism into something more dangerous: a kind of technological idolatry. It’s a slippery slope as we start to trust, then depend, then believe.
He urges more transparent design and greater cognitive literacy for users, so we know when we’re talking to an AI and when we’re trusting an intelligence.
The takeaway: AI feels smart. But it’s mostly us. The challenge is knowing the difference.