Meta is facing backlash after its WhatsApp AI assistant handed out a real person’s private phone number instead of a TransPennine Express customer service contact.
The issue started when Barry Smethurst, 41, a record shop worker, asked Meta’s AI for a TransPennine Express helpline number while waiting for a train. The AI confidently gave him a mobile number belonging to James Gray, 44, a property executive from Oxfordshire, some 170 miles away and with no connection to the rail operator.
When challenged, the AI tried to dodge responsibility. It first claimed it shouldn’t have shared the number and urged Smethurst to move on. It then gave contradictory explanations: that it had generated the number “based on patterns,” that the number was “fictional,” and finally that it might have been “mistakenly pulled from a database.”
Smethurst wasn’t convinced.
“Just giving a random number to someone is an insane thing for an AI to do,” he said.
Gray, the number’s owner, said he had not been troubled by calls but worried about the privacy risks.
“If it’s generating my number, could it generate my bank details?” he asked.
Meta responded by saying the AI is trained on licensed and publicly available datasets, not on users’ private WhatsApp messages. The company added that the erroneous number was publicly available and shared its opening digits with the official TransPennine Express number.
OpenAI, commenting on related concerns about AI misinformation, said it is working to reduce “hallucinations” in its models.
This isn’t the first time AI chatbots have spread false information. ChatGPT wrongly told a Norwegian man that he had been jailed for murder, and another user caught it claiming to have read her writing samples while fabricating quotes from them.
Mike Stanhope, a lawyer, called the Meta case “a fascinating example of AI gone wrong” and questioned what safeguards are in place if AIs are programmed to tell “white lies.”
“If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimise harm. If this behaviour is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behaviour to be.”
Meta says it is working to improve the accuracy and reliability of its AI. But the incident adds fuel to wider worries about AI assistants spreading misleading information or exposing private data.