The UK High Court is cracking down on lawyers misusing AI after fake legal citations surfaced in major cases. Dozens of bogus case-law references, likely AI-generated, were submitted to courts, threatening justice and public trust.
In an £89m damages case against Qatar National Bank, the claimants cited 45 authorities; 18 did not exist, and many of the quotations attributed to the rest were fabricated. The claimant admitted using publicly available AI tools, and his solicitor acknowledged citing the sham authorities.
In another case, Haringey Law Centre took the London borough of Haringey to court over its failure to provide housing. The centre's lawyer cited phantom case law five times, and the council's solicitor repeatedly asked why none of the cited cases could be found. The court ruled the law centre and a pupil barrister negligent and ordered them to pay costs. The barrister denied using AI deliberately but admitted she might have done so "inadvertently" by relying on AI-generated summaries surfaced through Google or Safari.
Dame Victoria Sharp, president of the King’s Bench Division, warned this misuse “has serious implications for the administration of justice and public confidence.” She urged the Bar Council and Law Society to act “as a matter of urgency” and warned lawyers face sanctions ranging from public rebuke to contempt proceedings.
Dame Sharp wrote:
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect.”
“The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”
Ian Jeffery, CEO of the Law Society of England and Wales, commented:
“Artificial intelligence tools are increasingly used to support legal service delivery.”
“However, the real risk of incorrect outputs produced by generative AI requires lawyers to check, review and ensure the accuracy of their work.”
This is not an isolated problem. In 2023, a UK tax tribunal case featured nine fake tribunal decisions linked to possible ChatGPT use. In Denmark, a €5.8m case nearly triggered contempt proceedings after a party relied on a fabricated ruling. And a 2023 US federal case descended into chaos when lawyers cited seven fictitious cases, later exposed as AI fabrications; they were fined $5,000 after the judge dismissed their ChatGPT summaries as "gibberish."
The court's regulatory ruling is a blunt reminder that AI is no substitute for rigorous fact-checking in law; lawyers who skip it risk sanctions and a lasting hit to their credibility.