The High Court of England and Wales is cracking down on lawyers misusing AI in legal research.
Dame Victoria Sharp, President of the King's Bench Division, ruled that generative AI tools like ChatGPT are “not capable of conducting reliable legal research.”
She warned that these AI responses “may turn out to be entirely incorrect” and “make confident assertions that are simply untrue.”
Lawyers can still use AI, but they must verify accuracy with authoritative sources before relying on it professionally.
The issue came to a head after two cases exposed fake legal citations: one filing included 18 nonexistent cases among its 45 citations, while another cited five cases that did not exist.
Judge Sharp warned that instances of lawyers citing AI-generated falsehoods have become too common. She is referring the ruling to the Bar Council and the Law Society so they can tighten their guidance.
Lawyers who ignore this face serious sanctions, including public admonition, cost penalties, contempt proceedings, or referral to the police.
In her ruling, Judge Sharp stated:
“Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect.”
“The responses may make confident assertions that are simply untrue.”
“Lawyers who do not comply with their professional obligations in this respect risk severe sanction.”
The ruling underscores the growing problems AI is causing in legal practice and sets a demanding standard for vetting AI-assisted research.