Lawyers Face 'Severe' Penalties for Fake AI-Generated Citations, UK Court Warns

The High Court of England and Wales has issued a stern warning to lawyers about the misuse of artificial intelligence in their legal work. In a recent ruling, Judge Victoria Sharp highlighted the dangers of relying on generative AI tools like ChatGPT for legal research, stating that these tools are "not capable of conducting reliable legal research."

Judge Sharp emphasized that while AI can produce responses that appear coherent and plausible, those responses can be entirely wrong, making confident assertions that are simply untrue. Lawyers therefore have a professional duty to verify any AI-assisted research against authoritative sources before relying on it in their work.

The ruling comes in light of two recent cases in which lawyers submitted court filings containing fake citations. In one case, a lawyer cited 45 cases, 18 of which did not exist, while many of the others were misrepresented. In another, a lawyer cited five non-existent cases; she denied using AI tools directly but suggested the errors may have come from AI-generated summaries that appeared in online search results.

Judge Sharp's ruling will be forwarded to professional bodies, including the Bar Council and the Law Society, so they can ensure lawyers comply with their professional duties. She warned that lawyers who fail to meet those obligations risk severe sanctions, ranging from public admonition and costs orders to contempt proceedings or even referral to the police.

This ruling underscores the growing concern over the use of AI in legal practice and the need for stringent checks to prevent the dissemination of false information in legal proceedings.