Unfamiliar case names and US spellings among key giveaways
Judicial guidance on the use of AI in courts was updated this week, setting out common pitfalls and recommended practices for the tools available, along with a glossary of key terms.
The new release updates guidance from 2023 and applies to all judicial office holders and their support staff – court clerks, legal assistants, Court of Appeal judges, and more. It was published online to promote “open justice and public confidence”.
The document outlines key signs that a party may have used AI: cases that “do not sound familiar” or include “unfamiliar citations (sometimes from the US)”; parties “citing different bodies of case law” on the same issues; and submissions that use American spelling, reference overseas cases, or “do not accord” with judges’ understanding of the law.
Perhaps most interesting is the final indicator: “content that (superficially at least) appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors.”
Nevertheless, the guidance notes there is no reason AI couldn’t be a “potentially useful” tool, and judges won’t be required to disclose if they’ve used it. Depending on the context, lawyers may not need to either, “provided AI is used responsibly.” However, “it may be necessary… that lawyers are reminded of their ‘obligations’” and confirm they have independently verified the accuracy of any material generated with AI assistance.
The reference to litigants-in-person (LiPs) is slightly different:
“[AI] may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this, ask what checks for accuracy have been undertaken (if any), and inform the litigant that they are responsible for what they put to the court/tribunal.”
“Fake material” is also discussed: “Judges should be aware of this new possibility and potential challenges posed by deepfake technology”, as well as fabrications that may be unintentional (“hallucinations”), such as citations or quotes from “fictitious” cases, legislation, or legal texts.
Another section advises against using AI for legal research and analysis, noting that it often bases its legal “view” on US law. “Anything you type into it could become publicly known”, the guidance warns, recommending that chat histories be turned off and that AI apps be denied permissions on mobile devices. If private and confidential information is uploaded, judicial office holders are to treat it as a data breach.
Separately, the guidance reveals that Microsoft’s AI tool, Copilot, is now available on judges’ computers. While the guidance does not explicitly encourage its use, it does describe the tool as “secure”. Sir Geoffrey Vos, Master of the Rolls, has spoken positively about AI in the past, despite some “silly” examples of bad practice.
The guidance also includes a glossary to help judges and staff navigate the techy jargon.
It comes amidst a rise in AI use by both lawyers and LiPs. On the lawyers’ side, examples range from Shoosmiths’ £1 million bonus pot for using AI to disbarred US lawyers. As for LiPs, one barrister has warned of the risks, while a US pro se litigant went so far as to use an AI-generated avatar as counsel.