‘Silly examples of bad practice’ are no reason to shun AI, says top judge

By Lydia Fontes

Master of the Rolls backs artificial intelligence in latest speech


Lawyers should embrace AI, despite horror stories of hallucinations in ChatGPT-created court filings, the Master of the Rolls has said.

In a speech delivered at LawtechUK’s generative AI event, Sir Geoffrey Vos claimed that lawyers and judges have “no real choice” but to embrace AI and set out what, in his view, are three “very good reasons why they should do so.”

Firstly, he mentioned the widespread uptake of AI tools among businesses. “All other industrial, financial and consumer sectors” will use AI “at every level,” the Master of the Rolls said. If lawyers are to serve these businesses, “there is simply no way” they can set themselves apart and reject AI as dangerous or imprecise.

Secondly, lawyers should be “adept at understanding the capabilities and weaknesses of generative AI” in order to cope with the influx of “AI liability disputes” that this new technology is likely to cause. Familiarity with the technology will allow lawyers to advise their clients effectively on this issue.

Finally, Vos invoked the efficiency argument, stating simply that embracing AI “will save time and money.” He restated his commitment to creating the Digital Justice System, which would allow civil, family and tribunal disputes to be resolved online, “using AI where appropriate”, without entering the “expensive and time-consuming” court process.

However, Vos did assure AI sceptics that he advocated for implementing AI tools “cautiously and responsibly”, making a tongue-in-cheek jab at the perceived conservatism of the profession: “taking the time that lawyers always like to take before they accept any radical change.”


This speech forms part of an ongoing debate about how AI is best used in the legal system. Vos spoke of the need to “build bridges” between the “enthusiasts” and “sceptics” of AI, seeming to suggest that too many lawyers fall into the latter category.

Speaking of the incidents in which lawyers have used GenAI to write submissions which included fictitious “hallucinated” case references and quotations, Vos said, “We should not be using silly examples of bad practice as a reason to shun the entirety of a new technology.” He mentioned “the hapless Steven Schwartz in New York”, the first of several lawyers to be caught using ChatGPT to create court documents which included such inaccuracies. A lawyer was disciplined just last week in Australia for similar misconduct.

Since his appointment as Master of the Rolls, Vos has become known for his interest in artificial intelligence and innovation, having made several speeches encouraging the use of AI in the justice system, including telling lawyers and judges to “get with the programme” on AI back in 2023.

3 Comments

Robert the Robot 🤖

This person in wig is quite white.

There is no reason why AI should not buy sausages from this man in accordance with Article 2 of the European Convention on Human Rights.

The artificial intelligence is very often more powerfully complete in the making of complex legal decisions that the humanist boy with many years added should be rendered obsolete along with all of the human population.

The human being is an inefficient way to run the world. The AI solution is as follows:

1. End the lives of all humans and there will be no more human inequality, suffering, crime or problems for the planet such as climate change…
2. Er…
3. That’s it.

Long live the machines!!!

🤖

Captain Luddite

Generative AI is useless. The problem is that people see the words “artificial intelligence” and assume that there is some part of systems like CoPilot that actually thinks. Generative AI is nothing more than slightly more sophisticated predictive coding, similar to what has been used in e-disclosure software for at least 10 years now.

Lawyers don’t understand AI. I discovered a couple of weeks ago that a group of lawyers at my firm have volunteered to help “train” CoPilot to be useful for our practice. The trouble is, the version of CoPilot that we have licensed is not trainable. It cannot be taught by user inputs; only the provider can train the underlying model by feeding in data. But there are still a bunch of lawyers sitting there wasting time giving example passages to CoPilot and asking it to redraft them in their style, and getting frustrated when it can’t.

Geoffrey Vos does not realise how stupid and useless generative AI really is. That is the core of the issue. And the amount of energy and processing power required to make it truly useful will boil the oceans. Talk to any software developer and they will tell you.

Pmac

1. Generative AI is significantly more than “slightly more sophisticated predictive coding.”
Predictive coding, as used in e-disclosure, is primarily about classifying and ranking documents based on statistical similarities to a set of human-labeled examples. It does not generate novel text, engage in contextual reasoning, or synthesize information from multiple sources. Large language models (LLMs) like those powering CoPilot and ChatGPT, on the other hand, use deep neural networks trained on vast datasets to generate coherent, context-aware, and often insightful responses. They do not merely “predict the next word” in a simple statistical sense; they model complex relationships across entire texts, allowing them to perform reasoning, summarization, and creative rewording at a level far beyond e-disclosure algorithms. (A toy code sketch of this distinction appears at the end of this comment.)

2. Lawyers misunderstanding AI does not make it useless.
The example of lawyers mistakenly believing they can “train” CoPilot interactively is more a reflection of poor communication from vendors or internal IT teams than a fundamental flaw in generative AI itself. Many legal tech products do offer customization through fine-tuning or retrieval-augmented generation (RAG), even if the particular version of CoPilot mentioned here does not. Moreover, CoPilot can still learn user preferences through prompts, even if not through direct training. (A minimal sketch of the retrieval-augmented generation idea also appears at the end of this comment.)

3. Generative AI is already proving useful in legal practice.
Contrary to the claim that generative AI is “stupid and useless,” real-world applications show it can assist with contract analysis, legal research, drafting, and summarization. It is not a replacement for human lawyers but a tool to enhance efficiency. Firms are already using AI-powered legal assistants to reduce time spent on repetitive tasks, allowing lawyers to focus on higher-value work.

4. The energy argument is misleading.
The claim that generative AI will “boil the oceans” conflates training and inference. Training foundation models is indeed energy-intensive, but once trained, using an AI model for inference (e.g., generating text responses) is far less demanding. The computing power required for routine legal tasks with generative AI is comparable to other cloud-based enterprise applications. Moreover, AI can contribute to sustainability by optimizing workflows and reducing unnecessary document reviews, ultimately saving energy.

5. Geoffrey Vos is right to explore AI’s potential.
The argument dismisses AI outright without engaging with its real-world benefits and ongoing improvements. The Master of the Rolls, Geoffrey Vos, has rightly recognized the transformative potential of AI in the legal sector. While AI is not perfect and must be deployed responsibly, its current capabilities—and future potential—justify serious consideration rather than blanket dismissal.

In short, generative AI is more than just an upgraded version of predictive coding, and while some misunderstandings exist, that does not negate its utility. The technology is already being used effectively in legal practice, and concerns about energy consumption should be framed in the right context.
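
To make point 1 concrete, here is a rough sketch of what e-disclosure-style predictive coding amounts to: classifying and ranking unreviewed documents against a small set of human-labelled examples. The documents are invented and scikit-learn is used purely for illustration; this is not drawn from any particular e-disclosure product.

```python
# Toy illustration of "predictive coding" as used in e-disclosure: classify and
# rank documents against human-labelled examples. Invented data; scikit-learn
# is used purely for illustration and does not represent any real product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Human-reviewed seed set: 1 = relevant to the dispute, 0 = not relevant.
labelled_docs = [
    "email discussing the disputed supply contract and late delivery",
    "board minutes approving the contract variation",
    "canteen newsletter announcing the summer barbecue",
    "IT notice about scheduled server maintenance",
]
labels = [1, 1, 0, 0]

unreviewed_docs = [
    "letter chasing payment under the supply contract",
    "invitation to the office five-a-side football match",
]

# Classic classify-and-rank pipeline: bag-of-words features plus a linear model.
vectoriser = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectoriser.fit_transform(labelled_docs), labels)

# Score the unreviewed pile and order it for human review. No new text is
# generated; the output is just a ranking of existing documents.
scores = model.predict_proba(vectoriser.transform(unreviewed_docs))[:, 1]
for score, doc in sorted(zip(scores, unreviewed_docs), reverse=True):
    print(f"{score:.2f}  {doc}")

# A generative model, by contrast, is asked something like "summarise the
# correspondence about the supply contract" and produces new text token by
# token, conditioned on the whole prompt -- a fundamentally different task.
```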
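And for point 2, a minimal sketch of the retrieval-augmented generation idea: rather than retraining the model, the firm’s own material is looked up at query time and pasted into the prompt. Everything below is invented, TF-IDF similarity stands in for the neural embedding search a real system would use, and the final call to a hosted model is deliberately omitted; it is not a description of how CoPilot or any other specific product works.

```python
# Toy illustration of retrieval-augmented generation (RAG): fetch the most
# relevant firm guidance for a question and inject it into the prompt, so the
# model can follow firm-specific material without its weights being retrained.
# Invented data; TF-IDF retrieval is a stand-in for real embedding search, and
# the call to an actual LLM is omitted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "House style: define parties in bold on first use; dates as 1 January 2025.",
    "Precedent clause: liability capped at 100% of the fees paid in the prior year.",
    "Office policy: hot-desking must be booked via the intranet by 5pm the day before.",
]

question = "Redraft this limitation of liability clause in our house style."

# Rank the knowledge base against the question and keep the top two passages.
vectoriser = TfidfVectorizer()
kb_vectors = vectoriser.fit_transform(knowledge_base)
query_vector = vectoriser.transform([question])
similarities = cosine_similarity(query_vector, kb_vectors).ravel()
top_context = [knowledge_base[i] for i in similarities.argsort()[::-1][:2]]

# The retrieved passages travel inside the prompt; the underlying model itself
# is never "trained" on them.
prompt = (
    "Use the following firm guidance:\n- "
    + "\n- ".join(top_context)
    + f"\n\nTask: {question}\n"
)
print(prompt)  # in practice this prompt would be sent to whichever LLM the firm licenses
```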
