AI in law: Embracing change or facing extinction?


By James Southwick

James Southwick, bar course graduate, analyses how we can expect the legal profession to change with developments in legal technology, and what this means for the future

In one of the oldest professions in the world, built upon hundreds of years of tradition, change can be a slow process. But in the 21st century, when change is more rapid than ever, the legal profession needs to catch up.

Current reasons for a lack of change vary; for those charging on an hourly basis, using technology that reduces those hours is hardly an incentive! For others with well-established processes, the effort of adopting a new system is just not worth it. But legal technology (also known as legaltech, or sometimes lawtech) is constantly changing, and staying ahead means understanding what is available now, what is coming next, and what to expect in the future.

In today’s environment, legal technology can perform a host of tasks only dreamt of by professionals in the past. Thomson Reuters, just one company providing these services, offers legal research platforms with comprehensive collections of the law, drafting technology, evidence management and due diligence tools, to name but a few. Using this technology in the workplace can result in greater productivity for individuals, improved client service and optimised workflow. But what is the next step for the legal profession? Following its recent boom, everyone is looking towards artificial intelligence (AI).

What is AI?

AI is a term originally coined in 1955 by John McCarthy, a pioneer in the AI field. At its core, AI is technology that allows computers and machines to simulate human intelligence and problem-solving capabilities. Since 1955, firms and individuals have developed AI to work more efficiently, and its use in everyday life may go unnoticed: digital assistants on phones, GPS, self-driving cars and even Roombas (autonomous robotic vacuums) all use AI to perform their tasks.

Today, the focus is on a new type of AI: generative AI. Generative AI differs from the AI found in applications like Siri, as it creates content rather than being coded for a specific purpose. The most commonly known generative AI at the moment is OpenAI’s ChatGPT. The functions of ChatGPT are seemingly endless, from solving maths problems step-by-step and giving relationship advice to writing original poetry and helping people prepare for job interviews. Most importantly, AI can learn. Taking ChatGPT as an example: using publicly available information from the internet, information licensed from third parties and information provided by human trainers, it learns associations between words and, through reinforcement learning from human feedback, refines how it generates human-like responses. So as AI is used more, its application and quality will only improve. It is already seeing uses within the legal profession.

AI and the legal profession

Legal technology is already adopting AI to improve the effectiveness of its services. Thomson Reuters’ CoCounsel utilises generative AI which can analyse documents, summarise data and answer legal questions. Meanwhile, members of the judiciary have adopted it into their practice, most notably Lord Justice Birss, who confirmed he used ChatGPT to provide a summary of an area of law which was used in his final judgment.

Like legal technology, using AI in practice can mean greater efficiency, improved client service and optimised workflow. But there are also a number of risks which must be carefully considered. With generative AI, there is a risk of prompts returning fabricated information. This has already been seen in New York in the case of Mata v Avianca 22-cv-1461 (PKC), where two lawyers were fined for citing six cases that were generated by ChatGPT. In the UK, in Felicity Harber v The Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC), a litigant in person sought to rely on nine cases ‘hallucinated’ by ChatGPT, which included American spelling and repeated identical phrases. Both of these cases ultimately resulted in wasted court time and costs, emphasising the importance of the user taking full responsibility for validating the accuracy of any AI-generated material they seek to rely on. This is especially true given that AI models such as ChatGPT draw much of their training material from US data and cannot reliably distinguish UK case law.


AI has also seen uses in ‘deepfake’ technology: digitally manipulated media which can create or alter files to look or sound like another person. This includes photographs, videos or audio clips and its use has already been seen in UK courts. For example, in 2019 a mother produced an audio file of the father making threats towards her, but upon further analysis, it was discovered that the file had been manipulated to include words not used by the father. It is a stark reminder that, where possible, legal professionals should seek the original files and analyse metadata to determine who has accessed the file and if any changes have been made.
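To make the metadata point concrete, the kind of check described above can be sketched in a few lines of Python. This is a purely illustrative example (the file path and function name are invented for the sketch): it records a file’s size, last-modified timestamp and a SHA-256 content hash, so that a disclosed copy can be compared against a known original. Any difference in the hash proves the content has changed, even if the file looks or sounds plausible.

```python
import hashlib
import os
from datetime import datetime, timezone

def file_fingerprint(path):
    """Record basic metadata and a content hash for an evidence file.

    Comparing these values between a disclosed copy and a known
    original reveals whether the file has been altered: any change
    to the content changes the SHA-256 digest.
    """
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "size_bytes": stat.st_size,
        "modified_utc": datetime.fromtimestamp(
            stat.st_mtime, tz=timezone.utc
        ).isoformat(),
        "sha256": digest,
    }
```

In practice a forensic examiner would go much further (codec analysis, edit-detection, chain of custody), but even this simple comparison would have flagged the manipulated audio file: the tampered copy’s hash could never match the original’s.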

Understanding the benefits and risks of AI is imperative to guarantee its proper application in practice, thereby ensuring that AI is used as an assistive tool, rather than a replacement for the legal professional.

The future of AI

So where does the future of AI lie in the legal profession? The reality is that AI has already taken more jobs than it has created. It is predicted that in the next 20 years around 114,000 legal jobs will be automated, impacted by continuously improving AI which can provide results in less time and with tangible cost benefits. Clients will expect a reduction in costs too. As automation increases, billable hours will decrease. It is very likely that in the future the profession will see a move away from billable hours towards fixed fees.

One developing area utilising AI that is likely to have a huge impact is predictive legal technology, or computational law. Using AI, large bodies of data can be analysed to identify patterns of activity which are predictive of future behaviour. This extends to objective analysis of judicial proceedings to predict legal outcomes. In 2014 an algorithm was used to predict the outcomes of historical US Supreme Court verdicts, using only data available before the decisions. The programme predicted over 69% of decisions correctly, higher than expert panel prediction rates. Development of this technology may result in the ability to determine the success rate for prospective litigants, but its implementation may negatively impact those with an apparent ‘low success potential’, with the decision to take instructions decided purely by a computer programme.
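At its simplest, this style of prediction is a classification problem: learn from past cases with known outcomes, then estimate the probability of success for a new one. The sketch below is a deliberately toy illustration, not the 2014 study’s method: the features (claim strength, documentary evidence, prior similar wins) and data are entirely invented, and the model is a basic logistic regression trained by gradient descent in plain Python.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights and bias by gradient descent on log-loss."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of success
            err = p - yi                      # gradient of log-loss w.r.t. z
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that the claimant succeeds, given case features x."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical past cases, each scored 0-1 on:
# [claim strength, documentary evidence, prior similar wins]
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6], [0.2, 0.1, 0.3],
     [0.3, 0.2, 0.1], [0.7, 0.6, 0.8], [0.1, 0.3, 0.2]]
y = [1, 1, 0, 0, 1, 0]  # 1 = claimant succeeded

w, b = train_logistic(X, y)
print(f"Predicted success probability: {predict(w, b, [0.85, 0.7, 0.75]):.2f}")
```

Even this trivial model illustrates the concern raised above: a prospective client whose features resemble past losses gets a low score, and if intake decisions were delegated to the score alone, that client might never be taken on.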

However, adoption of AI may take longer than expected. In 2021, a report by the University of Oxford for the Solicitors Regulation Authority (SRA) found that 32.8% of 891 firms reported that they were not using legal technology and had no plans to use it. When asked about the use of data analytics with AI, 84.6% said they were not planning to use it. Firms that are slow to adopt technology in their practice may lose out to competitors that can offer faster and cheaper legal services without compromising on quality.

Understanding the future means understanding how AI is going to change the profession and adapting to change, rather than resisting it. But this shouldn’t be blind acceptance. Careful consideration should be given to the benefits and risks of using AI in practice. Used correctly, it can be a tool to build and improve upon a high-quality practice that benefits both the professional and the client.

James Southwick is a bar course graduate, currently seeking pupillage in the criminal field. He has a strong interest in developing areas of law such as blockchain and cybersecurity.
