Another lawyer faces ChatGPT trouble


By Lydia Fontes


Documents referenced ‘nonexistent’ cases

An Australian lawyer has been referred to a legal complaints commission after he admitted to using ChatGPT to create court filings.

The lawyer, whose name has not been made public, filed documents in an immigration case containing citations to cases that were “nonexistent”, Justice Rania Skaros said in a ruling on Friday. The lawyer has been referred to the Office of the NSW Legal Services Commissioner (OLSC) for consideration.

He has admitted to using ChatGPT to prepare a summary of cases, citing time constraints and health issues as his reasons for doing so, The Guardian reports. The AI chatbot “hallucinated” entirely fabricated cases and quotes, which the lawyer incorporated into his submissions without verifying them.

“He accessed the site known as ChatGPT, inserted some words and the site prepared a summary of cases for him,” the judgment reads. “He said the summary read well, so he incorporated the authorities and references into his submissions without checking the details.”

“The court expressed its concern about the [lawyer]’s conduct and his failure to check the accuracy of what had been filed with the court, noting that a considerable amount of time had been spent by the court and my associates checking the citations and attempting to find the purported authorities,” Skaros said.

Counsel for the immigration minister argued that misuse of generative AI in this way should be “nipped in the bud”.

This is not the first time that lawyers around the world have fallen foul of AI “hallucinations”. A New York personal injury case made headlines back in 2023 after it was discovered that court submissions contained fictional cases which had been generated by ChatGPT.

Incidents like these prompted the Bar Council of England and Wales to release guidance on the use of AI tools last year, warning barristers of “hallucinations” and other risks, while acknowledging that there was “nothing inherently improper” about the use of this technology for legal services. Guidance has also been issued to both solicitors and judges.

4 Comments

Are we human

It makes sense to generate a raft of fake articles on dummy web pages in the areas of practice one works in. It’s not hard to make them accessible to AI scrapers and no-one else. With fake cases in there and nonsense analysis, the AI models get poisoned and human capital is protected. Lots of industries have started AI poisoning strategies for this reason already.

Gold

ChatGPT is not human, and everything the AI chatbot produces needs checking before it is submitted. ChatGPT is only there to aid us, not to do our entire work for us, which is why research is still a valid way of gaining in-depth knowledge.

Muna

I’m very certain that DeepSeek would have done a better job.

Trust me

You just need to say to ChatGPT “please draft this document but only use real cases”

