AI must uphold the rule of law, campaigners urge

By Lydia Fontes

Developers should ‘act responsibly’

The campaign group JUSTICE has proposed the first “rights-based framework” to guide Artificial Intelligence (AI) use across the justice system, arguing that AI users and developers should be obliged to “act responsibly”.

The report, entitled ‘AI in our justice system’, asserts that “attempts to improve the system through reforms and innovations should have the core tenets of the rule of law and human rights embedded in their strategy, policy, design and development.” To achieve this, JUSTICE puts forward two requirements.

The first requires AI developers to be “goal-led”, ensuring their innovations are “targeted at genuine use cases which can help deliver better outcomes”. AI tools should be developed with the justice system’s “core goals” in mind, those being “equal and effective access to justice, fair and lawful decision-making and openness to scrutiny.”

The second requirement is the “duty to act responsibly”. This would oblige “all those involved in the deployment of AI within the justice system” to “ensure that the core features of the rule of law and human rights are embedded at each stage.”

The report covers the benefits that AI tools could bring to the justice system, including easing the workload of “overburdened” courts, giving decision-makers access to “data-derived insights”, helping the police investigate online criminal activity, and “combating bias”.

However, JUSTICE warns against “over-reliance” on AI systems, claiming that treating AI-generated results as “fully accurate and certain” can lead to “adverse outcomes”. This follows the news that the Ministry of Justice is reconsidering its approach to computer evidence in the criminal justice system in response to the Post Office Inquiry, which revealed that errors in the Horizon IT system led to around 900 wrongful prosecutions of subpostmasters.

Sophia Adams Bhatti, report co-author and Chair of JUSTICE’s AI programme, acknowledged AI’s potential to solve some of the justice system’s issues. However, she said the technology “equally has the potential, as we have already seen, to cause significant harms”. She recommends that the justice system approach AI opportunities “with clear expectations of what good looks like, what outcomes we are seeking, the risks we are willing to take as society, and the red lines we want to put in place.”
