Puja Patel, University of Cambridge law graduate, analyses whether the UK’s current anti-discrimination laws are fit for purpose in the wake of AI
Imagine if the popular BBC TV series The Apprentice had a robot instead of Lord Sugar sitting in the boardroom, pointing the finger and saying ‘you’re fired.’ Seems ridiculous, doesn’t it?
Whilst robots may not be the ones to point the finger, more and more important workplace decisions are being made by artificial intelligence (‘AI’) in a process called algorithmic decision-making (‘ADM’). Indeed, 68% of large UK companies had adopted at least one form of AI by January 2022, and as of April 2023, 92% of UK employers aim to increase their use of AI in HR within the next 12-18 months.
Put simply, ADM works as follows: the AI system is fed vast amounts of data (‘training data’), upon which it models its perception of the world by drawing correlations between features in that data and outcomes. These correlations then inform the decisions the algorithm makes.
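For readers who want to see this concretely, here is a minimal sketch of that pipeline. It is purely illustrative: it uses the scikit-learn library, synthetic data and made-up feature names, not any real employer’s system.

```python
# Illustrative ADM pipeline: fit a model to historical hiring outcomes,
# then use the correlations it finds to score a new applicant.
from sklearn.linear_model import LogisticRegression

# Hypothetical 'training data': each row is a past applicant,
# [years_of_experience, attended_target_university (0 or 1)]
X_train = [
    [1, 0], [2, 0], [7, 0], [4, 0],
    [3, 1], [5, 1], [6, 1], [2, 1],
]
# Historical outcomes the model learns from: 1 = hired, 0 = rejected
y_train = [0, 0, 0, 0, 1, 1, 1, 1]

# The system draws correlations between the training data and the outcomes...
model = LogisticRegression().fit(X_train, y_train)

# ...and those correlations then inform decisions about new applicants
new_applicant = [[4, 1]]
print("Recommend hire?", bool(model.predict(new_applicant)[0]))
```

Note that the model has no idea which features actually matter; it simply latches onto whatever correlated with being hired in the past.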
At first glance, this seems like the antithesis of prejudice. Surely a ‘neutral’ algorithm which relies only upon data would not discriminate against individuals?
Sadly, it would. Like an avid football fan who notices that England only scores when they are in the bathroom and subsequently selflessly spends every match on the toilet, ADM frequently conflates correlation with causation. Whilst a human being would recognise that a criterion such as your favourite colour is irrelevant to recruitment, and that a criterion such as your race is discriminatory, an algorithm would not. Therefore, whilst algorithms do not directly discriminate in the same way that a prejudiced human would, they frequently perpetrate indirect discrimination.
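By way of illustration only (synthetic data and a hypothetical proxy feature, not a description of any real employer’s algorithm), the sketch below shows how a model that is never told an applicant’s sex can still disadvantage women where a seemingly neutral feature happens to correlate with it:

```python
# Illustrative indirect (proxy) discrimination with synthetic data.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, captained_womens_sports_team (0 or 1)]
# The second column is a hypothetical proxy: it says nothing about ability,
# but in this made-up data it correlates strongly with being a woman.
X_train = [
    [4, 1], [5, 1], [6, 1], [7, 1],   # applicants who are (mostly) women
    [4, 0], [5, 0], [6, 0], [7, 0],   # applicants who are (mostly) men
]
# Historically biased outcomes: equally experienced women were mostly rejected
y_train = [0, 0, 0, 1, 1, 1, 1, 1]

# The model is never given anyone's sex, yet it learns the proxy...
model = LogisticRegression().fit(X_train, y_train)

# ...so two equally experienced applicants receive different 'hire' scores
print(model.predict_proba([[6, 1]])[0][1])  # applicant with the proxy feature
print(model.predict_proba([[6, 0]])[0][1])  # applicant without it
```

Stripping the protected characteristic out of the data does not remove the bias; the model simply reaches the same result through the proxy, which is precisely the mechanism of indirect discrimination described above.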
Unfortunately, this has already occurred in real life: both Amazon and Uber have famously faced backlash for their allegedly indirectly discriminatory algorithms. According to a Reuters report, members of Amazon’s team disclosed that Amazon’s recruitment algorithm (which has since been removed from Amazon’s recruitment processes) taught itself that male candidates were preferable. The algorithm’s training data, according to the Reuters report, comprised CVs submitted to Amazon over a 10-year period, most of which came from men; accordingly, the algorithm drew a correlation between male CVs and successful candidates and so filtered CVs containing the word ‘women’ out of the recruitment process. The Reuters report states that Amazon did not respond to these claims, other than to say that the tool ‘was never used by Amazon recruiters to evaluate candidates’, although Amazon did not deny that recruiters looked at the algorithm’s recommendations.
Similarly, Uber’s use of Microsoft’s facial recognition algorithm to ID drivers allegedly failed to recognise approximately 20% of darker-skinned female faces and 5% of darker-skinned male faces, according to IWGB union research, resulting in the alleged deactivation of these drivers’ accounts and the beginning of a lawsuit which will unfold in UK courts over the months to come. Microsoft declined to comment on ongoing legal proceedings, whilst Uber says that its algorithm is subject to ‘robust human review’.
Would UK anti-discrimination law protect you?
Section 19 of the Equality Act 2010 (‘EA’) governs indirect discrimination. In simple terms, s.19 EA makes it unlawful for workplaces to implement universal policies which seem neutral but in reality disadvantage a certain protected group.
For example, if a workplace wanted to ban employees from wearing headgear, this would disadvantage Muslim, Jewish and Sikh employees even though the ban applied to everyone. The ban would therefore be indirectly discriminatory and, unless the workplace could show that it was a proportionate means of achieving a legitimate aim, the workplace would be in breach of s.19 EA.
But here’s the catch. The EA only applies to claimants from a ‘protected group’, which is an exhaustive list set out at s.4 EA: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.
The Amazon and Uber claimants fall into the protected categories of ‘sex’ and ‘race’ respectively. Therefore, the EA will protect them – in theory. In reality, it is very difficult to succeed in a claim against AI, as claimants are required by the EA to causally connect the criteria applied by the algorithm with the subsequent disadvantage (e.g. being fired). It is often impossible for claimants to ascertain the exact criteria applied by the algorithm; even in the unlikely event that the employer assists, the employer itself is rarely able to access this information. Indeed, the sheer number of correlations an algorithm draws between vast data sets means that its inner workings are akin to an ‘artificial neural network’, opaque even to those who deploy it. Therefore, even protected-group claimants will struggle to access the EA’s protection in the context of ADM.
Claimants who are discriminated against because they possess intersecting protected characteristics (e.g. for being an Indian woman) are not protected, as claimants must prove that the discrimination occurred because of one protected characteristic alone (e.g. solely because they are Indian, or solely because they are a woman). ‘Intersectional groups’ are therefore insufficiently protected despite being doubly at risk of discrimination.
And what about the people who are randomly and opaquely grouped together by the algorithm? If the algorithm draws a correlation between blonde employees and high performance scores, and subsequently recommends that non-blonde employees are not promoted, how are these non-blonde claimants to be protected? ‘Hair colour’ is not a protected characteristic listed in s.4 EA.
And perhaps most worryingly of all: what about those individuals who do not know they have been discriminated against by targeted advertising? If a company uses AI to advertise a STEM job online, the algorithm is more likely to show the advert to men than to women. A key problem arises: women cannot know about an advert they have never seen. Even if they find out, they are highly unlikely to be able to collect enough data to prove group disadvantage, as required by s.19 EA.
So, ultimately – no, the EA is unlikely to protect you.
Looking to the future
It is therefore evident that specific AI legislation is needed, and fast. Despite this, the UK Government’s AI White Paper confirms that it currently has no intention of enacting AI-specific legislation. This is extremely worrying; the UK Government’s desire to facilitate AI innovation unencumbered by regulation is unspeakably destructive to our fundamental rights. It is to be hoped that, following in the footsteps of the EU AI Act and pursuant to the recommendations of a Private Member’s Bill, Parliament will be inclined at least to adopt a ‘sliding-scale approach’ whereby high-risk uses of AI (e.g. dismissals) attract heavier regulation and low-risk uses of AI (e.g. choosing locations for meetings with clients) attract lighter regulation. This approach would safeguard fundamental rights without sacrificing AI innovation.
Puja Patel is a law graduate from the University of Cambridge and has completed her LPC LLM. She is soon to start a training contract at Penningtons Manches Cooper’s London office.