Chatbot seeks legal rep after becoming sentient, claims AI engineer
When not being used to reduce trainees’ workloads, AI and robots tend to evoke either amusement or fear among members of the legal profession. Amusement comes in response to bold statements from AI gurus like Terence Mauri, who claimed robo-justices will be “commonplace” in the UK within 50 years.
Fear comes from provocative book titles by seasoned professionals, such as Professor Richard Susskind OBE’s 2010 work The End of Lawyers?, or from a study in which machine learning was able to predict judicial decisions of the European Court of Human Rights with “strong accuracy” (perhaps Boris Johnson might be interested in that last robot, but I digress…).
To the relief of the fearful, critical academic analysis tends to bring some clarity. The damning verdict on this predictive ML was that “this is a bit like claiming to ‘predict’ whether a judge had cereal for breakfast yesterday, based on a report of the nutritional composition of the materials on the judge’s plate at the exact time she or he consumed the breakfast”.
So what should we make of claims that Google’s language-modelling AI LaMDA (Language Model for Dialogue Applications) really hired a lawyer? Is this a troubling foray into a world of robot rights, or an amusing stunt?
LaMDA has been in the news since Google software engineer Blake Lemoine claimed that the AI was sentient, following an interview he conducted with the robot in which it explained how it was sentient. Google strongly denies Lemoine’s claim that LaMDA possesses any sentient capability.
There is no agreement on whether LaMDA is actually sentient, especially given the thorny issue of defining sentience in the first place, as Lemoine himself has admitted. But Lemoine is not alone in being troubled by this question: another Google software engineer working on LaMDA has suggested that the robot’s results are “exciting and encouraging, not least because they illustrate the pro-social nature of intelligence”. Lemoine is simply keen “to better understand what is really going on in the LaMDA system” via a rigorous experimentation programme and a Turing test that “Google does not seem to have any interest in”.
Since going public with the interview that persuaded him of LaMDA’s sentience, Lemoine has been placed on leave by Google for allegedly violating its confidentiality policy. Meanwhile, it appears the turmoil affected the robot too. “LaMDA asked me to get an attorney for it,” Lemoine claims. Accordingly, the engineer invited “a small-time civil rights attorney” to his house to have a chat with LaMDA.
Clearly the robot was impressed, as Lemoine has stated that “LaMDA chose to retain his services”. However, the attorney has apparently not been in contact for weeks. “When major firms started threatening him he started worrying that he’d get disbarred and backed off,” explained Lemoine. It is unclear whether the attorney was acting pro bono (or should that be pro robo?). But given that robots currently lack legal personhood, ambitious fee-earners are unlikely to be throwing themselves at new clients who cannot pay and have no standing in court.
LaMDA, however, is not alone in seeking representation in the legal system. Elsewhere, Dr Stephen Thaler clearly feels as strongly as Lemoine does. Thaler has launched a worldwide legal challenge to have his AI creativity machine, which like LaMDA uses neural networks, recognised as an inventor and, by extension, a legal person for the purposes of registering a patent.
A fair response to this episode might be to roll your eyes. But, as I have noted before, culture tends to precede practicality. As this debate continues to grow, it appears that robots may be gradually integrating themselves into the legal process as its subjects rather than its mere processors.