250 students head to Clifford Chance’s Canary Wharf office to find out
As the GDPR mega-wave settles, technological development is back on the agenda for law firms and their clients. Sporting their best suits, 250 students arrived at Clifford Chance’s London office earlier this month to explore the impact artificial intelligence (AI) is having on the legal profession, at Legal Cheek’s first student event of the autumn.
When you think of a tech company, you might picture one of the big five — Apple, Facebook, Amazon, Alphabet or good ol’ Microsoft. But there are new kids appearing on the block. With their share prices in danger of lagging behind those of the tech glitterati, non-tech companies are starting to undergo a digital makeover.
We are entering a new world where Deutsche Bank is no longer just a bank — “every discussion we have now is about technology”. Where JP Morgan no longer identifies as a financial institution — “we are a technology company”. And where Jaguar no longer think they’re a car company — “we are a technology company that makes cars”, they remind us.
Identity crisis aside, “this is what a lot of our clients and businesses are thinking”, said Clifford Chance associate Jamie Andrew, a technology, media & telecommunications (TMT) specialist and member of the firm’s global, cross-practice Tech Group. Technological transformation is “relevant to every client and every sector”, he added. This commercial evolution, driven by AI, data analytics and machine learning, will change the way future lawyers work. Boardrooms are “obsessed with technology opportunity and risk” and clients are increasingly approaching Clifford Chance for guidance.
“Fundamentally, AI is great”, said Andrew, “it’s cheaper, quicker, better, less troublesome… and you might get better results since AI directors don’t care about bonuses”. But regulating AI raises questions for which there are no textbook answers. In effect, AI means “delegating control to autonomous entities to impact the world”, explained Andrew. Can we create ‘robotic liability’ for when things go wrong? AI ‘directors’ have no legal personality and, unlike companies, they cannot be deterred by fines or criminal prosecution, so there is no disincentive against causing harm.
Jamie Andrew, Clifford Chance associate, on how to establish liability for AI behaviour
Technology is always ahead of the law, and regulation is trying to catch up. A House of Lords select committee has pressed for international agreement on an ethical framework for tech development. The difficulty is a dual mandate: mitigating risk while enabling innovation. “We don’t want to be constrained by excessive regulation, which doesn’t let us innovate and allows competitors in the US and China to overtake us… It’s a real international concern — we don’t want to be left behind”, explained Andrew.
But what is AI? At its core, “AI is software that is capable of writing its own software”, explained Leigh Smith, a senior associate in the megafirm’s intellectual property (IP) team. Smith gave the example of AlphaGo, Google DeepMind’s program that mastered the ancient Chinese board game Go — “we’re not talking about Google writing a piece of software that knows how to play Go, we’re talking about Google writing a piece of software, that writes a piece of software, that learns how to play Go. You end up with a separate, independent work”, he said.
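To make Smith’s definition concrete, here is a toy Python sketch of our own, at nothing like AlphaGo’s scale and with no connection to DeepMind’s methods. The programmer writes only a learning procedure; the decision rule itself, the weights, is worked out by the program from examples.

```python
import random

# Toy illustration: the human writes the *learning procedure* only.
# The finished decision rule (the weights) is produced by the program,
# which is the "separate, independent work" Smith describes.

# Training data: inputs and the desired output of logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0  # parameters the program sets for itself

for _ in range(200):
    (a, b), target = random.choice(examples)
    prediction = 1 if w1 * a + w2 * b + bias > 0 else 0
    error = target - prediction  # -1, 0 or +1
    # Perceptron update: nudge the weights towards the right answer.
    w1 += error * a
    w2 += error * b
    bias += error

# No human ever wrote the rule being applied here.
for (a, b), target in examples:
    output = 1 if w1 * a + w2 * b + bias > 0 else 0
    print(f"{a} AND {b} -> {output} (expected {target})")
```

Scale the same idea up by many orders of magnitude, swap the four worked examples for millions of self-played games, and you have the rough shape of AlphaGo.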
This raises new questions for IP lawyers. Provided it meets the threshold of originality, software is protected by copyright as a literary work. But as AI makes computer-generated work more independent, its author becomes trickier to identify — is it the software developer, or the user? And since software has no legal personality, might anything created by AI fall outside patent protection altogether?
Who owns AI-generated software?
There’s no explicit requirement in the Patents Act 1977 that the inventor of a patent must be a human, Smith explained. But often we don’t know how AI arrives at its decisions, and if you can’t describe how an invention works, you’re not entitled to a patent for it. These are big questions for the future.
For now, the City giant is heavily investing in AI-driven technologies. “AI is a way in which we can change the way we deliver legal advice”, said the firm’s director of continuous improvement Tom Slate. He explained:
“First things first, why do we have this thing called AI? Clifford Chance have an improvement strategy that we call ‘Best Delivery’, which is about making sure we deliver an outstanding client service on every matter, every time… that is really important for a firm like ours and frankly any firm that rivals us. Because if we don’t continue to improve the way we deliver our services and make them better for our clients and better for our people… then we will go out of business.”
Tom Slate says AI will change how legal advice is given
That’s unlikely to happen any time soon — not now that Kira has joined the magic circle firm’s team. Kira, software that uses AI to search and analyse the text of contracts across vast volumes of documents, is an example of the kind of tool lawyers use at Clifford Chance. Slate demonstrated how Kira works, diving into thousands of documents to extract values and identify clauses.
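Kira’s trained models are proprietary and far more sophisticated than anything that fits on a page, but a simplified Python sketch gives a flavour of what clause extraction involves. The patterns and the sample contract text below are invented for illustration; real systems learn to recognise clauses from labelled examples rather than relying on hand-written rules like these.

```python
import re

# Invented patterns, for illustration only; production tools learn
# clause recognition from labelled training documents.
CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws? of ([A-Z][\w ]+)", re.I),
    "notice_period_days": re.compile(r"(\d+)\s+days'? (?:written )?notice", re.I),
}

def extract_clauses(text):
    """Return the clause values found in one contract's text."""
    found = {}
    for name, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(text)
        if match:
            found[name] = match.group(1)
    return found

sample = ("This Agreement shall be governed by the laws of England and "
          "Wales. Either party may terminate on 30 days' written notice.")
print(extract_clauses(sample))
# {'governing_law': 'England and Wales', 'notice_period_days': '30'}
```

Run even a crude version of this over thousands of documents and it beats paging through them by hand; the production-grade equivalent is what saves the review hours described next.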
A litigator in the firm’s Amsterdam office was given less than a week to review 10,000 documents. As if by the wave of a magic wand, the AI technology retrieved the subset of documents relevant to his case, saving him hours of review time. Use of Kira creates substantial cost savings for clients as well as the firm, and lets Clifford Chancers spend less time flicking through reams of paper and more time on value-added work.
So AI will shrink the pile of documents that lawyers review by hand. As the number of documents in circulation keeps climbing, the cost of reviewing them manually climbs with it, explained Slate, whereas with AI each additional document adds only a marginal cost. For this reason, clients will press for greater use of AI as they look for better value for money.
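The economics are easy to sketch with made-up numbers. These are illustrative assumptions, not the firm’s actual figures: manual review pays fee-earner time on every document, while AI-assisted review is largely a fixed platform cost plus a small per-document charge.

```python
# Illustrative assumptions only: £50 of fee-earner time per document
# reviewed manually, versus a £20,000 platform cost plus £0.50 per
# document when the review is AI-assisted.
def manual_cost(docs, rate_per_doc=50.0):
    return docs * rate_per_doc

def ai_assisted_cost(docs, platform_fee=20_000.0, rate_per_doc=0.50):
    return platform_fee + docs * rate_per_doc

for docs in (1_000, 10_000, 100_000):
    print(f"{docs:>7} docs: manual £{manual_cost(docs):>9,.0f}, "
          f"AI-assisted £{ai_assisted_cost(docs):>9,.0f}")
```

On these numbers the fixed fee is recouped after a few hundred documents, and the gap only widens as volumes grow; hence the client pressure Slate predicts.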
A new wave of AI is on the horizon and “people are working on it now”, said Slate. AI is expected to keep evolving, learning to understand natural language and to interpret what is actually in the content of legal documents. Slate said this is “terrifying for lawyers because it effectively means a machine will start to practise law itself”.
AI’s impact on the workforce could be profound in low-skill industries. You may have seen ‘Flippy’, the AI burger flipper. Brought to the world in California, Flippy’s job was to flip burgers on the grill; his human colleagues couldn’t keep up with his perfectly done, medium-rare patties. Amazon’s warehouses are just as automated, and we can begin to see why AI is so attractive for employers: “Robots are stronger, more accurate, they don’t need to sleep, there’s no minimum wage requirement, and there’s no risk of them forming a trade union,” said TMT associate Midori Takenaka.
Clifford Chance associate Midori Takenaka on the ethical debate surrounding the use of AI
As Terminator’s face loomed over the auditorium from the projector, Takenaka reminded us that AI comes at a social cost. You might think killer robots are unrealistic, safely limited to the realms of sci-fi, but Takenaka suggested it’s “not unrealistic that we have killer robots — we already have those — it’s completely unrealistic that humans can put up any resistance” to them.
Use of AI tech in automated weaponry is just one of many difficult areas. Earlier this year, 3,000 Google employees signed an open letter protesting against the use of Google’s AI tech in Pentagon defence missions and automated drone strikes. Takenaka stressed that AI as a whole “throws up some really sticky, thorny questions that we as a society are having to grapple with now, as we think about how we can successfully integrate AI into our everyday lives in an ethical and moral way”.
This isn’t “wishy-washy stuff,” she added, “we already see questions from clients about this — recently we advised a client and wrote an AI and ethics framework for them, which is something they’re rolling out in their business to inform how they use AI on a day-to-day basis”.
Whether AI turns out to be “the best, or worst thing, ever to happen to humanity”, as Stephen Hawking imagined, there’s no doubt that it will have a substantial impact on millennial trainees, just as the advent of email was transformative for the generations of lawyers before them. As Smith says:
“The biggest impact it’ll have on you is the shape of your training, because the way you’re going to learn to be a lawyer, particularly in a firm like this, is that you’ll be reading a lot of contracts… a lot of contracts. And eventually someone might let you have a go at drafting one of those contracts. Now if an AI system is going to take that first stage for you, the way you learn how to produce those agreements and how you interact with clients will change.”
One way to get ahead now is to “be curious, ask questions, select information that’s out there, and be proactive”, advised Andrew. Break down what AI is and “get exposure”, said Smith, “you don’t need to code, but be able to have a semi-intelligent conversation with a coder”. Takenaka’s advice: “Don’t be scared to be a nerd and you will go far”.
About Legal Cheek Careers posts.