
What lawyers can learn from pilots about using AI

Solicitor Shane Hughes looks at the EU’s Artificial Intelligence Act and how the legal industry can learn from the aviation sector

A lawyer and a pilot walk into a bar. No, not a questionable joke, but an opportunity to learn how firms and clients can overcome the risk of human complacency when using artificial intelligence (AI).

After circling in the skies above Brussels, the EU’s Artificial Intelligence Act was published on 12 July 2024 and entered into force on 1 August 2024. Its arrival, just as the world adopts generative AI, is almost timely. The Act provides a framework to regulate the design, development, and deployment of AI systems in the EU. It applies to anyone selling or using AI systems in the EU, including AI systems located outside the EU if the system’s outputs are used in the EU. The EU’s approach recognises the ease with which AI may be accessed and the sector-agnostic risks it poses.

As lawyers and clients begin using AI privately and professionally, we can all benefit from understanding how an industry with a long history of innovation has successfully managed the human factors associated with automation. The aviation sector, for example, has long been a pioneer in addressing the risks associated with technology and human complacency. Examples of how the sector mitigates such risks include:

1. Continuous training: Pilots are required to engage in regular training and simulations. This keeps their skills sharp and their minds engaged, even when autopilot is activated and doing much of the work.

2. Mandatory checklists: Before every flight departs, pilots and crew follow a strict set of checklists to ensure all systems are functioning correctly and routine tasks are completed. Doing so mitigates the risk of oversight due to monotony or overreliance on technology.

3. Crew resource management: This practice encourages all team members to contribute to decision-making, ensuring that the captain’s and others’ decisions are vetted by another individual. “Crew: cross-check doors for departure” is a phrase many of us know and cherish. It also illustrates how a culture intended to mitigate the risk of human complacency is embedded at all levels of seniority in the aviation industry.

4. System configuration: Pilots announce and cross-check any inputs, mode changes, or other amendments to the aircraft’s systems. This practice ensures both pilots are in the loop, can check the accuracy of the changes, and can confirm the aircraft is operating as expected.

5. Remote monitoring: Suitably qualified engineers on the ground monitor the performance of aircraft systems, enabling potential problems to be identified independently and communicated promptly to pilots in the air.

6. Rest requirements: Regulations mandate rest periods for pilots and crew to prevent fatigue. Fatigue can easily lead to complacency and errors in judgement.

The legal industry and clients alike stand to benefit from adopting similar controls to ensure their personnel remain vigilant when using AI, including generative AI. Examples include:

AI education: While people engage routinely in continuous learning and professional development, training materials should be updated at pace to address the inherent and residual risks of using AI. This is vital to ensure people at all levels of seniority are informed of the risks, prepared to work alongside AI, and armed with the skills to critically assess AI-generated or augmented content.

Standard procedures and checklists: Similar to pre-flight checklists, implementing standard procedures for AI use can help ensure that all necessary precautions are taken and that people maintain a high level of scrutiny to prevent overreliance on AI outputs.

Peer review systems and team-based decision making: Encouraging a culture of peer or team review can help catch potential errors, oversights, or hallucinations in AI-generated or augmented outputs.

System monitoring: Just as aircraft systems are monitored remotely, lawyers’ and their clients’ Risk, Compliance, and Surveillance teams should consider including AI systems in their monitoring and quality assurance programmes to detect inappropriate activities or erroneous outputs promptly.

Wellness initiatives: Just as rest is crucial for pilots and crew, law firms and their clients should consider doubling down on their efforts to encourage well-being. Burnout can lead to a drop in vigilance in all circumstances, including when working with AI. Organisations should take steps to ensure their people are well-rested and alert when interacting with AI.

As AI becomes more prevalent and widely accessible, it is imperative that the legal industry learns from sectors like aviation that have extensive experience in managing the risk of human complacency when interacting with technology. By adopting aviation’s best practices, the legal profession can also look to mitigate the risk of complacency and ensure that AI augments people’s skills and expertise.

Whether it’s a lawyer or a pilot, the key to delivering a successful outcome lies not only in the tools we use, but in how we use them. Few of us know how to fly an aircraft, yet we board flights eagerly and regard them as one of the safest forms of transport. Why? Likely a combination of robust regulation and a strong safety record. While the EU AI Act has landed, humanity’s next challenge lies in determining how to mitigate the risk of human complacency when so many people have access to such advanced technology. Lawyers: arm pens and cross-check. The seat belt sign is on, but are you ready for take-off?

Shane Hughes is a senior solicitor of England and Wales specialising in white collar investigations, AI law, risk management, and corporate governance.
