Updated judicial AI guidance: transformation one prompt at a time

The Lady Chief Justice, Master of the Rolls, Senior President of Tribunals and Deputy Head of Civil Justice have produced updated guidance on Artificial Intelligence (AI) for all judges, their clerks, judicial assistants, legal advisers/officers and other support staff (“Guidance”). A link to the press release is here.

The Guidance replaces previous guidance issued in December 2023 and reflects, among other things, the judiciary’s adoption of AI; namely, Microsoft’s ‘Copilot Chat’ (“Copilot”), which will be accessible on judicial office holders’ devices. The adoption of Copilot is a significant, and perhaps predictable, milestone for the UK’s court system. 

Streamlining time-consuming tasks

The clear benefit of this approach, as has been the case in many other industries, will be the streamlining of administrative and time-consuming tasks. The deployment of AI should free up judges and their legal support staff to focus on the more complex aspects of cases. For example, Lord Justice Birss, a Judge of the Court of Appeal, confirmed that he used AI software when preparing a judgment on an area of law with which he was familiar. These benefits will inevitably trickle down to parties and their legal representatives through saved time and increased efficiency.

Another addition to the Guidance relates to the accountability of legal professionals and litigants for the use of AI output. Specifically, judges are to inform legal advisers and litigants of their responsibility for the accuracy of AI-generated submissions to the Courts. Legal advisers have a professional obligation to ensure that information put before a judge is accurate and appropriate, meaning that they are personally responsible for material produced in their names, whether created by AI or otherwise. It therefore remains vitally important that court users check the accuracy of information provided by AI and are aware of the potential for bias.

Our recommendation

We recommend independently verifying the accuracy of any research or case citations produced or recommended by AI. Such AI tools can produce misleading information based on different jurisdictions, a danger expressly identified within the Guidance, which warns that “even if it purports to represent English Law, it may not do so”. AI may generate plausible but erroneous answers to prompts, and has been known to produce fictitious cases or legislation and to provide misleading information on the application of the law. The Guidance suggests that it may at times be necessary for the judiciary to confirm with lawyers that they have independently verified the accuracy of case law or research produced by AI tools.

The Guidance also recognises that there is an increase in the use of AI tools by litigants in person (“LIPs”). This is understandable given that numerous free AI tools are widely available to LIPs and can produce statements of case within seconds that appear well-drafted and structured, and that refer to legal arguments and case law. However, LIPs may not recognise bias or errors made by AI as easily. The Guidance therefore alerts judges to the use of AI by LIPs and again suggests that judges make inquiries with LIPs about what checks for accuracy have been performed, and inform LIPs that they are responsible for what they put to the Court. The Guidance also provides examples of indicators judges can look for that suggest text or documents have been produced by AI.

Most importantly, the Guidance reiterates the dangers of using publicly available AI tools and advises judges that “[a]ny information […] input into a public AI chatbot should be seen as being published to all the world”. This word of warning applies equally to businesses where confidential information, trade secrets or personal data are involved (see this Stephens Scown article here on using generative AI in the workplace).

At Stephens Scown LLP we advise businesses on their AI and technology strategy, and represent and guide businesses through the Court process. If you would like to discuss this article, or how to mitigate the legal risks of using AI in your business, please do get in touch.

This article is prepared jointly by James Gill of the Intellectual Property, Data Protection and Technology team and Isabella Kershaw of the Commercial Dispute Resolution team.