ARTIFICIAL INTELLIGENCE IN THE COURTS: SELECTED HIGHLIGHTS FROM THE JUDICIAL GUIDANCE

We have looked before at problems caused by Artificial Intelligence being used in court. It is worthwhile looking at the Courts and Tribunals Judiciary publication “Artificial Intelligence (AI) Guidance for Judicial Office Holders”. It shows some of the dangers in the use of artificial intelligence, but it also shows the advantages.

 

“AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology”

SOME KEY PARTS OF THE GUIDANCE

 

Guidance for responsible use of AI in Courts and Tribunals

1) Understand AI and its applications

Before using any AI tools, ensure you have a basic understanding of their capabilities and potential limitations.
Some key limitations:
• Public AI chatbots do not provide answers from authoritative databases. They generate new text using an algorithm based on the prompts they receive and the data they have been trained upon. This means the output which AI chatbots generate is what the model predicts to be the most likely combination of words (based on the documents and data that it holds as source information). It is not necessarily the most accurate answer. A short illustrative sketch of this point follows after this list.
• As with information available on the internet generally, AI tools may be useful for finding material you would recognise as correct but do not have to hand, but they are a poor way of conducting research to find new information you cannot verify. They may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.
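To make the first limitation above concrete, here is a deliberately simplified sketch, in Python, of how text can be generated purely by predicting likely next words. The tiny "training" text, the word-pair table and the continue_prompt function are all hypothetical illustrations, and a real large language model is vastly more sophisticated, but the underlying point is the same: the output is a statistically plausible continuation of the prompt, not an answer checked against any authoritative source.

```python
from collections import defaultdict, Counter

# Toy "training data" -- a real LLM is trained on vast amounts of internet text.
training_text = (
    "the claimant must prove the loss "
    "the claimant must serve the claim form "
    "the defendant must file a defence"
)

# Count which word most often follows each word (a simple word-pair table).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def continue_prompt(prompt: str, length: int = 5) -> str:
    """Extend the prompt by repeatedly choosing the most frequently seen next word."""
    output = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break  # nothing learned about this word, so stop
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

# The continuation is simply the statistically most frequent word sequence --
# it is not checked for accuracy against any authoritative legal database.
print(continue_prompt("the claimant"))
```

Running the sketch prints a fluent-looking but unverified continuation assembled purely from word frequencies, which is exactly the behaviour the guidance warns about.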
The quality of any answers you receive will depend on how you engage with the relevant AI tool, including the nature of the prompts you enter. Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased.
The currently available LLMs appear to have been trained on material published on the internet. Their “view” of the law is often based heavily on US law, although some do purport to be able to distinguish between that and English law.
2) Uphold confidentiality and privacy
Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.
The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into them could become publicly known.
You should disable the chat history in AI chatbots if this option is available. This option is currently available in ChatGPT and Google Bard but not yet in Bing Chat.
Be aware that some AI platforms, particularly if used as an App on a smartphone, may request various permissions which give them access to information on your device. In those circumstances you should refuse all such permissions.
In the event of unintentional disclosure of confidential or private information you should contact your leadership judge and the Judicial Office. If the disclosed information includes personal data, the disclosure should be reported as a data incident. Details of how to report a data incident to the Judicial Office can be found at this link: Judicial Intranet | Data breach notification form for the judiciary.

In future, AI tools designed for use in the courts and tribunals may become available but, until that happens, you should treat all AI tools as being capable of making public anything entered into them.

3) Ensure accountability and accuracy

The accuracy of any information provided to you by an AI tool must be checked before it is used or relied upon.
Information provided by AI tools may be inaccurate, incomplete, misleading or out of date. Even if it purports to represent English law, it may not do so.
AI tools may:
• make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist
• provide incorrect or misleading information regarding the law or how it might apply, and
• make factual errors.

4) Be aware of bias

AI tools based on LLMs generate responses based on the dataset they are trained upon. Information generated by AI will inevitably reflect errors and biases in its training data.
You should always have regard to this possibility and to the need to correct any resulting errors. You may be particularly assisted by reference to the Equal Treatment Bench Book.

 

7) Be aware that court/tribunal users may have used AI tools

Some kinds of AI tools have been used by legal professionals for a significant time without difficulty. For example, TAR (technology assisted review) is now part of the landscape of approaches to electronic disclosure. Leaving aside the law in particular, many aspects of AI are already in general use: for example, in search engines to auto-fill questions, in social media to select content to be delivered, and in image recognition and predictive text.
All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.
Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.
AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills to verify independently legal information provided by AI chatbots and may not be aware that such chatbots are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this and to ask what checks for accuracy have been undertaken (if any). Examples of indications that text has been produced this way are shown below.

AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and of the potential challenges posed by deepfake technology.

 

Examples: Potential uses and risks of Generative AI in Courts and Tribunals

Potentially useful tasks
• AI tools are capable of summarising large bodies of text. As with any summary, care needs to be taken to ensure the summary is accurate.
• AI tools can be used in writing presentations, e.g. to provide suggestions for topics to cover.
• Administrative tasks like composing emails and memoranda can be performed by AI.
Tasks not recommended
• Legal research: AI tools are a poor way of conducting research to find new information you cannot verify independently. They may be useful as a way to be reminded of material you would recognise as correct.
• Legal analysis: the current public AI chatbots do not produce convincing analysis or reasoning.
Indications that work may have been produced by AI:
• references to cases that do not sound familiar, or have unfamiliar citations (sometimes from the US)
• parties citing different bodies of case law in relation to the same legal issues
• submissions that do not accord with your general understanding of the law in the area
• submissions that use American spelling or refer to overseas cases, and
• content that (superficially at least) appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors