GUIDANCE FOR THOSE USING ARTIFICIAL INTELLIGENCE TO CARRY OUT LEGAL RESEARCH: THERE IS AN OBLIGATION NOT TO ADVANCE SUBMISSIONS BASED ON “FAKE” AUTHORITIES…
There appear to be many hundreds (possibly thousands) of cases throughout the world in which litigants (and often their lawyers) have relied on “hallucinated” cases, or on real cases which do not, in fact, contain the quotations relied on or support the argument being put forward. These issues are being addressed in courts in many jurisdictions. The example here comes from the Court of Appeal in Ireland. It provides useful guidance and discussion (but remember that this is not a judgment binding in the courts of England and Wales).
“Parties are entitled to use AI to assist in carrying out research in respect of their case provided that they do so responsibly and do not, even inadvertently, mislead the court by advancing propositions or relying upon supposed authorities which in fact have no foundation at all and are simply hallucinations.
… In all cases where they do so, they should expressly inform both the other parties and the court of their use of AI in this regard.”
KEY PRACTICE POINT
This judgment reflects the reality that AI is likely to be used in legal research, but it also points out the dangers of doing so. It shows that, in the Irish courts, a party must disclose when AI has been used. It also shows the difficulties and additional expense that “hallucinations” can cause.
THE CASE
Guerin -v- O’Doherty [2026] IECA 48.
THE FACTS
The Court was giving judgment dismissing the defendant’s appeal in a defamation case. The defendant was by then acting as a litigant in person. She had filed submissions which contained “hallucinated” cases.
THE COURT’S JUDGMENT ON THE USE OF ARTIFICIAL INTELLIGENCE IN PROCEEDINGS
Use of Artificial Intelligence in Proceedings
72. The defendant used AI to prepare the written submissions which she filed in support of her appeal. Unfortunately, and perhaps as was to be expected, given the nature of large language model Artificial Intelligence systems, the submissions included references to authorities which simply did not exist. These were hallucinations, so-called, generated by the AI system. There were no such cases, and they were not authority for the propositions which they purported to establish. This is an inherent and well-known risk of using AI to write legal submissions. The defendant did not apparently verify the existence of the authorities she cited, or that the cases relied upon actually supported the propositions advanced. Neither did she notify the solicitors for the plaintiff that she had prepared her submissions with the assistance of AI. Counsel for the plaintiff informed the Court that this added to their work in requiring them fruitlessly to attempt to locate the hallucinated authorities and thereby needlessly added to the costs of the plaintiff in the appeal.
73. Parties, whether represented or not, have an obligation not to mislead the court, which includes the obligation not to rely upon or advance submissions based upon “fake” authorities or propositions which have no basis in law. In addition, lawyers are subject to professional and ethical obligations in relation to the use of AI when practising their profession which do not apply to a litigant in person. I do not propose to address these in this judgment as they do not arise in this case.
74. I am concerned that parties, including litigants in person, should use AI appropriately and should be given guidance as to how they may properly use AI in their litigation. I would therefore set out the following principles of general application:
(i) Parties are entitled to use AI to assist in carrying out research in respect of their case provided that they do so responsibly and do not, even inadvertently, mislead the court by advancing propositions or relying upon supposed authorities which in fact have no foundation at all and are simply hallucinations.
(ii) In all cases where they do so, they should expressly inform both the other parties and the court of their use of AI in this regard.
(iii) A self-represented party is responsible for the ultimate written or oral work in their case just as much as the lawyers representing parties are.
(iv) It is important therefore that any party who uses AI as part of their research independently verifies the accuracy of their submissions and the authorities cited as supposedly establishing the propositions advanced.
(v) No authority should be cited by a party who has not actually verified that it is a genuine judgment of the court and that it is – or at least arguably is – authority for the proposition contended for.
75. It is not acceptable for parties uncritically to provide submissions which purport to rely on cases or propositions the existence of which they have not verified. It leads to completely wasted time and costs. It casts an unfair burden on the opposing party in their preparation of their response to the submission or in preparing the books of appeal. It potentially brings the administration of justice into disrepute and may result in misleading the court. Parties should be aware that the court has a variety of sanctions open to it where parties use AI in breach of these guidelines and where their use has the potential to mislead the court.
76. For the avoidance of doubt, I wish to make it clear that I do not believe that the defendant intended to mislead the court or that she was actually aware of the fact that some of the cases cited in her written submissions were hallucinations. While it would have been preferable if she had informed the plaintiff’s solicitors of her use of AI to produce her written submissions, and thus alerted them to the likelihood that the authorities they could not readily locate were hallucinations, I would not draw any adverse conclusion against her. At the time submissions were prepared in this appeal, no guidance was available to litigants in person in relation to their obligations to the other parties to the proceedings and the court as regards the use of AI generated material in proceedings.



