Should law firms use AI to write legal articles?
The use of AI tools such as ChatGPT to write legal articles has become increasingly common. While these tools can offer time and cost savings, they also pose significant risks. In this article, I consider the risks of using AI to generate law firm articles, whether those risks mean the use of AI is ever acceptable, and what steps firms can take to mitigate them if they decide to go ahead regardless.
Some cautionary legal tales
Two recent cases should set alarm bells ringing in the legal sector.
One of the first UK cases to highlight the dangers of AI-generated content is Harber v HMRC. In this case, the taxpayer's representative submitted arguments supported by cases that, upon scrutiny, were found not to exist. The tribunal said that providing authorities that are not genuine “is a serious and important issue”. It identified the non-existent cases as an AI “hallucination”, which is the term used to describe an AI output that is misleading, factually incorrect or entirely fabricated yet convincingly presented as true.
In another recent case, a judge criticised a barrister and her instructing solicitors after five fake cases were cited in a legal submission, including one said to be from the Court of Appeal. Mr Justice Ritchie rejected the explanations given by the barrister and solicitors as to how this had happened, saying they made no sense. Although the defendant’s barrister suggested that the use of AI was the likely cause, this was not confirmed, as the barrister was not called to give evidence. The judge said that “providing a fake description of five fake cases, including a Court of Appeal case, qualifies quite clearly as professional misconduct”. The matter has been referred to the Bar Standards Board and the Solicitors Regulation Authority (SRA) for investigation.
The prevalence of AI "hallucinations"
The SRA has acknowledged the risks associated with AI hallucinations. In its Risk Outlook report of November 2023, it said: “All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this… and the impact on affected people can be severe. Statistical errors by human witnesses have led to miscarriages of justice in the past, and there is evidence that people may place more trust in computers than in humans.”
These issues are not limited to non-specialist language models either. A 2024 Stanford University study found that even legal-specific AI tools such as Lexis+ AI and Westlaw AI-Assisted Research produced hallucination rates of between 17% and 34%.
What are the implications for law firm articles?
While the cases mentioned above involve legal documents and pleadings, the lessons extend to legal articles. If an article or insight piece is found to contain mistakes or fabricated information, it risks causing serious damage to a firm’s reputation and eroding client trust. After all, if a firm is getting its non-client-facing legal content wrong, what else is it getting wrong? A client would be entitled to question whether the legal, ‘human’ expertise it is paying for is in fact being produced by AI, and whether that, too, might be wrong.
Should law firms ever use AI when producing articles?
There is an argument that AI should never be used when writing legal articles. Quite apart from the risk that AI may produce fabricated or inaccurate information, there are further concerns. The SRA’s principles require solicitors to act with integrity and state that “public trust and confidence … is at the heart of the legal system”. Misuse of AI would undoubtedly breach these principles.
In addition, AI has severe limitations, the most notable of which is that it is not human. It lacks the ability to understand legal nuance or to offer the critical analysis only a lawyer can provide. AI may be able to mimic legal insight, but its output will never be an authentic representation of the firm’s or the writer’s legal expertise and real-world experience.
Furthermore, the writing of legal articles should be considered not only from the reader’s point of view but from the writer’s too. The ability to research, analyse and write about a legal issue is an essential skill for trainee and junior lawyers to learn. If they can side-step this by typing a prompt into an AI tool and simply regurgitating the result, they miss out on acquiring some of the vital skills they will need as their careers progress.
What should firms do if they decide to use AI anyway?
Some firms may decide to proceed with using AI tools despite these risks. If so, they should consider, as a minimum, taking the following steps:
Set out strict verification protocols: All AI-generated content should be reviewed and signed off by a qualified legal professional before being published. The writer should confirm the extent to which they have used AI tools when writing the article and how they have corroborated the information provided.
Produce an AI usage policy: Firms should establish clear guidelines outlining acceptable uses of AI tools, including requirements for human oversight and verification.
Provide training on AI tools: All staff should receive training on using AI tools to understand their capabilities and limitations.
Stay informed on legal developments: Firms should keep abreast of evolving legal standards and ethical guidelines related to AI use in the legal sector.
Conclusion
While AI tools can assist in drafting content, they are not a substitute for human expertise and legal judgment. Law firms must exercise extreme caution when using them. If firms choose to use AI, implementing strict protocols is crucial to upholding integrity, meeting professional standards and maintaining client trust.