The use of AI in international arbitration – thoughts from the coalface

Artificial intelligence (AI) has generated a huge amount of press in the legal world. This buzz has not, however, necessarily reflected the extent to which AI has actually been used. That is quickly changing as parties become more confident with the use of AI and as appropriately mature tools become more readily available in the legal marketplace. 

The flaws of AI technologies have been well documented. ChatGPT and similar generative AI tools are prone to hallucination (inventing false citations or facts), to amplification of bias present in the underlying data set, and to problems with cybersecurity and client confidentiality. A cautionary tale came in June 2023, when two New York attorneys and their firm were fined $5,000 for misleading a New York court after filing a brief that cited non-existent case law generated by ChatGPT.

While legal practitioners may be sanctioned if they fall short of their duties, algorithms have no such obligations. The judge in the New York case noted that ethics rules applicable to the lawyers imposed a “gatekeeping role” on them to ensure the accuracy of their filings, including their use of AI. AI tools cannot yet be relied on without careful human review. 

So, how can the use of AI in disputes be policed? Is it necessary – or even possible – to regulate its use? 

Some US judges have issued standing orders trying to grapple with this problem, requiring disclosure where AI has been used in drafting pleadings and certification that the accuracy of those pleadings has been verified (see orders issued in May and June 2023 by Texas and Pennsylvania courts). Other US courts have issued more onerous orders requiring disclosure of the tool used and the manner in which AI was used in legal research and in drafting any documents for filing. Canadian courts in Manitoba and Yukon have also issued general practice directions requiring disclosure of the use of AI, and the manner of its use, in any drafting or legal research.

In US and Canadian courts, case management powers have so far been sufficient to regulate and sanction lawyers’ use of AI. However, such regulation does not yet appear to have carried over into arbitration practice. There is not yet any soft law on the use of AI in arbitration, and the authors are not aware of any procedural orders dealing with it.

The US and Canadian court orders and practice directions have been very wide, requiring disclosure of all uses of AI in research or court filings. Are such wide disclosure requirements necessary in arbitration? Not all uses of AI impinge on a lawyer’s ethical duties, particularly if the output is carefully reviewed, and very broad disclosure obligations may impose onerous burdens on a party. Further, in some jurisdictions, disclosure obligations may collide with litigation privilege or work product doctrine-type protections. Arguably, lawyers’ existing ethical duties and obligations to ensure the accuracy of their filings are sufficient to police this. However, the private use of AI by one party in preparing evidence or drafting legal submissions might also affect the proper course of the arbitral process, or result in unfairness to the other party. It is also difficult to know whether AI has been used at all. Tools exist to detect the use of AI, but as the technology becomes increasingly sophisticated it may become almost impossible to tell whether a draft was prepared by AI, so a failure to disclose its use could go largely undetected.

Procedural orders issued by arbitrators should perhaps be more specific than the court orders discussed above, to ensure that disclosure remains reasonable and proportionate to the risks that the use of AI poses to the administration of justice. They might require disclosure of the use of AI where it may affect evidence or the other parties’ ability to assess evidence, such as in translations, document review and production, expert reports and witness statements. Such disclosure could be given to both the parties and the tribunal. While parties may not yet be confident in arbitrators’ technical capability to give complex directions regarding AI tools, disclosure along these lines should give parties and tribunals comfort that they have sufficient information to assess the potential impact of the use of AI in a given arbitration.


Claire Morel is a partner in BCLP’s International Arbitration group. She is qualified in England and Wales and in New York and has a mixed common/civil law background, having completed her legal education in Belgium and in the USA. She practices international arbitration as counsel, advocate and arbitrator. Claire has particular experience of disputes relating to technology, corporate transactions, licenses, cross-border sale or service agreements, as well as disputes involving secrecy, intellectual property and cybersecurity issues. She is co-founder of the award-winning initiative Mute Off Thursdays and a Board member of the Silicon Valley Arbitration and Mediation Center (SVAMC). She is a member of the ICC Taskforce on the use of Information Technology in arbitral proceedings, the ICC Taskforce on Disability Inclusion in International Arbitration, the IBA Arbitration Committee Taskforce on Privilege in International Arbitration, the Advisory Board of CyberArb and the Editorial Board of b-Arbitra.

 

Siobhan Abraham is an associate in the International Arbitration practice at BCLP. She has acted on multiple complex cross-border disputes under a range of institutional rules, with a particular focus on the LCIA, ICC and ICSID. She has advised clients in various sectors, including property and construction, pharmaceuticals, space technology, M&A, private equity and financial services, and regularly advises clients from a range of jurisdictions. She also regularly sits as tribunal secretary, has experience of commercial litigation in the High Court in London, and recently qualified as a solicitor advocate in the English courts.