Ken MacDonald discusses the opportunities and risks of using artificial intelligence in international arbitration.
Artificial intelligence has significantly impacted the way we lead our lives and, today, many established applications of it exist in both our personal and business spheres.
AI's value as a tool for improving productivity is well understood: it can generate quicker and (with human oversight) more accurate outcomes in many situations. It is only natural, therefore, that it be deployed in dispute resolution and in particular international arbitration, a process in which a third-party determiner known as an arbitrator decides a dispute between parties from different jurisdictions and produces a legally binding decision in the form of an award.
With its rise come opportunities and risks. Appropriate use of AI will streamline legal processes, yielding cost savings and delivering greater efficiency in how parties’ legal teams and arbitrators engage. But what are the risks?
Use and opportunity
AI platforms can efficiently search millions of documents for key terms or phrases and assist with collating information and presenting it in any format the user requires. The benefit of applying this to the costly and otherwise labour-intensive disclosure process for the exchange of documents in arbitration is self-evident, given AI’s indefatigable ability to churn through gigabytes of data and distinguish the relevant from the irrelevant. The collated data can then support the presentation of a case through the production of whatever written material is required, be that accurate meeting summaries, structured interview content (for the creation of witness statements), pleadings or any form of written application.
One of the most important enabling technologies within AI is speech recognition, through which we can translate, interpret and transcribe. For recorded hearings, real-time transcription with speaker identification for all participants is available. Machines can be used for interpretation, so that a question put to a witness in one language is rendered in the language agreed for the purposes of the arbitration. And in document-heavy arbitrations, machine translation can convert large numbers of documents into the language of the user or of the arbitration seamlessly, and at a fraction of the cost of doing so without AI.
AI for legal research is already an established and powerful tool. Machines can digest huge volumes of data far more quickly than humans, though risk awaits those who place over-reliance on the accuracy of every output.
Greater future use of AI will likely include predictive legal analysis to risk-assess potential lines of argument or case prospects for settlement negotiations; widespread arbitrator selection, by parties or by party-nominated arbitrators choosing their chair, through mining data on potential candidates; and, ultimately, more involvement in the arbitral decision-making process itself.
Risks
But what of the dangers? Open use of data runs the risk of confidential information escaping into the electronic ether. Managing data with great care is essential to avoid data breaches, civil action for breach of confidence and reputational damage.
Machines abhor data vacuums, so they will invent answers convincingly rather than refrain from providing solutions. This has been described as machine hallucination, and it is precisely why legal teams must treat AI as a useful tool and not a substitute for independent critical thinking. There can be no sensible compromise on appropriate human review to check that all work achieves the required standards. There have already been cases where attorneys used generative AI in written court submissions that fabricated some of the legal sources cited, resulting in court censure – not for using AI, but for failing to check the provenance of its results.
Boundaries
Current regulation is sparse but will develop as regulators play catch-up. Beyond regulatory boundaries, though, there are more fundamental questions: what does it mean to be human, and can we ever accept justice from a machine?
Machine determination already happens – eBay, to take but one example, resolves most consumer disputes using algorithms. However, in international arbitration, where the stakes are significantly higher in terms of the complexity and value of disputes, will parties accept an award from a non-human? The use of AI as a tool by arbitrators can be envisaged, but how an AI arbitrator could control the arbitral process, and particularly the hearing if party behaviour is unreasonable, is unclear.
Arbitrators will use AI, but in an arbitral panel of three, how would the human arbitrators interact with a non-human co-arbitrator? A more fundamental question is whether machines can address the nuance of decision-making, given that AI lacks emotional intelligence. In arbitration it is not just the final determination that matters; the process by which decisions are reached is also key.
As we do not yet fully understand how machines reach their conclusions – the so-called black box problem – concerns about fairness and transparency will remain.
Two questions then arise. As most appeals and challenges to the recognition and enforcement of awards are made on grounds of procedural unfairness, how can a non-human arbitrator operate in a way that limits such challenges?
Further, many national arbitration laws require of arbitrators qualities that presuppose human attributes and experience, so there is a risk of challenge to awards, both on appeal and at recognition and enforcement, where they are produced by AI or with some level of AI input.
Brave new world
AI has already made an impression on all forms of dispute resolution, including international arbitration. The further change it will precipitate is to some extent predictable: applications for which there is obvious demand will be rolled out, subject to future regulation. But there will also be developments in ways we do not yet comprehend. The key must be to retain our capacity for independent critical thinking – to draw from Aldous Huxley, we should never believe things simply because we have been conditioned to believe them.
Ken MacDonald is a partner and expert in international arbitration at Brodies