Generative AI is set to transform the way in-house lawyers work. Legal 500 London editor Cameron Purse moderated a recent panel discussion in Edinburgh, sponsored by Addleshaw Goddard, in which Candice Donnelly, director of corporate (legal) at Skyscanner, and Colin Telford, senior legal counsel at NatWest, joined Addleshaw Goddard partner Ross McKenzie and head of innovation Kerry Westland to share their experiences.
Cameron Purse: Talk us through your professional journeys with AI, and generative AI in particular. When did it become a tangible part of your working lives?
Candice Donnelly, director of corporate (legal), Skyscanner: For us, AI is not really that revolutionary. It’s just an evolution in terms of the tools that we have to offer. We have good search functionality across our sites. I don’t really think of this as being a big change in how we do business. We’ve been looking at how we can integrate it into Skyscanner’s own product; what we can do to capture the excitement around AI and use it for internal efficiencies and the way that the customer would engage with our product.
Colin Telford, senior legal counsel, NatWest: The starting pistol was November 2022 and ChatGPT. It gave everybody a sense of what generative AI was, and since then, as a team, we’ve gone about trying to learn about it in a more structured way. I think there are mainly two buckets. Firstly, what did we need to know to advise the business on generative AI offerings that we may want to build or buy in? But then secondly, what do we want to know about it?
Cameron Purse: This technology is set to promise a lot in terms of efficiency and labour. But before you can see any potential gains you need to pinpoint which parts of the work can benefit from AI. What does that process look like and what challenges have you come up against as you’ve made those decisions?
Candice Donnelly: That’s part of the struggle we have at the moment. Some of these tools are quite expensive and we’re asking: what will they actually provide for us? We’ve already introduced a lot of automation into our company. We’re a small team. We like the idea of adding these new tools, getting more automation, removing a lot of that day-to-day work.
Kerry Westland, head of innovation and legal tech, Addleshaw Goddard: We’ve been trying to solve certain problems for a long time. Take document review for example. Old machine learning never quite got us to exactly where we wanted to be and that’s where we’ve probably seen some of the biggest gains. We’re capitalising on the excitement around Generative AI but, for us, it’s been about focusing on where we have always struggled.
Ross McKenzie, partner, Addleshaw Goddard: Speaking as a data privacy lawyer who’s having an identity crisis and trying to become an AI lawyer, I see the benefit in the privacy space and the document review space. We have some technology already but there’s still a big degree of human effort required. Looking at email exchanges and working out what redactions are needed, for example. I’m excited to see that kind of layering up of technology that we already use to see what efficiencies we can get. Because ultimately, there is an expectation from clients that we can already do this.
Colin Telford: Expecting generative AI, or any technology, to solve problems for you is dangerous. It’s tempting to think of it as a silver bullet. In our team, we break down what we do into three areas: technology, people and practice, and it’s the latter which generative AI is most likely to help with. We’ve done a lot of work to understand contract automation and generate playbooks for all our standard terms, and on the back of that, we got a third-party tech provider to build a generative AI tool for us. We’re about to get our hands on it and we’re quite excited about that. But I think regardless of whether you’ve got budget or not, you need to ask yourself if you’re ready for it.
Cameron Purse: How do you manage pressure and expectations, from internal and external stakeholders – both scepticism around AI and unchecked enthusiasm for it?
Colin Telford: Working in a bank with a robust risk function, unchecked enthusiasm isn’t much of a problem. The scepticism makes things more interesting. We’ve made sure to go through the process of actually playing about with it. Attending hackathons, working on the prompts and engineering side ourselves, and seeing what it throws back. That’s what’s key to winning hearts and minds. It’s easy to be sceptical, but when you see the power of what it can return and how quick it is, it can really take care of the grunt work and get us closer to where we want to be.
Kerry Westland: I’ve met people who have told me: “I’m not going to use it, it makes up cases” so it’s important to be able to explain and contextualise what’s happened there and how to work with the technology. The education around it is what’s helping. When it goes wrong for people, we want to be able to explain why that’s happened. I love the enthusiasm, and it’s really been building over the past 18 months. We have more and more people asking us what the technology can do for them. It’s our job to help them understand the right use cases for it.
Candice Donnelly: We certainly don’t lack enthusiasm at Skyscanner. We probably want to write the thing ourselves and start our own OpenAI! We’ve got many, many ideas, but it’s about understanding how to implement them in a controlled and focused way. We’re at an advantage in that we are our own data provider. But we want to make sure that, if we are partnering with people, we are not losing control of the data that’s being used to educate and populate these models. Sometimes we think we could do a really great partnership, but the law hasn’t caught up on the ownership piece. As a legal team, we’re equally enthusiastic about trialling the models, but we haven’t firmed up how it will work in the long run. That’s still an area that’s developing all the time.
Cameron Purse: Can you give us an example of a concrete improvement you’ve seen as a result of AI, and how the results squared with your expectations?
Kerry Westland: For us, it’s contract review. It’s dramatically sped the process up, and it frees us up to give more in-depth advice on the risks and things like that. In my team, we are healthily sceptical of new technology. We don’t swap it in and out on a whim. We’ve had lots of people trying to sell us AI tools over the last few years and they ultimately weren’t better than what we already had. So why would we swap it out? When our team got their hands on [Thomson Reuters generative AI product] CoCounsel, they were literally like, “Can I have it now? Can I have it now?” because it really was that next level. Seeing my team react like that, and then seeing our lawyers react like that, made me realise that there really was something in these tools.
Candice Donnelly: Our legal team went through a mass automation process a few years ago, so we haven’t really wanted to revisit that. It’s more about how the business uses it for creativity and inspiration. For example, we’re using it at what we call the ‘top of the funnel’, where we try to attract people who are looking for travel inspiration. We’re very keen on sustainability and doing what we can to balance the impact of travel. It’s about making sure that not everyone is going to Venice and Dubrovnik; we want people to visit other places as well. So, as a prompt, we typed in ‘less common places to visit’. But it was giving us results like Somalia and Afghanistan. These generative AI tools are highly creative, but ensuring results which are practical rather than just technically correct still needs work. And crucially, human intervention.
Ross McKenzie: When we’re advising on large volumes of contracts, it’s giving lawyers that crucial headspace time. I think that’s what we’re all looking for in our jobs, which are busier than ever. There’s so much information being thrown at us all the time. If ChatGPT can start managing my emails that would be really appreciated! We’re all in a very busy environment where we start to lose a little bit of understanding what actually matters, because we’re always trying to move on to the next thing.
Candice Donnelly: It’s also easy to become too much of a perfectionist about these tools. A solid, pragmatic result is often fine; you don’t need perfection on an everyday basis.
Colin Telford: I don’t have any concrete examples yet, but I don’t think that’s unusual. Research suggests that only 2% of lawyers are using generative AI tools on a daily basis, and only 9% on a weekly basis. It’s still an add-on at this stage, but I do see that there’s only one direction of travel.
Cameron Purse: Can you tell us how private practice firms are working with in-house teams to make the most of AI tools and what strategies are most productive when you’re working together?
Kerry Westland: The data we’re using is client data, so actually working with our clients on their particular data and their particular contracts is really useful. We’ve always suffered from vendors showing us the most perfect contract possible. But contracts aren’t like that. Leases aren’t like that. Working on it with the real thing majorly helps, as well as being open and honest throughout.
Ross McKenzie: There’s been a mindset change around understanding the information that a legal team holds. It’s not just stored on a PDF and kept in a random folder. It covers everything from liability risk to termination clauses. It takes a long time to change that approach but it’s worth it, because once you have that information, it’s amazing what you can do with it. We know that most legal teams aren’t quite there yet, but if we can start to do that now, all of these tools will go a long way to make this more alive and workable.
Colin Telford: With Gen AI tools, you can segregate and ringfence data. If a firm was able to use the data which they already have from previous transactions and provide a better service, then I think that’s something that we would be interested in, but doing that in a secure way that everybody’s happy with is probably the challenge.
Candice Donnelly: We haven’t had a conversation with any firm about using AI and whether or not we would permit it. At this stage, it’s probably not a huge concern. But I am interested to see what the different types of output look like. I wonder if we’ll start to see many more needlessly detailed markups which are just the results of two computers speaking to each other. And again, it goes back to that question: What problems is it solving? Is it a benefit to us?
Kerry Westland: That reminds me of a cartoon I saw where somebody’s written some bullet point notes and asked ChatGPT to turn it into a long, beautifully worded email. And then the person it’s sent to uses ChatGPT to put it to bullet points.
Cameron Purse: Data is one of the most frequently cited anxieties around AI. How have conversations around data protection been evolving? What can lawyers do to reassure people?
Ross McKenzie: Data protection law only regulates the use of personal data. The UK regulator, the Information Commissioner’s Office, has pointed out that there’s already a law in place. For managing risk when it comes to using personal data, the GDPR framework is a really good place to start on your AI journey. It has all the tools you need around risk assessments through things like privacy impact assessments, transparency obligations, and so on. All of these themes are covered. If you’re trying to make decisions regarding AI output and the lawful basis for it, there are loads of materials out there. We’ve been advising on this stuff for ages. As for the EU AI Act, it will have an impact across the UK, notwithstanding the fact that it doesn’t necessarily apply to UK-based operations. It will regulate the use of these large language models. But, like any EU law, there will be a ripple effect. Organisations will have to be more transparent. OpenAI were recently asked where they get their data from, and some of the answers didn’t exactly leave me convinced. So I think we’ll see the EU regime make a big difference in terms of our understanding of what these large language models are doing.
Kerry Westland: The interesting thing for me – and I have no idea how this will work as far as the regulation goes – is that it’s like a drop of water in an ocean. There are almost two trillion parameters in OpenAI’s model. So, even if you put data in and got it back out, there’s no way of anyone knowing it was that particular drop, so to speak. It’s vital to understand that. We’re certainly starting to see it come through in panel terms now, but some of those requests are unclear as far as this question is concerned. We are also starting to see more questions about whether fine-tuning and training on data is actually particularly useful. With these bespoke legal large language models, you can lose the creativity and knowledge of the language of the broader ones.
Candice Donnelly: The market is so fractured right now. Many people are trying to create something unique that stands apart from the OpenAI model, but we really don’t know where this will end up a few years from now.
Cameron Purse: Before we close, I’d like us to spend some time talking about what this latest iteration of AI means for the people working with it. If the technology is getting stronger all the time, hallucinating less and less, what does that mean going forward? What kind of lawyers will thrive, and who’s at risk of being left behind?
Colin Telford: It’s a tough question! LexisNexis pitted 20 top US lawyers against AI and set them the task of reviewing an NDA. On average, the AI was 95% accurate while the humans were only 85% accurate. But the really terrifying thing is that it took the humans an average of 90 minutes, while it took the LLM only 26 seconds. I think the logical conclusion is that we need to change our job. We need to be some sort of Yoda figure within our organisations. I don’t know what it’ll look like in five years or ten years, or even five months with the way things are going. But I do think our role will change and we need to have a broader set of skills. And as much as it’s terrifying, hopefully there’s opportunity there as well.
Candice Donnelly: I’m genuinely not frightened by it. Our job is to always adapt. It’s just another step on that journey of adaptation. There are two things that worried me about ChatGPT when it first came out. The first is the impact on training ability. We’ve all learned our skills through graft and labour intensive work. If you haven’t got that, how do you distinguish the good from the bad? The second is the lack of creativity. By definition, these algorithms are producing the norm. We’re here to think outside the box and come up with creative solutions. If it frees up my time to be more strategic and more creative, then that’s great. That’s the fun part of my job. I don’t really want to spend two hours a week reviewing NDAs. I want to do something interesting which is going to add value. In an in-house team, where we’re so tight on resources, I do think it provides a great opportunity.
Kerry Westland: I get what people mean when they say you have to do the labour-intensive graft as a junior lawyer, but how much graft is ultimately productive? Everyone’s been asking us what it means for the juniors, but our trainees and juniors are the people using the tools the most right now. They don’t want to be there at three o’clock in the morning doing contract reviews.
Ross McKenzie: It’s so important to make sure you’re engaging with everyone in your team. Everybody’s got a voice. It doesn’t matter if it’s a trainee, a paralegal, an associate. Everybody has something valuable to contribute to this. This is particularly apparent when I see the trainee usage stats for ChatGPT compared to the partner usage stats.
The panellists
- Cameron Purse, London editor, Legal 500
- Ross McKenzie, partner, Addleshaw Goddard
- Kerry Westland, head of innovation, Addleshaw Goddard
- Candice Donnelly, director of corporate (legal), Skyscanner
- Colin Telford, senior legal counsel, NatWest