‘Is it going to destroy humankind? No. The good parts are worth pursuing’

AI may appear to be a relatively nascent development, but in reality this is far from the case. John McCarthy first coined the term back in 1956, and since then we have seen IBM’s Deep Blue and Watson machines beat chess and Jeopardy champions, and Apple create its virtual assistant, Siri. Now, the rise of generative AI models such as ChatGPT has not only significantly changed the performance of AI but has also caught the attention of the mainstream media, with these tools exploding into the public consciousness thanks to their accessibility.

At a foundational level, AI uses computer science and datasets to enable problem-solving. The technology takes on human-like functions – learning, reading, writing, analysing and researching. It can be applied to an extensive range of systems and products, from customer service and recommendation engines to supply chains and document creation, opening up a new world of possibilities.

At first glance then, developing and implementing AI tools is an exciting prospect for many companies and legal teams. As CMS IP partner Rachel Free points out, ‘lots of clients and legal teams want to embrace this new technology as it gives so much promise and new ways of working. In that excitement, legal risk can be sidelined.’

So, what legal risks does AI raise, and how should businesses navigate them?

New tech, new law: legal challenges surrounding AI

A data-driven world

While the applications of AI tools are extensive, they share a fundamental underlying feature: data usage. Data is the backbone of AI technology – platforms are trained on large datasets, which the AI then draws on to make judgments and predictions in response to a prompt, and the tools typically require further data once deployed. From a legal perspective, this makes data privacy a key area of risk.

Ashley Williams, who co-leads the technology transactions group at Mishcon de Reya, succinctly highlights the two data headaches: ‘The first data issue is the accessibility of training data sets – be it generative AI, Large Language Models (LLMs) or other forms of AI, there is always a large amount of data used to train the AI. The second data issue is: can I lawfully use this data to train the algorithm? When I am using the AI solution and inputting that data, do I have the right to do this, and what secondary use will be made of this data? Users have valid concerns that the data they are putting into these AI solutions is being used to go back into the training model.’

The IP framework we have globally has struggled with emerging tech generally and AI is really now testing the boundaries.
Ashley Williams, Mishcon de Reya

Samsung employees encountered this issue when they mistakenly entered confidential data relating to a new program into an AI tool, and also used AI to convert confidential internal meeting notes into a presentation, seemingly unaware that the AI retains this information. Incidents of this kind risk breaching the GDPR’s robust data protection framework, which carries harsh fines for those who violate it.

Client confidentiality and trade secrets can therefore be exposed, creating potential reputational risk. The problem is particularly acute because, once data has been entered, it cannot be retrieved from the AI.

The reputational fallout from something like this should not be underestimated – the power of social media can turn the tide against a business very rapidly and inflict irreversible damage. In an increasingly online world, data privacy is a pressing concern for many.

Notably, Italy’s Data Protection Authority has recently concluded that OpenAI’s chatbot breaches data protection rules, and similar issues are developing around the world. As a result, data privacy litigation before the courts may well rise, making this a key legal area for consideration.

AI in IP: a creative and global scale issue

Generative AI, with its ability to create written, visual and audio output, possesses creative attributes we generally associate with humans, and this has caused considerable disruption in the world of intellectual property.

The technology is capable of creating products that are similar to existing copyrighted material, and creatives are particularly concerned about the use of their content for the training of AI tools. From a legal angle, Williams points out that it is ‘fair to say that the IP framework we have globally has struggled with emerging tech generally and AI is really now testing the boundaries. Questions around ownership, lawful use and fair use, particularly around the training data stage, are proving problematic.’

Cases are surfacing in the courts, one example being Getty Images v Stability AI. Getty Images claims that Stability AI incorporated its images into the training data sets for its AI tool, Stable Diffusion, and that the tool’s generated outputs reproduce Getty’s copyright works. Other creatives have also been vocal on this issue: Hollywood writers went on strike in May 2023, partly over fears that production companies would use AI for writing, and Marvel’s use of AI-generated promotional posters caused a stir on social media, galvanising conversations about the implications of AI for employment. Evidently, the clash between creatives and technology providers makes IP an area of high legal risk for those operating in this space.

There’s going to be widescale AI – it’s the next generation of IT – but it is capable of causing harm and is going to be regulated.
Minesh Tanna, Simmons & Simmons

What is proving particularly tough from a legal standpoint is the global scale of these issues, because, as Williams identifies, ‘AI and algorithms don’t recognise geographical borders.’

Minesh Tanna, global AI lead at Simmons & Simmons, explains: ‘This is particularly complex because the legal regime can vary between jurisdictions, whereas from a data privacy perspective the GDPR at least brings some harmonisation across Europe. IP is an interesting international legal issue.’

Williams echoes this, highlighting the contrasting approaches taken to AI regulation across the globe: ‘China has more of a rules-based approach and looking more at the impact on society; the US has arguably been a bit more reactive in their approach and quite state-driven; and then we have the EU which is the high-water threshold and has adopted a rules-based system through the EU AI Act and then the UK which is pro-innovation, principles-driven and looking to regulators to create guidance around this.’

From an IP perspective, establishing global-scale protections for copyrighted works is difficult in light of these varying approaches. Uncertainty surrounding the parameters of AI usage in this space presents significant legal challenges and needs to be approached rigorously and strategically.

Law vs tech

A legal lacuna exists around the regulation of AI, creating a considerable amount of uncertainty; as Bristows partner Chris Holder identifies: ‘We have spent hundreds of years developing jurisprudence based on human constructs – companies, partnerships and individuals as legal entities – and now you’re throwing machines into the equation.’

The EU AI Act attempts to address the challenge of applying old law to new tech. As the most stringent regulation in this area to date, it is being analysed and assessed by companies around the world. The Act sets out different risk categorisations and takes a blanket approach to regulation, applying horizontally across sectors.

Holder emphasises the problems of this approach, stating that: ‘AI will have such an impact in society across all areas, which makes a horizontal Act like the EU AI Act quite difficult as you’re regulating the technology across everything. It is much more sensible to regulate the impact of technology and the outputs of these AI models within the various industrial sectors.’

Even the basic task of defining AI causes problems from a legal perspective, especially as the rate of technological development is rapidly outpacing regulation. In relation to the section of the Act dealing with general purpose AI, Tanna highlights: ‘The danger is you have to define the technology itself; you can’t rely on how it’s used. The technology moves very quickly, so how do you know the way you’re regulating the technology is going to be fit for purpose in four years’ time? Will we have to fundamentally amend the Act?’

Creating conceptual parameters will be tricky, risking further uncertainty for suppliers and developers in the regulatory space. OpenAI’s recent release of Sora, a text-to-video tool, demonstrates how rapidly AI is evolving and advancing. Law and regulation need to keep pace with these developments; if they cannot, the scope for potential problems for developers, suppliers and consumers widens.

Navigating an uncertain regulatory landscape

While the task of traversing an unprecedented regulatory landscape may appear daunting, there are ways businesses can mitigate their legal risk.

Tanna says: ‘AI governance is about ensuring you have proper processes, structures, and risk mitigation measures in place to deal with AI. There’s going to be widescale AI – it’s the next generation of IT – but it is capable of causing harm and is going to be regulated.’

While AI technology has the potential to fundamentally alter the business world, it may not be appropriate to implement it in every context.

Williams suggests a good starting point: ‘[Ask yourself] what is the problem and is AI the solution? Because sometimes a problem can be dealt with in a more simplistic way.’

Holder echoes this, adding: ‘we sometimes need to dial back the absolute reliance on technology.’ Notably, when a US attorney used ChatGPT to prepare a legal brief, the tool fabricated case law that did not exist.

Given the associated risks, including AI hallucinations, Williams emphasises ‘do not develop AI solutions for a nice-to-have sort of problem, that’s not how customers are thinking about buying. They should develop AI solutions that fix a bigger society problem. We have seen this problem in other areas of emerging tech such as blockchain, where companies have developed blockchain technology for some use cases that are completely unnecessary.’

Instead, businesses should remain pragmatic, developing and deploying technology where it enhances the purpose and effectiveness of a product, system or service.

Looking at it from a different angle, how should AI suppliers approach and navigate this uncertainty?

Williams states that ‘if we are advising an AI supplier, our advice is a bit more crystal ball gazing, trying to understand what that high-water mark will be and what potential regulatory changes could impact future business plans. Our approach is to identify the high-water mark and help companies understand how to comply with that. With any framework there will always be areas where the legislators have penned something which practically will not work, and that is for the market to correct.’

He adds: ‘We will see this with the EU AI Act. For example, the Act refers to an “accurate and complete” data set, but most will tell you there is no such thing, and the market will dictate what steps AI companies need to take to evidence compliance here. Understanding the high-water mark and anticipating and influencing market positions is how we are focusing our advice on the supplier side.’

It is essential for developers and suppliers to be aware of these risks. Williams adds that, from a risk perspective, it is useful to understand where your use of AI fits against the EU’s risk categorisation, which ranges from banned practices through to high risk: ‘When you know where you fit on that pyramid, you know what you need to comply with, and understand your risk profile and what likely liabilities are landing on your desk. There is then a contractual piece that wraps around this, so if you’re reliant on suppliers or third parties, it is important to make sure you’re getting the right contractual protections from them and producing something sensible to flow down to the end customer, which reflects where the risk really should sit.’

Ultimately, as a society, there is a question to be asked – would you trust a human being over a machine?
Chris Holder, Bristows

Holder also highlights the importance of licensing here: ‘Large companies like Google and Meta… I’m sure they are engaging with IP owners in order to effectively cleanse their data sets of any infringing parts.’

The issues that arise when a company lacks proper frameworks or processes are clear: litigation like that faced by Midjourney over the content of its training data sets is a real risk.

Corporates in this position should think about the impact of both regulatory changes and pending litigation, with Williams recommending that they ‘put aside a pot of money for licensing in data, which, spoiler alert, you are going to have to start doing to ensure your use of training data is lawful.’

These risks should not, however, deter businesses from developing, supplying and integrating AI in an effective and pragmatic way. Rather, it is important to understand as far as possible how the existing regulatory frameworks operate, and how to mitigate risks in a way that is acceptable to regulators. As Tanna explains: ‘The key really is exploring how AI could be used in a positive way generally, and to do it safely and responsibly.’

Where next?

AI could change the world, but without sufficient strategic planning and preparation the legal risks remain high. The speed of technological development creates an environment that is at once uncertain and exciting.

It’s important that these risks do not deter companies from developing or integrating AI where it is pragmatic and useful to do so.

As Tanna stresses: ‘It’s not too soon to start thinking about AI. It may be intimidating as it is a new and complex form of technology. But now is the time to start thinking about it.’

Given that, from a legal, regulatory and moral standpoint, the concerns around AI are rooted in uncertainty, it’s important to ask where this technology is heading. Tanna says that in the future AI solutions are likely to become more specialised, arguably making it harder for broad laws and regulations to keep pace with the technology.

Holder asks: ‘Ultimately, as a society, there is a question to be asked – would you trust a human being over a machine? For some things probably, but for some things probably not. Where is that line? The regulators and courts will end up here.’

Overall, the focus should be on harnessing AI’s benefits in a safe and responsible manner – and, as Holder stresses, we are far from an ‘I, Robot’ situation.

‘Is AI going to take over the world? It will from a business and societal sense. Is it going to destroy humankind? No. The Daily Mail headline about the Terminator taking over the world is just not going to happen. It’s an amazing opportunity, it will have its issues and bad actors, but the good parts are still worth pursuing.’