How to navigate AI deals: legal considerations for a successful acquisition

Introduction

The artificial intelligence (AI) revolution has emphasised the strategic importance for businesses of integrating AI to enhance efficiencies, capabilities and offerings whilst managing the attendant risks. The AI industry has witnessed a surge in mergers and acquisitions (M&A) activity totalling US$31.2bn across 362 transactions between 2021 and 2023[1], whilst NVIDIA (NASDAQ: NVDA), one of the global leaders in AI chips, has seen its market capitalisation exceed the US$2tn mark. This trend clearly underscores the growing significance of AI adoption and its transformative potential (and necessity) through strategic acquisitions.

However, AI adoption comes with regulatory and legal risks, particularly in the context of increased AI M&A activity, where buyers are stepping into a complex matrix of intellectual property, data science, software tools and licences, employee confidentiality and ethical issues. It is therefore critical for buyers to be well equipped to manage these risks, especially against the backdrop of evolving global AI regulatory frameworks.

AI-specific risks

Due diligence of AI businesses must be carefully scoped and conducted to uncover the unique risks of such technologies:

Regulatory

It is crucial to understand the regulatory landscape applicable to the target’s business and the implications of AI-specific legislation in the relevant jurisdictions in relation to the use, development, or deployment of AI technologies.

Intellectual property

The value of the target’s AI technology is often related to its intellectual property rights (IPR) in the technology, its output and the underlying datasets. Accordingly, it is essential to confirm the target’s ownership of IPR and to investigate any reliance on third-party IPR and open-source code. It is also critical to determine the source of training data and whether there are any restrictions on its commercial use: many AI systems have been trained on publicly available data that may nonetheless be proprietary to third parties, as evidenced by the increasing number of lawsuits in the US between copyright owners and AI technology developers.

Commercial contracts

The target’s commercial contracts may contain limitations and restrictions on the assignment of licensed AI tools, which could pose challenges to seamlessly integrating them into the buyer’s business post-completion. Attention should also be given to confidentiality obligations, third-party IPR and personal data obligations, as breaches arising from the target’s past and current use of AI tools could expose it to liability, particularly where the commercial contracts contain unlimited or excessively high liability caps.

Employment

To mitigate the risk of third-party IPR infringement, breach of confidence, personal data breaches and/or cybersecurity concerns, the adequacy of employee training in the use of AI tools and their associated risks should be verified. In the same vein, it is important to ensure that employees are also bound by legal obligations (whether under their employment agreements or otherwise) on the use and disclosure of confidential information.

Data protection

Where the use of an AI system involves the processing of personal data, it is important to ensure the target’s compliance with applicable data protection laws both during development of the AI technology and throughout its use. Further, it is important to investigate the sources of datasets used for AI training and the practices for anonymising/pseudonymising personal data. This includes identifying whether consent was obtained to use the personal data for training (where applicable) and whether robust measures are in place to protect personal data and other sensitive information throughout the AI lifecycle.

Ethics

Laws in a number of countries may require AI systems to comply with ethical standards, and it is therefore necessary to understand which standards or frameworks were used to develop the AI system. Assessing the reliability of the AI system’s algorithm is also important to identify and mitigate any biases, prejudices and discrimination introduced through its training processes. This would involve an understanding of, among other things, the training process, testing procedures, known errors or bugs, and the target’s codes and policies on the ethical development of AI.

Contractual provisions to minimise risks

Robust warranties in the transaction documents are important for buyers to manage AI-related risks effectively in M&A transactions and to flush out comprehensive disclosures regarding the compliance levels of the target’s AI technologies. Such warranties should encompass various aspects of the AI technology, including:

• compliance with laws and adherence to industry best practices;

• measures to mitigate inherent technology risk and potential reputational damage concerns;

• data security, privacy and governance matters, covering the target’s acquisition of training data, internal processes for data protection, compliance with data protection laws, and ethical considerations in the use of sensitive information and personal data;

• ethics in AI decision-making processes which focus on transparency in the decision-making process and how potential biases are mitigated;

• long-term viability to ensure the AI system’s sustained functionality;

• AI performance in relation to well-defined parameters and its adherence to quality and security standards;

• IP protection with respect to the ownership of or valid licences for relevant IPR, non-infringement of third-party IPR, maintenance of IPR and other intangible assets, and ownership of work product;

• technical maintenance confirming regular software updates, patches, and technical support; and

• cybersecurity measures and measures to mitigate potential vulnerabilities.

Buyers may also wish to consider the inclusion of indemnities and closing conditions where the diligence process has uncovered potential areas of liability (such as negligence and breach of contract) under the target’s commercial arrangements.

Conclusion

Acquisition of AI systems poses a number of potential legal and regulatory risks. Navigating the legal considerations in AI M&A transactions requires a comprehensive understanding of these unique risks. Regulatory compliance is anticipated to become much more challenging as new laws are tabled in an area that is likely to become heavily regulated. Thorough due diligence and contractual protections can enable buyers to effectively manage the complexities of AI M&A transactions from a legal perspective.

However, due consideration should also be given to the post-completion integration process in order to unlock the transformative potential of AI technology. AI laws in some countries have extraterritorial applicability and, therefore, businesses will need to map out where the acquired AI technologies will be used in order to determine which regulations must be complied with.

Notes

  1. https://imaa-institute.org/publications/m-and-a-activity-ai-software-industry-2024/