By: Nick Martin


At a glance

  • AI solutions differ significantly from traditional technology products, requiring businesses to take a different approach in addressing the usual issues in technology contracts, such as data security, IP ownership, liability and service levels.
  • Conducting thorough due diligence and implementing robust testing or trial periods are crucial for customers to ensure AI solutions meet business needs and function as promised before full commitments are made.
  • Key negotiations often revolve around ownership of input/output data, intellectual property, and the vendor’s willingness to stand behind the AI’s outputs, highlighting the need for clearly drafted contractual provisions setting out each party’s rights and obligations.


The rapid development and adoption of AI, particularly Generative AI, has seen many businesses exploring AI solutions for the first time. IT and Procurement teams are venturing into this uncharted territory, seeking tools that promise to streamline or even transform their existing processes.

However, AI solutions differ significantly from traditional technology and software products, both in how they function and the outputs they produce. This raises important questions about the technology contracts used to procure or license them. Will vendors and customers rely on the familiar, well-established contract terms developed over decades? Or does contracting for AI solutions demand a completely new approach, leaving customers to bear greater risks?

In this article, we examine the key legal issues to consider when negotiating contracts for AI solutions.

Due diligence, testing and verification

Given how rapidly AI solutions are developing and entering the market, it can sometimes be difficult for a customer to properly assess and verify the quality of the AI service it is buying. How can the customer ensure that the solution is suitable for its needs and fit for purpose before committing to buying it?

Initially, the customer will want to undertake appropriate due diligence, both internal – identifying the actual use case and reason the business needs the solution or tool and whether the activity is ‘high risk’ – and external – in relation to the vendor, its track record, how it uses and accesses your data, and how it proposes to contract for the solution. From the vendor’s perspective, it is important to allow and facilitate this due diligence to give the customer the comfort it needs. Moving forward, the AI solution may require fairly extensive testing to verify that it is functioning as intended (and as the vendor has represented).

The agreement will therefore likely need to cater for this process, and the parties may also consider the use of ‘pilots’ or trial periods to properly test the AI solution and how it is used. These acceptance testing provisions, or similar processes, should allow the customer to ascertain the quality of the AI solution’s functionality before the commercial and financial obligations under the agreement fully take effect.

Reliance on outputs

AI is still a developing technology, albeit one which has progressed at lightning speed in recent years, especially with the dawn of Generative AI. For that reason, some vendors may be reluctant to ‘stand behind’ the outputs produced by the tool.

In other words, any reliance on AI outputs and their use by a customer in its business will potentially be at the customer’s own risk. That is not to say, however, that minimum requirements cannot be applied to the outputs. A vendor adopting a ‘no reliance’ position may still agree to some limited obligations in respect of outputs, for example that they are accurate, that the solution does not create a risk of bias or discrimination, and that the solution and its outputs comply with all applicable laws.

Fundamentally, in the absence of traditional ‘specifications’ or firm requirements that outputs from the solution or tool must meet, it is likely that a degree of human oversight will be required before the output can be relied on or put to work within a function or process of your business.

Intellectual property

As with more traditional technology procurements, the AI solution contract should deal with IP issues such as ownership and use rights in respect of the solution and its outputs. Given how Generative AI works, using large language models, a customer will often seek an assurance that the solution will not infringe third party IP and has been ‘trained’ without breaching applicable IP laws.

More broadly, and in the same way as for non-AI tech, a warranty that the vendor owns the AI solution or has the necessary licences or rights to allow the customer to use it in accordance with the agreement may also be appropriate. Finally, even if the customer’s reliance on outputs from the solution or tool is at its own risk, the question of ownership of those outputs also needs to be considered. As a minimum, a customer should be given a broad, perpetual and irrevocable licence to use those outputs for any purpose.

Data ownership and use rights

It is imperative that the contract for the AI solution or tool also addresses data ownership and usage rights, and which party owns and/or can use the two primary types of data that are relevant: the ‘input data’, being the data or ‘prompts’ entered into the AI tool by the user, and the ‘output data’, being the outputs produced by the AI tool in response to those inputs.

Customers often seek to claim ownership or control of the input data, especially if it is confidential or proprietary information, or if they are subject to professional obligations to their own clients. Conversely, a vendor will often want to have the right to use input data without restriction, including to further train the model, for its own benefit and that of the vendor’s other customers. The question of ownership of output data is often the subject of intense negotiations.

Regardless of the exact nature of the outputs, many customers wish to own output data because it was created using their prompts, whereas vendors may argue that they should own output data because it was created using their proprietary model. The parties should always take care to draft the contractual language on input and output data ownership so that it accurately reflects the agreed position.

Security

As an organisation’s use of AI solutions increases, it is likely that the amount of the organisation’s data fed into, or processed by, the AI solutions will also increase. The supply chain risk of a third party data breach impacting an organisation’s data is as prevalent in the context of an AI solution as elsewhere, especially where the solution may process large volumes of personal information.

This creates a real need for the contract to focus on data security and to include appropriate vendor obligations to ensure that data is subject to appropriate security protections. Is it important to the customer that its data remains on-shore? This may be problematic if the AI vendor and its infrastructure are located overseas. Is it appropriate for the vendor to comply with applicable industry security standards (e.g. ISO 27001)? What level of cooperation should the customer receive from the vendor if a cyber incident suffered by the vendor impacts the customer’s data, for example, assistance in identifying affected data sets?

In short, when the customer is placing its data under an AI solution vendor’s control, even temporarily, it will want to mitigate the risks associated with this. The agreement will need to include appropriate provisions to achieve this, and to allow the customer to continue to comply with applicable legal, regulatory and contractual obligations.

Representations and warranties

As with other contracts for procuring or selling technology, the agreement or terms for an AI solution will need to contain appropriate representations and warranties. A customer can reasonably expect a vendor to warrant that the AI solution and its outputs comply with applicable laws, including privacy laws.

Similarly, a warranty that the AI solution and its use by the customer does not infringe the IP rights of any third party is often sought (particularly in the absence of a third-party IP infringement indemnity from the vendor). A ‘belt and braces’ approach would also require an IP ownership warranty in relation to the AI solution (or a warranty that the vendor has the right to license it to the customer).

Reflecting the differences between AI and more traditional technology, the customer may also seek a warranty from the vendor as to how the AI solution was trained, and where the data used in that training came from. In addition, if the AI solution or tool relies on a third party’s technology, customers will likely ask vendors to represent and warrant that they have all required licence rights to use the relevant third party’s technology and that they will comply with all use restrictions under that third-party licence.

Finally, a vendor can reasonably require the customer to warrant that the AI solution will only be used in accordance with the agreement and the purposes/use cases described in it.

Service levels

Just because AI is a new and developing technology does not mean that the relevant solution or tool should not be subject to certain service levels or performance standards. These may include the ‘usual suspects’ around availability and uptime, or more solution-specific service levels, for example around transaction, response or processing time.

Unless the AI solution or tool is being provided in ‘preview’ or ‘beta’ mode, the vendor will often be prepared to offer assurances as to minimum performance standards. For its part, a customer should always consider whether the offered service levels or performance standards are suitable and sufficient, particularly where the AI is being used for a critical business purpose.

Limitation of liability and indemnities

As with other forms of technology, it is not unusual with an AI solution for one party’s acts or omissions to cause loss to the other. The contract itself can be a vehicle for fairly and reasonably apportioning that risk of loss. Usually, that’s achieved through (amongst other things) a combination of limitation and exclusion of liability clauses, and indemnities.

Given their importance, the parties can, and probably will, spend a lot of time negotiating and agreeing these provisions, and they are often some of the final contractual points to be resolved.

However, ultimately the parties will need to reach an agreement on liability and indemnities which they can both live with, if they’re to get the deal for the AI solution over the line.

Insurance

Many commercial contracts, including technology agreements, contain clauses requiring either or both parties to have and maintain various types of insurance. This is an important protection against a party being unable to meet the financial liabilities that it may incur towards the other party.

In an AI context, just like any other technology solution, there are multiple ways in which the AI tool (or its vendor), or the customer, can cause loss, and that loss can be significant. If there is any likelihood of a party not being able to pay any claims made by the other party (for example, where the AI vendor is a start-up, or the customer’s financial position is insecure), then obligations to obtain appropriate insurance cover (e.g. professional indemnity insurance, workers compensation insurance, etc.) should be included.

The types and levels of required insurance cover will depend on the circumstances of the transaction and the nature and size of the parties.

Conclusion

Contracts for AI solutions have many similarities to traditional technology agreements, but also a number of important differences. An organisation needs to understand the unique issues and risks that come with buying or selling an AI solution, and develop a strategy for reaching agreement on the key legal issues that will arise during the contract negotiation.

Our experienced Technology Law team can assist customers and vendors with tailored advice and strategies to navigate the complexities of agreements for AI solutions. Please get in touch with Nick Martin to discuss how we can support your business.