Contracting Considerations for AI Solutions
What lawyers should know about contracts for services that involve artificial intelligence

It is almost impossible to avoid companies extolling the virtues of artificial intelligence: greater productivity, speed, efficiency, stays crunchy in milk! As clients look to implement AI tools in their businesses, they will look to their lawyers for advice. Amid all the promises, lawyers need to ensure their clients properly understand some key factors so they can make informed decisions about using AI.
From a contractual perspective, AI contracts resemble software contracts. Most of the issues in an AI deal are familiar from software deals: indemnities, warranties, IP ownership, and data use. But the use of AI, what AI can create, and how AI is developed create additional risks that parties will look to minimize in the contract.
Warranties
Since AI relies on data to provide outputs, users will want assurances that the vendor has the appropriate rights to the data used to train the AI tool. For example, if the AI tool is designed to be used with personal information, a buyer will want to be sure that the vendor had the appropriate consents to use other people’s personal information to train the tool.
AI use also brings risks of discrimination and other biases. Studies show that women can face more discrimination when AI is used in lending decisions.1 As laws and regulations continue to develop, vendors may eventually disclose their systems’ known limitations, including the frequency of inaccurate or defamatory outputs, and warrant that the disclosure is complete and will be kept up to date.
One of the other key risks of AI tools is hallucination: output generated by an AI model that is inaccurate, false, or misleading despite appearing plausible or confident.2 When reviewing warranties, it is important to keep in mind what promises the AI vendor is making, and whether those warranties provide the protections a user needs to minimize the risk of discrimination claims or hallucinations.
Indemnity
AI users are concerned that their use of generative AI (GenAI) outputs will be the subject of an intellectual property (IP) infringement claim. A number of large language model vendors now stand behind their solutions and indemnify against such claims, which gives users greater comfort. As in most technology contracts, an indemnity against third-party claims of IP infringement will be a focus of the deal, and it may be the only indemnity a vendor gives. That indemnity typically comes with exceptions, including:
- When the customer knew or should have known the output infringed or was likely to infringe;
- When the customer did not use certain features provided by the vendor; and
- When the output was modified, transformed, or used in combination with products or services not provided by the vendor.
These exceptions are similar to the ones provided for most IP infringement clauses.
Depending on an AI system’s use, a user could face severe negative consequences if the system produces biased or discriminatory output. Individuals who believe a decision made against them was discriminatory, and a violation of their human rights, could seek legal recourse against the AI user. Depending on the AI system, a user may want to seek indemnification from the AI vendor, which will have more insight into how the system was built, and the data used to train it, than the user would.
Output Ownership
Unlike traditional deliverables, ownership of GenAI outputs is not as clear cut. Courts have said that only humans can own copyright.3 In Canada, there must be human involvement in creating the GenAI output, which can include the prompts, for an owner to be registered. Without human involvement, there is no ownership, and no rights for the AI vendor to assign to the user. From the vendor’s perspective, a statement that it will make no claim to the output is more appropriate, since it does not put the vendor in the position of agreeing to do something (assign ownership) that it cannot do.
The challenge users face with such a statement is that the AI system can generate the same output for another user who enters similar prompts. A user hoping to create a unique GenAI output may be disappointed to discover that another user, hoping for the same thing, has generated an identical output.
Data Use
How an AI system may use training data is a contract clause that poses challenges for both sides.
AI vendors want access to as much data as possible to train the models and improve them with each iteration. If users are not aware of what the data use clauses say, a user may inadvertently give the AI vendor access to its confidential or sensitive information. Even if the vendor explicitly says it will not use the user’s inputs or outputs, there may be other types of data the AI vendor may be able to use.
Limitation of Liability
Like any contract, the limitation of liability will be a key negotiating point. For the user, the value of the limitation of liability depends on what it applies to. If the AI vendor does not provide warranties or indemnities for the user’s biggest risks, a high limitation of liability may itself be a hallucination: the user could in theory recover high monetary damages, but if the AI vendor is disclaiming most liabilities, there are very few claims the user may be able to make.
For clients wanting to use AI, the best advice may be to proceed with caution and to ensure they truly understand what protections (or lack thereof) the contracts provide. AI is improving every day. Depending on what an organization uses AI for, some of these risks may have serious implications for the organization and its reputation. With the regulatory regime and jurisprudence racing to catch up, it may be better to let the dust settle and get a clearer picture of the risks that cannot be contracted away.
Jacob wrote this entire article on his own. He did not use any AI solution to help. Except the last part of this by-line. ChatGPT gave it to him.
1. Christopher Amaral et al., “Optimizing Pricing Delegation to External Sales Forces via Commissions: An Empirical Investigation,” Production and Operations Management Journal, vol. 33, issue 9.
2. There have been a number of instances where lawyers have faced consequences after relying on hallucinated cases.
3. Naruto v. Slater, No. 16-15469 (9th Cir. 2018).