Contracting AI services
27 May 2024

Purchasing AI differs from purchasing more traditional IT. One challenge is that the customer usually does not know how the technology works. Conversely, the person who designs the technology often does not know how the customer will use (or misuse) it. And even if it is conceptually clear how the technology works, surprises can still arise from the data used to train the AI system. Each of these challenges entails risks. When facilitating the procurement of AI services, it helps to be aware of these issues.

The answers to the following questions help determine which specific points need extra attention:

  • For what purpose will the AI system be used? What are we going to trust it for?
  • What are we trying to achieve? What is the goal?
  • To what extent can and will our people check the outcomes before they are applied? Can we adjust along the way?
  • What can go wrong?
  • To what extent has the AI system already proven itself in the same or a similar environment?
  • To what extent do you as a customer determine the configuration of the AI system, for example by training it in your own environment or with your own data sets?

The better the supplier understands the customer's specific use, the more you can rely on the supplier. If you determine the specific use yourself, you must also take more responsibility. This means, among other things, that you will have to test more before you apply the AI system in a production environment. From the customer's perspective, it is logical to want to understand how the technology works. Given the nascent stage of AI technology, this is not a sign of distrust towards the supplier, but rather a sign of the customer taking responsibility.

In this contribution, we offer some advice on contracting AI from a customer's perspective. First, we draw inspiration from the ideas behind the AI Act. Second, we apply our experience with SaaS agreements. Depending on the customer's specific use, additional terms may be necessary.

Practical framework: AI Act

The upcoming AI Act contains rules for the use of AI. Depending on the intended use, an AI system falls into a risk category: unacceptable risk, high risk, limited risk, or minimal risk. A number of AI applications, such as social scoring, are prohibited. The AI Act mainly contains rules for high-risk applications. These include applications in already regulated products (such as machines, medical devices, and elevators) and applications in processes that can have a significant impact, such as recruitment or creditworthiness checks (more applications are listed in an annex to the AI Act). For a number of AI applications, transparency obligations apply: it must be clear to someone who communicates directly with an AI system, for example, that they are communicating with an AI system rather than a human. For AI with minimal risk, the AI Act contains no specific rules.

Most obligations for high-risk AI rest with the provider of the AI system: the party that develops the AI system and places it on the market or puts it into service. The AI Act also recognizes the role of the deployer: the party that purchases and uses the AI system. If the deployer substantially modifies the AI system, it qualifies as a provider itself. The deployer also has obligations, such as using the AI system according to the instructions for use, exercising human oversight, using relevant and representative input data, and preserving logs for at least six months.

As long as the AI Act does not yet apply, and in part also for AI systems that do not fall into the high-risk category, its requirements can serve as inspiration.


Risk management system

For example, the provider of the AI system must set up a risk management system. This means that the provider must think about the risks that deploying the AI system entails and mitigate those risks. Another obligation is to test the AI system in light of its intended use. These are, of course, sensible considerations even before the AI Act applies.

Data and documentation

Furthermore, data governance must be in order. The datasets used for training, validating, and testing the AI system must be sufficiently relevant and representative, and, as far as possible, error-free and complete. The provider must record in the technical documentation how these requirements have been met. These requirements are also relevant outside high-risk applications. Even if you purchase AI for inventory management or for optimizing the prices offered on a website, you want the AI system to function properly, and that is largely determined by the quality of the data used to train it.


Record-keeping

According to the AI Act, the AI system must technically allow for the automatic logging of events, so that it can be verified that the system is functioning as intended. Keeping such logs helps to explain afterwards what exactly happened. This seems relevant to us for many applications, whether high-risk or not.


Transparency and human oversight

The AI system must be sufficiently transparent so that the person deploying it can interpret and use its output. The provider must also supply instructions for use. In high-risk applications, effective human oversight is required during the use of the AI system. Such oversight can prevent or limit risks: by understanding the capabilities and limitations of the AI system, by being aware of the tendency to trust its output automatically, by correctly interpreting the output, and by being able to decide not to use the system, or to intervene and stop it.


Accuracy, robustness, and cybersecurity

Finally, the AI system must be accurate and robust and must meet cybersecurity requirements. It must, for example, be resilient to errors, faults, and inconsistencies, particularly those arising from interaction with natural persons or other systems. The provider must also consider backup or fail-safe plans.


AI Act in the agreement

If a customer purchases high-risk AI, it is sufficient to stipulate in the agreement that the provider's AI system complies with the legal requirements and is designed in such a way that the customer can comply with the law when using it. When purchasing non-high-risk AI, it may be wise to address the above topics in the agreement, depending on the answers to the questions at the top of this article.


SaaS Provisions

Suppliers often provide AI systems as a SaaS solution (via the internet rather than on-premises). The usual points of attention for SaaS agreements then apply, such as:

  • availability;
  • functioning in accordance with the documentation;
  • resolving incidents;
  • the ability to purchase additional licenses and to exchange licenses;
  • renewal;
  • not excluding too much supplementary law (e.g. implied warranties) with respect to quality, so as to retain, for example, suitability for the agreed or documented use;
  • security and confidentiality;
  • backups and responsibility for loss of data;
  • liability; and
  • whether particular remedies are sole and exclusive.

For a more extensive discussion of these topics, we refer to the (Dutch) article ‘Het contracteren van cloudcomputing (SaaS)-oplossingen’ (Contracteren 2022/3, pp. 93-102).


Usage restrictions

Contracts often state that the software may only be used for internal business purposes. This is vague and possibly too restrictive for many applications, since software is often not used purely internally. When an AI system is used to generate the right sales texts, the intention is precisely that the output is used externally: communicating with people outside the organization is essential to selling something. To avoid a licensing dispute, we therefore suggest clarifying that “internal use” includes the intended use.

The terms for AI services sometimes state that the service may not be used to develop new products or services. However, you may well want to use the AI service to create new products or improve existing ones, for example when using AI to develop code. The provider's main concern is probably that you do not develop competing services; in that case, it should not mind if you develop entirely different new products and services, and the restriction can be narrowed accordingly.


Output

The AI system produces output. Who is allowed to use that output: only the customer, or also the provider? May the provider use the customer's input and output to train the system? Whether that is desirable depends on what the customer uses the AI system for and what data the customer feeds into it. If that includes confidential data, protecting the data is more important than if it involves aggregated data that cannot be traced back to individuals or the organization.

To what extent is the provider responsible for the output of the system, or for the use made of it? In our view, that depends on the context. A first step is to stipulate that the customer does not infringe third parties' rights by using the AI system, because the AI system and the training material used by the provider do not infringe such rights (assuming the customer's own input does not infringe). If the customer's use aligns with the use envisaged by the provider, and if that use is within the provider's control, the provider can reasonably bear much of the responsibility. Conversely, the more influence you as a customer have on how the technology works (for example through your own configuration or training on your own datasets), the more responsibility you should accept for the output and its use. That will probably require a bigger role in testing the system and assessing the output.


Explanation

The use of an AI system can cause damage. To determine how certain damage occurred, information from the provider may be required. The provider may not always want to offer this transparency, for fear of being held liable by the customer. It is useful to agree in advance that you as a customer have certain access, for example to the logs of the AI system, to find out what exactly happened (also with non-high-risk AI). You may also consider a provision that places the burden of proof on the provider (or an indemnity for third-party claims) if the provider does not provide sufficient transparency to determine how exactly the damage occurred.


Incidents

An obligation for the provider to monitor the output of the AI system and to notify the customer if the system is not functioning as it should, for example by generating undesirable output, helps to mitigate the risks discussed above.


Representation

A representation is a concept from Anglo-Saxon (common law) contract practice that does not have the same meaning in Dutch contract law. Many terms for AI services include such a clause, which usually limits what you may expect from the software: a limited representation is a limited promise and no more than that. You could instead use this clause to clarify what you may expect as a customer:

‘Supplier represents that the Application:

  • has been developed and will perform in a way that is in compliance with laws and regulations;
  • has been developed according to a motivated approach designed to prevent unjust biases;
  • will perform accurately and correctly;
  • is suitable for the Documented Use.’


Audit

It may be helpful to stipulate a right of audit for you as a customer, also so that you can fulfill your own responsibility to account for how certain decisions were made.


Continuity

In most cases, it is inconvenient or even problematic if the use of an application must suddenly be stopped. With AI, the results will probably need to remain usable, and in some cases you will also need to be able to explain decisions later. This requires a transition period after termination, a continuing right to use the output, and perhaps even limited but ongoing access to the AI system. Where relevant, it is wise to cover this when entering into the agreement.
