Schalast | Contract Drafting

The legal design of distribution and usage agreements for AI platforms and tools is shaped by the technological and substantive characteristics of AI technology and of the AI applications built on it.

Contract drafting for AI services

While the specifics of the relevant AI technology must always be kept in view, the drafting of user agreements for AI platforms and tools generally follows the principles established for, among other things, internet platforms and Big Data applications. For example, depending on the nature and intended purpose of the AI tool, a clear description of the permitted uses and their boundaries may be necessary. This can be essential in view of the potential for misuse of the AI tool and the liability issues associated with AI use. From the operator’s perspective, careful contract drafting can already have a liability-reducing effect at this stage.

Where an AI tool can produce tangible results (“output”), such as text, graphics, or photographs, the rights of use in those results need to be contractually addressed. The more valuable the output of AI tools becomes, the more important it is to settle this question in a way that reconciles the interests of providers and users of AI tools.

AI operators frequently collect data and information about user behavior in the course of use, and they may also analyze the data that users feed into the AI. The use of such data and information may likewise require contractual regulation. On the one hand, AI operators typically have a substantial interest in, or even a compelling need for, such use, for instance to optimize and further improve their AI tools. On the other hand, it must be ensured that the providers’ (possibly internal) use of the data and information does not disadvantage the user companies.

The limits set by German law must be taken into account when drafting commercial terms and conditions for AI tools. Upcoming legislative projects should also be considered early on to reduce the need for frequent contract adjustments.

For suppliers and operators of AI tools, the careful drafting of the contracts required for operation and distribution can be a key to the commercial success of AI applications.

Using AI to conclude contracts?

Email and other digital channels have long been used to conclude legal agreements. While such declarations may be made in the name of a company, they are always made by actual persons. The same holds true for “automated” declarations of intent: if, for instance, an online shop “automatically” emails a customer to confirm an order, this declaration is indeed generated and sent independently by the system. It does so, however, precisely according to the parameters programmed into, and thus specified for, the system, and therefore ultimately again by a human. Such computer-generated declarations of intent are generally binding on all parties involved.

In the “age of AI,” however, it is becoming increasingly plausible that contract-concluding declarations will be made by an AI on the basis of an autonomously running decision-making process, and thus by the machine itself, without a human having directly specified them by way of programming.
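The distinction can be made concrete in code. The following minimal Python sketch is purely illustrative (all function and field names are hypothetical): the first declaration follows a rule fully specified by a human in advance, while the second emerges from a model’s decision that no human directly programmed.

    # Illustrative sketch only; all names and structures are hypothetical.

    # Case 1: a programmed declaration - the rule is fully specified
    # by a human in advance, so the declaration is traceable to a person.
    def confirm_order(order: dict) -> str | None:
        if order["items"] and order["total"] > 0:  # fixed, human-defined condition
            return f"Order {order['id']} confirmed."
        return None

    # Case 2: an "autonomous" machine declaration - the decision emerges
    # from a model's output rather than from an explicit human rule.
    def ai_accept_offer(offer: dict, model) -> str | None:
        # 'model' stands for any AI decision component; whether its
        # acceptance binds anyone is precisely the legal question at issue.
        if model.predict(offer) == "accept":
            return f"Offer {offer['id']} accepted."
        return None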

The question of whether such “machine declarations” can still be attributed to a legal subject, such as a natural person or a company, increasingly shapes the design and deployment of AI tools. Key questions include:

      • To what extent may an AI “autonomously” make legally binding declarations, such as making or accepting an offer?
      • Who is liable if an autonomous declaration brings about an unwanted (“erroneous”) outcome?
      • To whom must a legally relevant “machine declaration” of an AI be attributed?
      • How can the submission of a legally relevant machine declaration be documented and proven?

Developers and providers of such AI tools should consider these questions as early as possible and, where necessary, develop innovative answers. Any attribution difficulties and liability risks arising from the AI’s “digital communication” must be understood and accounted for at the design stage. User companies may accordingly need, or look for, ways to intervene in the AI’s communication processes within the AI infrastructure. In particular, the AI tool’s potentially risky communication processes may need to be controllable and, where necessary, archived for evidentiary purposes.
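One conceivable technical answer to the documentation and evidence question is an append-only audit trail that records every outgoing machine declaration. The following Python sketch is a minimal illustration under assumed requirements; the hash chaining shown is one possible design choice, not a prescribed standard.

    import hashlib
    import json
    from datetime import datetime, timezone

    def archive_declaration(log: list, declaration: dict) -> dict:
        """Append an AI-generated declaration to an append-only audit log.

        Each entry is timestamped and chained to the hash of the previous
        entry, so later tampering becomes detectable - one way to preserve
        evidence of what the AI declared, and when.
        """
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "declaration": declaration,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)
        return entry

Chaining each entry to its predecessor means that a single retained hash suffices to verify the integrity of the entire record.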

Contractual provisions may also be necessary where AI applications interact with third parties in the business world. The user of an AI system may, for instance, agree with its business partners (such as a supplier) on the circumstances under which a “declaration” made by the AI is to be deemed legally binding between the parties.
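Such an agreed rule can be quite simple, for example a value threshold above which an AI declaration binds the parties only after human confirmation. A minimal Python sketch, with a threshold and field names chosen purely for illustration:

    # Hypothetical contractual rule: the AI's declarations bind the parties
    # only up to an agreed value limit; larger transactions additionally
    # require human confirmation. The figure below is an assumption.
    AGREED_VALUE_LIMIT_EUR = 10_000

    def is_binding(declaration: dict) -> bool:
        """Check whether an AI declaration is binding under the agreed rule."""
        if declaration["value_eur"] <= AGREED_VALUE_LIMIT_EUR:
            return True
        return declaration.get("human_approved", False)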

The possible approaches to design and solutions are manifold, but so far they have been used far too little. For future AI applications, providers and user companies alike still have considerable room for improvement here.