Schalast | The Duty of Executive Bodies When Employing AI

1. Legal foundations under corporation law

The use of AI at management level requires careful evaluation before implementation. Like the delegation of work to employees, it is subject to specific preconditions designed to ensure the dutiful administration of the company and, ultimately, to prevent liability on the part of the management bodies.

1.1. Company management

A company’s executive body (especially its executive board or management) is responsible for running day-to-day operations and for representing the company vis-à-vis third parties. Management includes responsibilities specifically mandated by law, such as drafting and implementing resolutions passed at the annual shareholders’ meeting, meeting reporting requirements, and drawing up financial statements. In addition to corporate management tasks and the formulation of company policy, the management bodies are also responsible for fundamental organizational and risk decisions.

1.2. Delegation

As part of its organizational duty, the executive body may assign specific responsibilities within the realm of management to other bodies, subordinate units, or external parties. When it comes to matters of paramount significance to the business (“management tasks”), however, the management body must retain final say at all times. The actual demarcation between management and executive responsibilities, and hence whether and to what extent delegation is possible, depends to a considerable extent on the individual case. Factors such as company size, organizational structure, and the complexity of the decisions involved all play a role. Crucially, the executive body must exercise the required care in selecting, instructing, and monitoring the delegate(s). Where delegation is permitted, the original duty to perform the task itself is converted into an organizational duty.

If AI systems are deployed at management level, their deployment must be regarded as a delegation of tasks. Autonomous AI systems will typically be entrusted with high-stakes, high-impact tasks within an organization, posing risks comparable to those posed by human delegates. Accordingly, both forms of delegation must be subject to the same (legal) restrictions.

1.3. Liability norms for executive duties

According to Section 93(1) Stock Corporation Act, the primary legal standard governing management board responsibilities, the members of the management board “shall exercise the due care and diligence of a prudent manager faithfully complying with the relevant duties” in their management operations. This necessitates the establishment of a standard of care, which should be determined with reference to the nature and scope of the relevant business. Section 93(1) Stock Corporation Act thus contains a general clause on duties of conduct, from which specific obligations may be derived as necessary based on the nature of the company and the specific circumstances. Broadly speaking, members of the management board should have the skills and knowledge necessary to protect the firm from harm, develop and defend its competitive edge, and advance its overarching mission. The management of a limited liability company or an entrepreneurial company (with limited liability) is governed by the similar provision in Section 43 Limited Liability Companies Act.

Managers are entrusted with autonomy to pursue business opportunities and make important decisions for the company. Under the business judgment rule (e.g., Section 93(1) sentence 2 Stock Corporation Act), members of the management board can avoid personal liability for certain discretionary decisions: they enjoy liability-free discretion if they could “reasonably assume that they were acting on the basis of adequate information and in the best interests of the company.” This safe harbor does not extend to bound decisions governed by the duty of legality, which requires the executive board to act in accordance with all applicable laws. The business judgment rule (BJR) therefore applies only where the law does not establish any precise criteria. The BJR applies to limited liability companies by analogy.

The BJR relies heavily on a solid foundation of sufficient information. Put simply, there must be enough information for a well-informed decision to be made. Here, too, appropriateness is determined on a case-by-case basis; it largely comes down to finding an “appropriate” ratio between the knowledge at hand, the potential benefit, and the degree of risk involved.

2. Fundamentals for implementing AI systems in a managerial setting

2.1 Will AI eventually replace human board members?

More and more cutting-edge AI systems are appearing, enabling a wide variety of applications in a business setting. Although ChatGPT is the best-known AI application, the list of tools is far longer. These advancements raise the question of whether managerial roles in an organization could one day be (entirely) replaced by AI systems.

To answer this question, we must look at the current legal situation, under which only a natural person with unlimited legal capacity may be appointed as a member of the management board or as general manager of a company (cf., for example, Section 76(3) sentence 1 Stock Corporation Act and Section 6(2) sentence 1 Limited Liability Companies Act). A partnership’s management body must likewise be a natural person, even if the shareholders are legal entities. The existing legal framework does not recognize AI systems as legal entities; hence they cannot bear rights or responsibilities of their own. To what extent the much-debated “e-person” will play a role in future constellations remains to be seen. What is certain is that, at present, only a natural person can serve as a board member or general manager.

2.2 Decision-making and advisory AI systems

The existing legal framework therefore does not permit an AI system to hold a managerial position itself. However, AI systems that assist in corporate management are generally accepted.

If the governing body uses an AI system to support its decisions, the distinction between decision-preparing (advisory) and decision-making systems is important.

a) Decision-making AI systems

A decision-making AI system is fully automated: it makes decisions on its own and carries them out without human oversight. Such an AI thus “acts” autonomously rather than merely assisting humans in making choices. We also speak of decision-making AI when humans have set the end objective and the AI has discretion in how to reach it.

Delegating decision-making authority to AI systems raises questions in light of corporate law’s requirement that the (human) management bodies retain final say. Given the prohibition on delegating decisions with major consequences for the corporation, the executive body would have to take precautions to prevent AI systems from making autonomous management decisions. It must be able to challenge the AI’s conclusions and base its decisions on information it has gathered independently. First and foremost, it must set the parameters within which the AI is to operate.

For the time being, the use of a decision-making AI system does not appear to be unlawful per se (note, however, that an AI making genuine management decisions would likely not be legal). Using such an autonomous system nevertheless carries risks, including the possibility that the executive body oversteps the limits of its authority to delegate final decisions.

b) Decision-preparing, advisory AI systems

Advisory systems that help people make decisions include:

      • Information-gathering AI that “only” collects data. Some information (forecasts, for example) is derived from the data or filtered out of several data sets.
      • Advisory AI that can offer suggestions but cannot carry them out on its own. Such a system often does not disclose its underlying reasoning, however, making it difficult to trace where the data came from.

Generally speaking, the use of advisory AI systems is permissible. Certain rules must be followed, however, to guarantee “safe” operation. In any event, the executive body must understand the system and how it works, or else seek outside help from specialists. It should also be made clear that, owing to its advisory role, the AI system’s suggestions are not binding in any way. Rather, it may be necessary to gather further information from other sources to arrive at a sound conclusion. While AI systems can acquire and interpret an almost unlimited amount of information, distinguishing reliable from unreliable sources, and selecting on that basis, still appears to be a significant difficulty for such systems. The executive body must always be able to make the final call.

From a legal perspective, it can be problematic when the advisory output of an AI system cannot be traced back to its original sources; after all, a corresponding disclosure of the algorithms requires substantial expertise in the field. For the time being, it therefore seems implausible that an AI-supported advisory conclusion could serve as the sole basis for a well-founded decision by the executive body; instead, it should always be used as a complement.

c) Excursus: Use of AI obligatory?

A corporate manager’s duty of care, as well as the general duty of management, impose on the executive body the responsibility of ensuring that the organization is well structured. Although special obligations to employ certain technology may exist depending on the kind of company, the executive bodies generally have full discretion and latitude in this respect. The question nevertheless arises, with regard to the BJR, whether executive bodies may be required to use AI systems: some current AI systems are undeniably better suited to certain data processing than antiquated archival methods, and members of management bodies enjoy liability-free discretion if they could “reasonably assume that they were acting on the basis of adequate information and in the best interests of the company” when making a business decision.

On this premise, however, a mandatory implementation of AI systems cannot (yet) be justified. AI systems are numerous and costly, serve varied applications, and carry a degree of risk; their use therefore ultimately remains at the discretion of the executive bodies, who may also decide against it. It will nevertheless be important to monitor future developments: should AI systems become more reliable, their use may become mandatory.

3. Creation of AI-specific duties and liability

As there are no AI-specific legal rules on the topic, the obligations of executive bodies and the accompanying liability provisions must be determined by interpreting and extrapolating from the existing legal situation. At least in the unregulated sector, the precise nature of these duties varies depending on the specifics of each case.

Traditional delegation entails management selecting an employee and giving them specific instructions. Since relying on AI is also a form of delegation, the same logic applies here. In traditional delegation, the executive board’s original duties to perform become organizational duties. The duties under tort law (Sections 823, 831 Civil Code) can serve as a starting point for specifying them, but the fundamental principle is that whoever creates a source of danger must take steps to control the associated risks. The executive body is accordingly responsible for a number of tasks, including selection, instruction, and oversight. It must establish a mechanism for preventing, detecting, and correcting errors and must ensure the competence of the delegate (here, the AI system).

3.1 Duty of selection

The executive body must first select an appropriate and capable delegate. The duty of care increases in proportion to the gravity of the task at hand and any associated hazards. In practice, this means that the executive body must choose an AI that is well suited to the job; this category also includes the decision whether to build or buy an AI. In the absence of any existing certification criteria, such an evaluation cannot rely on standardized assessments; to ensure suitability, the executive body must instead gather information independently through various means, such as vendor documentation.

3.2. Duty of instruction

Although AI systems cannot be instructed in the traditional sense, they nonetheless need to be given a sufficient foundation of competence. In other words, the executive body must make sure the AI is properly programmed and trained. This duty also covers the selection and availability of appropriate data material, where both quantity and quality must be considered. More data generally translates into more accurate outcomes, but it also becomes harder to monitor and assure acceptable quality as data volumes increase. With “Big Data,” the data used will not, or cannot, be thoroughly examined, which introduces a substantial error rate into the AI system’s predictions. Training an AI system with both quantity and quality in mind can therefore be difficult.
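Purely by way of illustration (the article prescribes no technical method), such a quality check could be approximated by estimating data quality on a random sample rather than inspecting every record; the function name, field structure, and thresholds below are hypothetical assumptions:

```python
import random

def sample_quality_check(records, sample_size=1000, min_complete_ratio=0.95):
    """Estimate data quality on a random sample of `records`.

    With "Big Data", inspecting every record is infeasible, so quality
    (here measured simply as completeness of all fields) is estimated
    on a sample and compared against a tolerance threshold.
    """
    sample = random.sample(records, min(sample_size, len(records)))
    complete = sum(1 for r in sample if all(v is not None for v in r.values()))
    ratio = complete / len(sample)
    return ratio >= min_complete_ratio, ratio

# Hypothetical training data: 990 complete records, 10 with a missing field.
data = [{"revenue": 100, "region": "EU"}] * 990 \
     + [{"revenue": None, "region": "EU"}] * 10
ok, ratio = sample_quality_check(data)  # ratio = 0.99, so the check passes
```

Because the sample here happens to cover the whole (small) data set, the result is deterministic; with genuinely large data sets, the sample only estimates the true quality, which is precisely the trade-off the paragraph above describes.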

3.3. Duty of monitoring

For this and other reasons, the executive body has a duty to monitor the AI system’s training phase closely, so that the “instruction” is as secure and productive as possible. The AI is tested to ensure that it reaches and maintains full functionality. After deployment, the executive body must continue to keep an eye on the system to maintain its functionality and stability. This is achieved through continuous monitoring, adjustments as needed, and random sampling. Warning systems can be put in place, for instance, to react when suspicions arise. This, of course, also entails making necessary changes to the algorithm in light of new information or changes in the law.
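The combination of random sampling and warning systems described above could, again purely as an illustration, be sketched as follows; all names, sample sizes, and thresholds are hypothetical assumptions, not legal requirements:

```python
import random

def monitor_outputs(decisions, review_fn, sample_size=50, alert_threshold=0.02):
    """Randomly sample AI outputs for human review and warn when the
    observed error rate exceeds a tolerance threshold.

    `review_fn(decision)` returns True if a sampled decision is judged
    erroneous by the (human) reviewer.
    """
    sample = random.sample(decisions, min(sample_size, len(decisions)))
    errors = sum(1 for d in sample if review_fn(d))
    error_rate = errors / len(sample)
    return error_rate > alert_threshold, error_rate

# Hypothetical batch: 100 decisions, of which the first 10 are flawed.
decisions = list(range(100))
alert, rate = monitor_outputs(decisions, lambda d: d < 10, sample_size=100)
# With the sample covering the whole batch, rate = 0.10 and an alert is raised.
```

In a real deployment, the alert would feed into whatever escalation process the executive body has defined, so that a human decision-maker reviews the system before it continues to operate.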

4. Conclusion

Although there is currently no AI-specific legislation in place, it is possible to rely on the existing legal principles of delegation when assessing the use of AI as a form of delegation; nonetheless, the particulars of the systems and of the individual case must also be taken into account.

In conclusion, the executive body has a duty to ensure that it retains final authority over all decisions and that decision-making AI systems do not take over management decisions. Executive bodies employing AI are likewise subject to careful duties of selection, instruction, and monitoring.