AI systems support industrial operations (for example, in manufacturing) and analyze complex data such as X-rays or microscopic images. They act as “intelligent assistants,” monitoring a wide range of processes, evaluating data, identifying risk and error trends, communicating with other systems, and answering questions to support human decision-making.
On the one hand, AI assists people with complex tasks in many fields and can help reduce human error. On the other hand, once AI develops to the point where it can make judgments largely, or even entirely, on its own, additional liability concerns arise: where AI does the monitoring, serious mistakes may slip through the cracks, and where AI does the evaluating and “decision-making,” the result may not be correct from a human perspective.
This also applies to the “weak” AI systems (so-called methodical AI) in use today, which will presumably remain in use for the medium term. Against this background, questions about who is liable for what in the development, maintenance, and use of AI systems are becoming increasingly pressing.
Legal liability for harm caused by AI systems
Under German liability law, as a rule, only a person who has acted wrongfully, whether negligently or even intentionally, can be held accountable. Legal responsibility typically presupposes human “fault” or “culpable conduct.” Where AI systems increasingly act “autonomously,” i.e., where no human decision is involved anymore, or where humans increasingly rely on the “decision” of the AI, the question arises whether and who can be held liable in the event of damage. This leads to the following questions:
- Did the AI actually make the decision “autonomously,” that is, without any human input?
- Did the manufacturer negligently “train” the AI incorrectly?
- To what extent are faults in the software running “around” the AI core to blame?
- Can the provider of the AI application be held liable for misleading or inadequate warnings about the risks of its use?
- Did the user operate the AI system incorrectly?
Questions such as these may determine who is liable for damage caused “by” an AI system: the manufacturer or operator of the system, the user, or no one at all if there is no human misconduct to which liability can attach.
In practice, however, the so-called “black box effect” makes it difficult to prove that human misconduct occurred during production or operation. For this reason, the EU Commission proposed the AI Liability Directive on September 28, 2022, which is intended to establish new rules on compensation for harm caused by the use of AI. Under the proposal, the burden of proving fault in non-contractual liability cases is to be eased for persons harmed by AI systems, and access to evidence is to be facilitated. Beyond this, other current or future legislative projects may also affect the liability and accountability of and for AI systems.
Product liability and producer liability
Under the German Product Liability Act, which implements the Product Liability Directive 85/374/EEC, a producer is liable even without fault if a defective product it has placed on the market causes the death of a person, bodily injury, or damage to another item of property intended for private use or consumption. Defects in design, production, or documentation can all trigger liability. Product liability does not, however, cover damage to the defective product itself or purely financial losses that are not the direct result of an infringement of the protected legal interests.
Alongside the proposed AI Liability Directive, the European Commission published a draft revision of the Product Liability Directive (COM/2022/495) on September 28, 2022, which is significant for the evolution of product liability in the AI field. The draft is intended to clarify that software falls within the scope of the new Product Liability Directive, which means that AI systems will in future generally be covered by the statutory concept of a product. At the same time, the group of potentially liable parties is to be expanded to include authorized representatives of manufacturers and fulfillment service providers.
Producer liability may also come into consideration. This is a special form of tort liability under Section 823 Civil Code: a manufacturer is liable if, including after placing its product on the market, it culpably breaches a duty of care and that breach results in the infringement of a legal interest protected by Section 823 Civil Code. Relevant breaches include, in particular, the neglect of design, instruction, or product monitoring duties.
Manufacturers of AI systems must therefore continue to take all necessary precautions to ensure the safety of their product even after it has been placed on the market. The same applies where AI features are integrated into a product.
Consequences for the development and operation of AI systems
The liability risks that (some) AI systems entail, together with the legal developments outlined above, suggest that developers and operators of AI applications should assess the damage potential of their product as early as the design phase in order to minimize liability risks. If the system is designed accordingly, machine decisions that could trigger liability can be influenced or even controlled by humans. In some safety-critical settings (such as hospitals), this may even be mandatory. Similarly, for both technical and legal reasons, the AI application may need to automatically report certain parameters or error signals to a supervisory body. To limit the risk of producer or product liability, thorough quality assurance that also takes the AI’s “autonomous” behavior into account is frequently required.
Liability risks can also be mitigated at the contractual level. In the field of machine control, for instance, the provider of an AI application might require that certain operations be reviewed regularly. Naturally, when agreeing on matters such as time limits on the provision of AI services or limitations of liability, the boundaries set by German law on general terms and conditions must be observed.
Schalast Law | Tax provides guidance and support throughout the entire process of developing, marketing, and selling your AI tools, including all aspects of liability law. In doing so, we keep a close eye on how your AI products perform in the market and provide you with comprehensive or targeted assistance, as needed.
Responsibilities and liability when using AI tools
The use of AI systems raises legal liability concerns for users as well. Road accidents involving autonomous vehicles are a prominent, if extreme, illustration. Liability concerns and the duties associated with them need to be considered not only, but especially, in the business or operational use of AI technologies.
The processing of personal data is just one area where the use of AI can have legal implications; businesses should therefore ensure compliance with data protection law before deploying AI, especially where employee data is handled. The “output” of AI tools can also give rise to user liability. For instance, the question of who actually holds the rights to an AI’s output (e.g., text or images) is an emerging issue. Disregarding the relevant rights can expose the user to claims by the rights holder, for example in the case of (copyright) infringements and associated claims for damages.
Where AI systems are used to control or monitor equipment or processes (such as production lines), users may in turn be obliged to monitor the AI system itself. User-side maintenance and control mechanisms may therefore be necessary and can be pivotal in defending against allegations of negligence.
We advise businesses on the ethical and legally compliant use of AI technologies and on how to avoid or mitigate potential legal risks.