Schalast | Data Protection

AI depends on the targeted use of data from a wide variety of information sources. AI models are trained on vast volumes of data, and self-learning AI systems acquire and accumulate data continuously and autonomously. Users of AI systems draw on this data, directly or indirectly, in the course of use and thereby generate new data and (further) information.

Data of various types and origins is collected, fed in, and used, frequently (but not exclusively) from publicly accessible sources. In many cases this includes personal data, as well as data that can (re-)acquire a personal reference through aggregation or linkage. As a result, the “output” of AI models and applications may itself contain personal data.

Combining data by means of AI can also give rise to new data (collections) and information, which may in turn have value and therefore be monetized. Companies that deliberately deploy AI as users may thus create assets that need to be protected.

All of this raises legal questions and challenges that need to be resolved for developers, providers, and users of AI technologies.

Privacy-compliant design and use of AI technology

On the one hand, AI models are also trained using personal data; consequently, AI systems may reproduce learned personal data in their output. On the other hand, further personal data can be newly collected through the use of AI applications – for example, through the analysis of (online) user behavior, in customer support, or internally within the company when virtual assistants are used, as well as in the context of internal company analyses using AI-based software tools.

Developers, providers, and organizations that use AI systems are therefore required to comply with statutory (personal) data protection within their respective areas of responsibility:

The establishment and market location principles of Article 3(1) and (2) of the European General Data Protection Regulation (GDPR) also apply to the operation and offering of AI applications. The GDPR will therefore typically apply if the operator is established in the EU or if the AI tools in question are (also) directed at individuals in the EU (or monitor their behavior). The same applies in principle to companies that use AI systems.

Depending on the specific circumstances, however, additional (or other) data protection-related rules may apply. Moreover, pending legislative initiatives and new legal requirements, such as the European Artificial Intelligence Act, may already influence – or may in the future affect – the design, offering, and use of AI applications.

Providers are therefore well advised to design their AI systems and applications as early as the product development stage so that they can be operated and used in compliance with data protection law. After all, companies that use AI services are obliged to verify whether those services can be deployed in conformity with data protection requirements. An AI application that fails to take sufficient account of this will typically not survive on the market in the long term.

Within the GDPR’s scope of application, the data protection principle of “privacy by design” plays an important role: anyone who processes personal data by means of AI software must give data subjects a way to control, and to object to, the collection and use of their personal data – via consent management or a comparable technology. If the AI application is also used to collect and process employee data within the user company, the requirements of employee data protection and, where applicable, the works council’s co-determination rights must be observed as well.
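Purely by way of illustration – and not as legal or technical advice – the following minimal sketch shows what such a consent-management check might look like in application code before personal data is fed into an AI pipeline. All class, method, and purpose names are hypothetical, invented for this example:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical consent entry for one data subject and one purpose."""
    subject_id: str
    purpose: str                      # e.g. "behavioral_analysis"
    granted_at: datetime
    withdrawn_at: datetime | None = None

class ConsentRegistry:
    """Illustrative registry: processing is permitted only while consent
    has been granted and not withdrawn (an objection takes effect at once)."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def grant(self, subject_id: str, purpose: str) -> None:
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, granted_at=datetime.now(timezone.utc)
        )

    def object(self, subject_id: str, purpose: str) -> None:
        record = self._records.get((subject_id, purpose))
        if record is not None:
            record.withdrawn_at = datetime.now(timezone.utc)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        record = self._records.get((subject_id, purpose))
        return record is not None and record.withdrawn_at is None

# Usage: check consent before feeding personal data into an AI pipeline.
registry = ConsentRegistry()
registry.grant("user-42", "behavioral_analysis")
assert registry.may_process("user-42", "behavioral_analysis")
registry.object("user-42", "behavioral_analysis")       # data subject objects
assert not registry.may_process("user-42", "behavioral_analysis")
```

In practice such a check would sit in front of every processing step, so that an objection immediately stops further use of the data concerned.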

Developers, providers, and operators of AI tools should therefore offer users the best available technical solutions for meeting these requirements. Otherwise, the AI application may not be usable in a legally permissible manner (without further measures).

Schalast Law | Tax develops data protection-compliant solutions for your AI applications so that data protection is not an obstacle but a quality feature of your AI products. With our legal expertise, we help users deploy AI applications in their organizations in a legally sound manner and use them in compliance with statutory and data protection requirements.

AI & BIG DATA

In the field of AI applications, too, it is becoming increasingly important to take account of the diverse categories of data when developing Big Data services and functionalities. The processing of, for example, usage and transaction data, location-based and/or socio-demographic data, or data from the medical sector is subject to numerous – and differing – statutory requirements.

Here, too, these requirements must be considered carefully and at an early stage and implemented in a legally compliant manner that serves the parties’ interests – wherever possible already during the development of the relevant Big Data application. The AI provider’s interest in collecting and using data as part of its value creation must be reconciled with any defensive rights of the (natural or legal) persons concerned against the processing of their data. The “design” of a Big Data application thus already carries a crucial legal component: the careful and appropriate handling of data, which can be of considerable importance for the application’s commercial success.

Protecting and monetizing your own data and information from AI output

The use of AI applications can generate new, valuable data and insights. This raises the question of whether and how these assets can be protected and, where possible, monetized. The instruments of copyright law are of little help here insofar as the output does not qualify as a work eligible for such protection.

The data protection rules applicable in this jurisdiction cover only personal data, not other data and information (“machine data”). The same is true of the output of AI systems. There are therefore (currently) no comprehensive statutory standards specifically protecting machine data.

This means that a company’s own data and information generated by AI systems may have to be protected “by design.” An important element of such protection is, for example, technical safeguarding against unauthorized access. Sui generis database protection (Section 87a German Copyright Act) may be attainable by establishing a systematically organized data collection. However, since this protects only the database as a whole and not the individual data it contains, it may fall short of the goal of comprehensively protecting one’s own valuable information. Such protection may, however, be achievable through creative approaches under the law protecting trade secrets.
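Again purely by way of illustration: one such technical safeguard could be simple token-based access control around an AI-generated data set – the kind of reasonable protective measure that trade secret protection typically presupposes. The following sketch is hypothetical; all names are invented for this example:

```python
import hashlib
import hmac
import secrets

# Hypothetical example: gate access to an AI-generated data set behind
# per-user access tokens. This is one possible technical measure of the
# kind trade secret law presupposes ("reasonable steps" to keep secret).

SECRET_KEY = secrets.token_bytes(32)    # server-side key, never shared

def issue_token(user_id: str) -> str:
    """Derive an access token cryptographically bound to one user."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

AUTHORIZED_USERS = {"analyst-1"}        # users cleared to see the data set

def read_dataset(user_id: str, token: str) -> list[str]:
    """Return the protected AI-generated records only for authorized,
    correctly authenticated users; refuse everyone else."""
    if user_id not in AUTHORIZED_USERS:
        raise PermissionError("user not authorized for this data set")
    if not hmac.compare_digest(token, issue_token(user_id)):
        raise PermissionError("invalid access token")
    return ["ai-generated record 1", "ai-generated record 2"]

# Usage: an authorized user with a valid token can read the data set;
# any other combination raises PermissionError.
token = issue_token("analyst-1")
print(read_dataset("analyst-1", token))
```

Technical measures of this kind do not replace contractual and organizational safeguards, but they help document that the information was actively kept secret.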

Third parties’ rights to use the company’s valuable data and information can then be defined at the contractual level: the scope and limits of use can be specified, as can the remuneration payable for such use. To ensure that such usage agreements are effective and enforceable, a range of requirements under statute and case law must be met.