Schalast | Labour law aspects of AI

I. Artificial intelligence and employee data protection

The integration of AI into employment relationships is almost inconceivable without the processing of personal data, which poses numerous challenges from a data protection perspective. In particular, the provisions of the General Data Protection Regulation (GDPR) impose certain limits on this technological progress, especially in the area of human resources. Additionally, on 21 May 2024, the Council of the European Union adopted the Artificial Intelligence Act (“EU AI Act”), which will enter into force in the near future.

If a company’s HR department uses an AI system to automate processes such as generating employment references or warnings, this must be approached with great caution from the perspective of data protection law. Such activities may also fall within the regulatory scope of the AI Act. At this juncture, German labour law, data protection regulations and the AI Act become inextricably linked.

1. General information

When does AI become relevant in terms of labour law? Since every AI system operates based on training data, the following groups are potentially affected: individuals whose data are processed within the system, those who operate the system, and those from whom the training data originate.

For example, one application area would be the fully automated scheduling of employees by IT systems. Work instructions could be issued to employees via algorithms on their digital devices. Courier and delivery drivers could receive routes through digital route planning. Extending these considerations further, an AI system could automatically prepare and even send warnings to employees if it detects violations of its issued instructions. Ultimately, an AI system could generate a dismissal notice automatically in the event of repeated breaches of duty, or for personal or operational reasons. Technical systems are already in use today to carry out social selection and calculate the volume of a social plan.

2. Applicability of the GDPR?

The GDPR applies to personal data, meaning all information relating to an identified or identifiable natural person. Conversely, the GDPR does not apply to the use of AI if no personal data – such as employment periods or other employee data – and no other sensitive information, such as business secrets, are processed by the AI application. For instance, the system could be designed to recognise and segregate such data in advance, automatically removing them from the outset. In such cases, the requirements of data protection law would, by way of exception, not apply.
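
The idea of removing personal data before an AI application processes the input can be illustrated with a minimal sketch. All names, patterns and the placeholder format below are our own assumptions, not a reference implementation; regex-based detection is shown only for illustration and would be far too crude in practice.

```python
import re

# Hypothetical pre-filter: strips obvious personal identifiers from free text
# before it is handed to an AI application. The patterns are illustrative
# assumptions; a production system would need far more robust recognition
# (e.g. named-entity recognition) to catch names and other identifiers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "date": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{2,4}\b"),  # e.g. dates of birth
}

def redact(text: str) -> str:
    """Replace every match of each pattern with a neutral placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(redact("Contact Ms Example at jane@example.com or +49 170 1234567."))
# → Contact Ms Example at [EMAIL REMOVED] or [PHONE REMOVED].
```

Note that the name “Ms Example” survives the filter, which is precisely why a simple pattern filter alone cannot reliably take a system outside the GDPR’s scope.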

3. Legal starting point

Problems can arise when such systems are used, especially if employers do not transparently communicate which systems are in operation. Employers are obliged under Article 13 GDPR to inform their employees about the processing of their data. In addition, Article 15 GDPR establishes a right of access for the “data subjects”: the data controller must comprehensively inform the data subject, in clear and plain language, about the purposes of the processing and the data being processed. Importantly, this also includes meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
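
The information a controller must be able to produce on request can be thought of as a structured record. The following is a hedged sketch only – the field names are our own, not a statutory schema – reflecting the items listed above, including the reference in Article 15(1)(h) GDPR to automated decision-making:

```python
from dataclasses import dataclass

# Illustrative record of the information a controller should be able to
# produce for an access request under Art. 15 GDPR where AI is involved.
# Field names are assumptions for illustration, not a statutory schema.
@dataclass
class AccessResponse:
    purposes: list[str]                 # purposes of the processing
    data_categories: list[str]          # categories of personal data processed
    automated_decision_making: bool     # Art. 15(1)(h): Art. 22-type processing?
    logic_involved: str                 # meaningful information about the logic
    significance_and_consequences: str  # envisaged consequences for the data subject

response = AccessResponse(
    purposes=["shift scheduling"],
    data_categories=["working hours", "qualifications"],
    automated_decision_making=True,
    logic_involved="Rule-based ranking of availability and qualification data.",
    significance_and_consequences="Determines assignment to shifts.",
)
print(response.automated_decision_making)  # → True
```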

Article 22 GDPR gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them. It is therefore essential that the use of AI in issuing warnings, reprimands, references and, above all, dismissals involves genuine human decision-making; otherwise, there is a risk of violating Article 22 GDPR.

4. In particular: Automated applicant selection through AI

Software designed to assist recruiters in identifying the right candidate in a fully or partially automated manner is increasingly prevalent in applicant selection processes. These tools include programs that evaluate CVs and references, automatically create personality profiles based on video or telephone interviews, and algorithms that search social networks for suitable candidates (“active sourcing”). The use of AI in automated applicant selection offers numerous advantages, such as increased efficiency, cost reduction and, ideally, more objective pre-selection of applicants. However, this development also raises significant legal issues that must be carefully considered to avoid discrimination and other legal violations.

The use of personal data for automated applicant selection is a crucial issue. Typically, the data processed in AI-based selection decisions are personal data, governed by the strict rules of the General Data Protection Regulation (GDPR) and the German Federal Data Protection Act. The applicability of Section 26(1) Federal Data Protection Act, which permits data processing in employment relationships under certain conditions, is currently the subject of controversy. Regardless, data processing in employment relationships may be permissible under Article 6(1)(b) GDPR; in this context, applicants are treated as employees. However, this legal basis requires active involvement on the part of the applicant, which is lacking in active sourcing – the employer-led search for personnel. In addition, the specific data processing must be suitable for achieving the employer’s objective, appropriate, and limited to what is necessary. It must represent the least intrusive means, and the employer’s interests must outweigh the interference with the applicant’s personal rights. Whether data processing by AI systems is permissible can therefore only be decided on a case-by-case basis, which complicates any assessment of its general permissibility.

If the use of AI in the application process is not permitted outright, the applicant’s consent to the data processing could be obtained. However, this route is not viable for active sourcing. Moreover, the voluntary nature of consent in employment or application relationships is questionable due to the inherent power imbalance: the German Data Protection Conference (Datenschutzkonferenz) has stated that voluntary consent is generally not feasible in employment relationships.

In addition, companies must comply with the prohibition on automated individual decisions as outlined in Article 22(1) GDPR. This means that decisions cannot be based solely on automated processing of personal data if they produce legal effects concerning the data subject or significantly affect them in a similar way. Therefore, an AI system may only make recommendations regarding applicants; the final decision to reject or hire an applicant must be made by a human.

Furthermore, data subjects affected by automated decisions have information rights under Article 13(2)(f) and Article 14(2)(g) GDPR, as well as a right of access under Article 15 GDPR.

II. Protection against discrimination

Another important topic in the discussion about automated applicant selection using AI is the potential for discrimination. The use of AI in personnel decisions within a company can lead to, or exacerbate, discrimination against individuals or groups of people.

If an employer uses AI systems in application and recruitment processes or in the performance assessment of employees and decisions are made by the AI, this can lead to discrimination under certain circumstances. For instance, when AI is used to evaluate incoming job applications, the assessment must always be carried out regardless of gender, origin, age, religion or race. However, AI systems often rely on data from past successful applications to evaluate new candidates. If a particular group, such as male applicants of a certain origin, were overrepresented among successful applicants, the AI system may favour applicants with similar characteristics. Consequently, this can result in the penalisation of other groups, such as female applicants, as has been observed in past models.
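
The distortion described above can be made visible with a simple selection-rate comparison. The sketch below is a hedged illustration with invented numbers; the 80% threshold it mentions is the US “four-fifths rule of thumb”, used here purely as a screening heuristic – neither the General Equal Treatment Act nor the GDPR prescribes any such figure:

```python
# Hedged sketch of a simple adverse-impact check on an AI system's selection
# results: compare selection rates between groups and flag large disparities.
# All numbers are invented for illustration; the 0.8 line is the US
# "four-fifths" screening heuristic, not a legal standard under German law.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most favoured group's rate."""
    return rate_group / rate_reference

rate_m = selection_rate(30, 100)  # hypothetical: male applicants selected
rate_f = selection_rate(12, 100)  # hypothetical: female applicants selected
print(f"impact ratio: {adverse_impact_ratio(rate_f, rate_m):.2f}")
# → impact ratio: 0.40  (well below the 0.8 screening line)
```

Such a check only surfaces a statistical disparity; whether that disparity amounts to unlawful discrimination remains a legal question to be assessed in the individual case.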

Another example is the AI-supported performance assessment of employees using an indirectly discriminatory algorithm. This can occur if a criterion for performance assessment is indirectly linked to characteristics such as gender, age or origin, as specified in Section 1 General Equal Treatment Act.

The use of artificial intelligence in labour law is generally permissible if legal regulations are adhered to and employees’ personal rights and protection against discrimination are ensured. However, the use of AI tools or software carries significant risks and can result in employer liability if discriminatory algorithms are employed. Under Section 7(1) General Equal Treatment Act, discrimination against an employee on grounds of race, ethnic origin, gender, religion or belief, disability, age or sexual identity is prohibited. If this prohibition is breached, the affected employee or applicant is entitled to compensation under Section 15 General Equal Treatment Act, which covers material damage as well as non-material damage (comparable to damages for pain and suffering). Employers are liable if they could have prevented the discrimination or disadvantage by taking appropriate precautions, such as careful monitoring of AI systems and review of the training data. In such cases, the employer is responsible for the breach of duty (organisational fault).

Article 9 GDPR prohibits the processing of special categories of personal data – data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, as well as genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health, and data concerning a natural person’s sex life or sexual orientation – unless explicit consent has been obtained or one of the other exceptions listed in Article 9(2) GDPR applies.


III. Requirements of the AI Act

In the future, companies must also comply with the provisions of the new AI Act. This Act takes a risk-based approach, increasing the requirements for AI systems as the associated risk to the physical or mental health, rights, and self-determination of employees rises. Notably, the requirements for high-risk systems under the AI Act are very detailed and exceed those mandated by data protection law.

The AI Act categorises AI systems into different risk levels, including high-risk systems, low-risk AI systems and general-purpose AI models. AI systems posing an unacceptable risk (Article 5 AI Act) – for example, systems that manipulate people through subliminal techniques or exploit their vulnerabilities – are prohibited outright. High-risk AI systems are permitted but subject to particularly strict requirements. These obligations apply not only to providers but also to deployers of AI systems, including organisations and employers that develop AI systems for purely internal use. Deployers of AI systems are also responsible for ensuring that they are used only within legal limits.

Many AI systems used in the human resources sector, such as those related to job applications, training and professional development, fall into the category of high-risk systems as defined by Article 6 AI Act. These systems can pose significant risks to the health, safety or fundamental rights of individuals and are exhaustively listed in Annexes I and III of the AI Act.

According to Annex III, item 4, high-risk systems include those used for the recruitment, selection, evaluation, monitoring, promotion or dismissal of employees. Consequently, numerous AI systems in the human resources domain will be impacted by these provisions.
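
The risk categorisation described above can be sketched as a coarse lookup. This is purely illustrative – the entries and the fallback are our own assumptions; in practice, classification requires a legal assessment of the concrete system, not a table:

```python
from enum import Enum

# Illustrative only: a coarse mapping of HR use cases to AI Act risk levels,
# following Annex III item 4 as summarised above. The entries and the default
# are assumptions for illustration, not a legal classification.
class RiskLevel(Enum):
    PROHIBITED = "unacceptable risk (Art. 5 AI Act)"
    HIGH = "high-risk (Art. 6 in conjunction with Annex III)"
    LIMITED = "limited or minimal risk"

HR_USE_CASES = {
    "recruitment and applicant selection": RiskLevel.HIGH,
    "promotion and termination decisions": RiskLevel.HIGH,
    "performance monitoring and evaluation": RiskLevel.HIGH,
    "spell-checking job advertisements": RiskLevel.LIMITED,  # assumed example
}

def classify(use_case: str) -> RiskLevel:
    # Defaulting to LIMITED is a simplification for this sketch; an unknown
    # use case would in reality trigger a fresh legal assessment.
    return HR_USE_CASES.get(use_case, RiskLevel.LIMITED)
```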

The use and deployment of high-risk AI systems entail extensive obligations for companies and employers, as specified in Articles 16 to 27 AI Act. These obligations include specific monitoring, documentation and transparency requirements; the establishment of a risk management system; due diligence in data selection to avoid discrimination; information and training obligations; ensuring the intended use of AI; and implementing human supervision.

IV. Co-determination of the works council in the introduction of AI technologies

Under Section 87(1)(6) Works Constitution Act, the works council has a right of co-determination concerning the “introduction and use of technical devices designed to monitor the behaviour or performance of the employees.” The capability of AI systems to process and evaluate vast amounts of data and draw conclusions about employees’ current and future behaviour makes this provision and the right of co-determination particularly significant in this context.

This is because today’s technical capabilities for monitoring personnel clearly conflict with the principle established in Section 75(2) sentence 1 Works Constitution Act, which requires the employer and the works council to safeguard and promote the free development of employees’ personalities. The mandatory co-determination right of the works council when such systems are introduced is designed to guard against unauthorised interference in employees’ personal spheres.

Monitoring under Section 87(1)(6) Works Constitution Act may be conducted using optical, acoustic, mechanical or electronic devices that have an independent monitoring effect. It is not sufficient to use technical devices merely as an aid to personal surveillance. The intention to monitor or subsequent actual use is irrelevant; rather, it is sufficient if the technical device is objectively suitable for monitoring.

AI systems are also significant for works council co-determination: the introduction and use of high-risk systems for recruiting and selecting new employees are, as technical devices objectively suitable for monitoring, subject to mandatory co-determination under Section 87(1)(6) Works Constitution Act.

Pursuant to Section 80(3) Works Constitution Act, the works council has the right, with the employer’s agreement, to consult experts who can provide the necessary knowledge to perform its duties. Since the introduction of the Works Council Modernisation Act, Section 80(3) sentence 2 specifies that if the works council needs to assess the introduction or application of artificial intelligence to carry out its tasks, the involvement of an expert is deemed necessary. Therefore, the usual examination of necessity for consulting experts does not apply in this case. However, the expert must still be suitable, and the costs must be proportionate. The same conditions apply if the employer and works council agree on a permanent expert for these matters.

However, according to older case law of the Federal Labour Court – despite some reservations expressed in the literature – the works council has no right of initiative beyond its co-determination right, i.e. it cannot compel the introduction of technical equipment. The Federal Labour Court reasons that the purpose of Section 87(1)(6) Works Constitution Act is primarily defensive. The works council can therefore suggest the introduction of AI but cannot enforce it, and the employer’s decision to discontinue the use of AI is not subject to the works council’s co-determination.

The co-determination of the works council regarding AI can (and should) be regulated within the framework of works agreements. This is particularly important given that the development and implementation of AI in companies typically outpace legislative responses. Since co-determination law does not recognise a materiality threshold in this context, it may be advisable to expedite approval processes by having the employer and works council conclude a framework agreement with general provisions, supplemented by application-specific individual agreements.