Schalast | AI Aspects under Employment Law
I. AI and employee data protection
The use of AI in the employment context practically requires the processing of personal data, which raises a number of data protection issues for the working world. The requirements of the General Data Protection Regulation (hereinafter, GDPR) may, at least in part, impose limits on this technological advancement.
Extreme caution is required with regard to data protection rules if, for example, a company’s HR officer uses such a technology to automate tasks, and employment references or warnings are produced with the assistance of an AI system. Here, German data protection law and employment law are closely intertwined.
1. General considerations
When is AI relevant under employment law? Because an AI system continuously operates on the basis of training data, three groups are affected: the individuals whose data the system processes, the individuals who operate the system, and the individuals from whom the training data is derived.
It is conceivable, for example, that personnel scheduling could be entirely automated by IT systems: employees’ electronic devices would receive instructions from an algorithm, and delivery and courier drivers might be directed by digital route planning. Taking these concerns further, as soon as an AI system detects infractions of the orders it has issued, it might immediately draft or even send a warning to the person concerned. Finally, in cases of repeated breaches of duty or terminations on personal or operational grounds, an AI system might even issue termination notices automatically. Technological solutions are already being used to facilitate the social selection procedure and to evaluate the scope of a social compensation plan.
2. Applicability of the GDPR?
The GDPR covers all data pertaining to an identified or identifiable natural person. The inverse is also true: if the AI tool processes neither personal data nor other sensitive information (such as trade secrets, employment dates, or employee data), its use does not fall within the GDPR’s scope of application. It is conceivable, for instance, that a system anticipates the presence of such information, classifies it accordingly, and removes it automatically. In this exceptional scenario, the requirements of data protection law would not apply.
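Whether an AI tool ever receives personal data can be influenced technically, for instance by filtering inputs before processing. The following is a minimal, purely illustrative sketch of such pre-filtering; the patterns and the `redact` helper are assumptions for this example, and a real system would need far more robust recognition (e.g. named-entity recognition for names) before any conclusion about the GDPR’s applicability could be drawn.

```python
import re

# Illustrative patterns only; real systems need NER models and broader rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
    "DATE": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b"),  # German date format
}

def redact(text: str) -> str:
    """Replace recognizable personal identifiers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Note: the name "Ms. Example" is NOT caught by these simple patterns,
# which is precisely why regex filtering alone cannot guarantee anonymity.
print(redact("Contact Ms. Example at jane@example.com, born 01.02.1990."))
# → Contact Ms. Example at [EMAIL], born [DATE].
```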
3. Legal basis
When employers are not forthcoming about the systems they use, the deployment of such systems can cause confusion and inefficiency. Article 13 GDPR therefore requires that data subjects be informed about how their employers use their data, and Article 15 GDPR grants data subjects a right of access to their personal data. Data subjects must be informed of the purposes of processing and the data processed, as well as the logic involved, the scope of the processing, and its intended effects; the controller must provide this information in a transparent and easily understandable manner.
Under Article 22 GDPR, data subjects also have the right not to be subject to automated individual decision-making that produces legal effects concerning them or similarly significantly affects them. It is therefore important to remember that AI cannot entirely replace humans in making significant decisions such as issuing cautions, warnings, references and, above all, notices of termination. Otherwise, the employer risks violating Article 22 GDPR.
4. In particular: Automated application screening by AI
Recruiters are increasingly turning to tools to assist in automating, at least in part, the application selection process. They may utilize algorithms that scour social media for qualified individuals (“active sourcing”), as well as systems that analyze applicants’ credentials and references or generate personality profiles based on video or telephone interviews. There are several benefits to using AI for automated application screening, including improved productivity, lower costs, and (hopefully) a more objective pre-selection of candidates. There are, however, concerns under employment law that need to be carefully evaluated in light of this new development to avoid any claims of discrimination or other breaches of the law.
Here, the processing of personal data for computerized application screening is the central issue. AI-based selection decisions typically involve the handling of personal data, which is subject to stringent rules under the EU’s General Data Protection Regulation and Germany’s Federal Data Protection Act. Under Section 26(1) sentence 1 alt. 1 Federal Data Protection Act, the processing of personal data in the context of an employment relationship is lawful to the extent that it is necessary for the decision on the establishment of an employment relationship; for this purpose, applicants are deemed equivalent to employees. An application, however, presupposes an action by the applicant, which is absent in cases of active sourcing, where the employer actively searches for candidates. In addition, the specific data processing must be suitable for achieving the purpose pursued by the employer, the intrusion into the applicant’s rights must be kept to a minimum, and the employer’s interests must outweigh those of the applicant. Whether data processing by AI systems is appropriate can therefore only ever be decided on a case-by-case basis, which complicates any abstract assessment of permissibility. Where the use of AI in the application process is not necessary within the meaning of Section 26(1) sentence 1 alt. 1 Federal Data Protection Act, the applicant’s consent may serve as an alternative legal basis; for active sourcing, however, this is likewise not a viable option.
In this context, companies must also observe the prohibition on automated individual decision-making under Article 22(1) GDPR: decisions that produce legal effects or similarly significantly affect the data subject may not be based solely on the automated processing of personal data. An AI system may therefore only make an initial selection of applications; it cannot replace a human’s final decision to reject or hire an applicant.
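The division of labor implied by Article 22(1) GDPR, where AI pre-selects but a human decides, can be sketched roughly as follows. All names here (`Application`, `preselect`, `final_decision`) are hypothetical and chosen only to illustrate the structural point that no rejection or hire is triggered by a model’s score alone.

```python
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    score: float  # assumed to come from an upstream ranking model

def preselect(applications, threshold=0.5):
    """AI may narrow the field but must not decide (Article 22(1) GDPR):
    low-scoring applications are routed to human review, not auto-rejected."""
    shortlist = [a for a in applications if a.score >= threshold]
    needs_review = [a for a in applications if a.score < threshold]
    return shortlist, needs_review

def final_decision(application, reviewer, hire):
    """The legally effective decision is recorded against a named human."""
    return {"applicant": application.applicant_id,
            "decided_by": reviewer, "hired": hire}

apps = [Application("A-1", 0.9), Application("A-2", 0.2)]
shortlist, needs_review = preselect(apps)
print(final_decision(shortlist[0], reviewer="HR officer", hire=True))
```

The design point is that `final_decision` takes a named reviewer as a mandatory argument, so the audit trail always shows a human, not the system, as the decision-maker.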
Finally, under Article 13(2)(f) and Article 14(2)(g) GDPR, as well as Article 15 GDPR, data subjects whose personal data is used in an automated decision-making process have the right to be informed about, and to have access to, such data.
II. Safeguards against bias and prejudice
The potential of automated application screening by AI to discriminate against applicants is another major concern. A company’s use of AI in employment decisions can introduce or exacerbate bias.
Discrimination can occur when an employer relies on AI systems and bases its decisions on them throughout the application, hiring, and review processes. For instance, AI could screen job applicants using an algorithm that takes into account their gender, country of origin, age, religion, and/or race; a biased algorithm could then make women less likely to be considered for a position. Another instance is an AI-based algorithm that evaluates employee performance in a biased manner and thereby indirectly discriminates on the basis of a characteristic (such as gender, age, or country of origin) listed in Section 1 General Act on Equal Treatment.
It is generally accepted that AI may be used in the field of employment law, provided the relevant statutes are observed and employees’ privacy and protection from discrimination are safeguarded. If discriminatory algorithms are used, however, an employer may be held liable for the actions of its AI tools or software. Employees or applicants who have been discriminated against in violation of Section 7(1) General Act on Equal Treatment, which prohibits discrimination based on race, ethnic origin, gender, religion or ideology, disability, age, or sexual identity, may claim compensation pursuant to Section 15 General Act on Equal Treatment and, in certain circumstances, damages for pain and suffering or personal injury. The employer is liable if it could have prevented the discrimination or disadvantage through adequate precautions, such as careful monitoring of AI systems or testing with training data, and is consequently responsible for the breach of duty (organizational fault).
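One simple precaution of the kind mentioned above, testing an AI system’s outputs for group-level disadvantage, can be sketched as follows. The “four-fifths” threshold used here is a US heuristic for disparate impact, not a standard drawn from the General Act on Equal Treatment; it is chosen purely for illustration, and the function names are assumptions for this example.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag groups whose selection rate falls below 80% of the best
    group's rate -- an illustrative audit heuristic, not a legal test."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical audit log of an AI screening tool's outcomes by gender.
audit = [("f", True), ("f", False), ("f", False), ("f", False),
         ("m", True), ("m", True), ("m", False), ("m", False)]
print(four_fifths_check(audit))  # → {'f': False, 'm': True}
```

A `False` flag would not itself prove discrimination, but it is the kind of signal whose systematic monitoring could help an employer show it took adequate precautions.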
Data revealing a person’s race or ethnicity, political opinions, religious or philosophical beliefs, or information about the person’s sexual life or orientation may not be processed without the individual’s explicit consent or one of the other exceptions specified in Article 9(2) GDPR.
Advances in AI technology may also prompt future updates to workplace anti-discrimination law at Union level. On April 21, 2021, the European Commission proposed a regulation laying down harmonized rules on AI (Artificial Intelligence Act). One goal of the proposal is to further protect rights enshrined in the EU Charter of Fundamental Rights, such as human dignity, respect for private life and protection of personal data, non-discrimination, and equality between women and men (Recital 5 Draft AI Regulation).
Notwithstanding existing anti-discrimination law, employers would do well to take extra precautions by incorporating anti-discrimination measures into the development, implementation, and evaluation of AI systems.
III. Co-determination of the works council in the introduction of AI technologies
Under Section 87(1)(6) Works Council Constitution Act, the works council has a right of co-determination in the “introduction and use of technical equipment intended to monitor the behavior or performance of employees.” This right of co-determination is a particularly hot topic here because AI systems can analyze and assess vast volumes of data and draw conclusions about employees’ current and future conduct.
The technical possibilities now available for monitoring personnel sit uneasily with Section 75(2) sentence 1 Works Council Constitution Act, under which the employer and the works council must protect and promote the free development of the personality of the employees working in the company. The works council must therefore have a say in the implementation of such systems to prevent unwarranted intrusions into employees’ individual autonomy.
Monitoring within the meaning of Section 87(1)(6) Works Council Constitution Act requires a distinct monitoring effect achieved through optical, acoustic, mechanical, or electronic devices; it is not sufficient that the technical device merely serves as an aid to monitoring carried out by humans. Whether the employer actually intends to use the device for monitoring is irrelevant; what matters is that the technology is objectively suited for monitoring. AI-based application management systems that help recruiters find a “perfect fit” or allow conclusions to be drawn about an applicant’s personality may therefore fall outside Section 87(1)(6) Works Council Constitution Act, as long as internal applicants, i.e. existing employees, are not also assessed with the tool.
Under Section 80(3) Works Council Constitution Act, the works council may, with the employer’s prior consent, consult outside experts to assist it in performing its duties. With the Works Council Modernization Act, Section 80(3) sentence 2 Works Council Constitution Act now deems the consultation of an expert necessary whenever the works council has to assess the introduction or application of AI in order to carry out its responsibilities. In this situation, the examination of necessity that normally precedes the consultation of experts is therefore dispensed with. The expert must nevertheless be qualified, and the costs must be reasonable. The same applies where the employer and the works council agree on a permanent expert for these issues.
Although concerns are voiced in the literature, under Federal Labor Court case law the works council has no right of initiative to demand the implementation of a technical device; such a right would exceed the scope of co-determination. The Federal Labor Court concludes from Section 87(1)(6) Works Council Constitution Act that the right of co-determination is primarily defensive in character. The works council may, at best, recommend the implementation of AI. Conversely, if the company intends to eliminate an AI system, it may proceed even without the works council’s agreement.
Works agreements can (and should) govern the works council’s participation in the introduction of AI, especially given how quickly AI is developing and being adopted by businesses in the absence of specific legislation. Since the right of co-determination is not subject to a materiality threshold, it may be advisable for the employer and the works council to agree on a framework works agreement with general rules, supplemented by application-specific individual agreements, to speed up the approval process.