Schalast | AI in Healthcare & Life Sciences

AI is currently at the centre of attention in the healthcare and life sciences sector. It is predicted to play a key role at all levels in the future. AI is expected to support doctors in diagnosis and therapy and to help individuals manage their health and illnesses. It is expected to take the medical and pharmaceutical industries to a new level in research and development. It is seen as a "revolutionary beacon of hope". Some even say that the industry is on the verge of a "reinvention".

The number of "digital health" / "digital medicine" apps, which are designed to make it possible to "track" vital signs and even symptoms ("self-tracking") regardless of time and place, is increasing rapidly. The data collected in this way can provide important insights for diagnoses and therapies through the use of AI if it is forwarded to the relevant systems - such as those of treating doctors or hospitals. AI is thus becoming a digital assistant.

An AI-based app that gives patients "treatment instructions" is no longer science fiction, but is increasingly becoming reality. For example, based on a patient's blood glucose level, which it receives automatically from the patient's blood glucose meter, an app can make customised nutritional suggestions together with an appropriate insulin dose. Furthermore, AI applications based on deep learning are already established in the evaluation of imaging procedures (e.g. CT, MRI) and support doctors in routine tasks; in some of these narrowly defined tasks, AI now matches or even surpasses the human eye. In the near future, mobile image-guided AI diagnostic systems (so-called "Image-Guided Medical Diagnosis") will allow remote diagnoses. Work is also underway on robot-assisted tele-surgery, which would make it possible to perform operations via the internet regardless of location, with AI analysing surgical data from multimodal sources in real time.
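To make the data flow described above more tangible, the following minimal Python sketch shows the kind of rule such an app might apply to a glucose reading. It is purely illustrative: the thresholds, the correction factor and the function names are our own assumptions and do not reflect any real product or clinical algorithm.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    message: str
    insulin_units: float  # hypothetical correction dose, for illustration only

def suggest(glucose_mg_dl: float, target_mg_dl: float = 120.0,
            correction_factor: float = 50.0) -> Suggestion:
    """Map a blood glucose reading to a simple suggestion.

    correction_factor: assumed mg/dL lowered per unit of insulin (patient-specific in reality).
    """
    if glucose_mg_dl < 70:
        return Suggestion("Low glucose: take fast-acting carbohydrates.", 0.0)
    if glucose_mg_dl <= 180:
        return Suggestion("Glucose in range: no correction needed.", 0.0)
    # Above range: compute a naive correction dose.
    units = round((glucose_mg_dl - target_mg_dl) / correction_factor, 1)
    return Suggestion("High glucose: consider a correction dose and a low-carb meal.", units)

print(suggest(240))  # Suggestion(message='High glucose: ...', insulin_units=2.4)
```

Even a rule set as trivial as this one informs a therapeutic decision, which is precisely why such software falls under the regulatory regime discussed below.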

However, this poses considerable legal challenges:

AI software as a medical device

If AI software is used for medical purposes, it constitutes a medical device, regardless of whether the AI is merely an integrated part of a medical device (so-called embedded software) or an independent medical device in its own right (so-called stand-alone software), for example in the form of an app.

AI software that serves medical purposes is subject to the requirements of the Medical Device Regulation (EU) 2017/745 (MDR). Art. 2 No. 1 MDR expressly covers software and focuses on the medical purpose, which will typically be fulfilled in the case of AI-supported eHealth apps. The intended purpose is determined by the manufacturer of the app and in turn determines the risk class to which the app is assigned under the MDR. The MDR recognises four risk classes (I, IIa, IIb, III), based on the risk the application poses to the user. Pure lifestyle apps that, for example, merely track fitness data are not medical devices.

The app is therefore assigned to a risk class, which determines the so-called conformity assessment procedure before a state-authorised body known as a "notified body" (e.g. TÜV Nord/Süd, DEKRA, etc.). These bodies carry out tests and assessments and then issue the CE certification for the medical device. When determining the risk class, the main question is what potential harm to health the software or app could cause. With the entry into force of the MDR, the assignment to risk classes was tightened; in addition, the rule applies that, in case of doubt, the higher risk class must be assumed. Rule 11 in Annex VIII of the MDR stipulates that software used for decision-making in therapy and diagnostics, or that monitors physiological processes, is subject to at least risk class IIa. Under a narrow legal interpretation of the MDR, hardly any "AI medical app" therefore still has a chance of being categorised in risk class I.
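The decision logic of Rule 11 can be illustrated with a simplified, non-authoritative Python sketch. The function and parameter names are our own shorthand for the wording of the rule and omit its nuances; it merely shows why class I is rarely available for decision-supporting software.

```python
from enum import Enum

class Impact(Enum):
    NONE = "no relevant harm"
    SERIOUS = "serious deterioration of health or surgical intervention"
    CRITICAL = "death or irreversible deterioration of health"

def classify_software(informs_diagnosis_or_therapy: bool,
                      worst_case_impact: Impact,
                      monitors_physiological_processes: bool,
                      monitors_vital_parameters: bool) -> str:
    """Return the (minimum) MDR risk class suggested by Rule 11, Annex VIII MDR."""
    if informs_diagnosis_or_therapy:
        if worst_case_impact is Impact.CRITICAL:
            return "III"
        if worst_case_impact is Impact.SERIOUS:
            return "IIb"
        return "IIa"  # default for decision-supporting software
    if monitors_physiological_processes:
        return "IIb" if monitors_vital_parameters else "IIa"
    return "I"  # all other software

# Example: an app that suggests an insulin dose informs a therapeutic decision,
# and a wrong dose can cause serious harm -> at least class IIb.
print(classify_software(True, Impact.SERIOUS, False, False))  # IIb
```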

The MDR in practice using the example of the so-called DiGAs

In practice, the picture is different. For example, some of the so-called digital health applications (DiGAs), which are reimbursable by the statutory health insurance funds as medical devices of risk classes I and IIa, are assigned to risk class I, although a narrow interpretation of the wording of the law would actually lead to risk class IIa. Practice therefore shows that there is more room for manoeuvre than the MDR initially suggests. DiGAs are particularly attractive for doctors and patients because they can be prescribed ("app on prescription"), which start-ups and established app manufacturers should keep in mind. The manufacturer must prove by means of an evidentiary study that the app has real "efficacy" and thus a benefit for care; the app is then authorised by the Federal Institute for Drugs and Medical Devices (BfArM) in the so-called fast-track procedure and listed in the DiGA directory at the BfArM, which currently contains 55 authorised DiGAs (as of 11/2023).

Effects of the draft AI Regulation (AIA-E) on medical devices

In parallel to the Council of Europe's Committee on Artificial Intelligence, which is working on a legally binding instrument for the regulation of AI, the European Commission has presented the draft Artificial Intelligence Act (COM (2021) 206 - "AIA-E"). The relationship between the AIA-E, the MDR and the supplementary national provisions has not yet been fully clarified. The definition of AI in the draft AI Regulation is likely to be far too broad for medical devices. The AIA-E distinguishes four different risk categories. Under the AIA-E, medical devices are high-risk AI systems if they must undergo a conformity assessment procedure by a notified body in accordance with the MDR, which is the case for all AI medical apps of risk class IIa or higher. AI-controlled medical apps in risk class IIa or above therefore always constitute a "high-risk AI system" within the meaning of the AIA-E.

A large number of AI-based medical devices will therefore qualify as "high-risk AI systems". The categorisation under the AIA-E is thus independent of the actual risk posed when the medical device is used; in this respect, the MDR and the draft are not harmonised.
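The interplay described above can be reduced to a rough rule of thumb. The following sketch is our own simplification; it deliberately ignores special cases such as class I devices that also require notified-body involvement (e.g. sterile devices or devices with a measuring function).

```python
def is_high_risk_ai_system(mdr_risk_class: str, is_ai_based: bool) -> bool:
    """Simplified mapping: AI-based medical device + notified-body conformity
    assessment under the MDR -> high-risk AI system under the AIA-E."""
    needs_notified_body = mdr_risk_class in {"IIa", "IIb", "III"}
    return is_ai_based and needs_notified_body

print(is_high_risk_ai_system("IIa", True))  # True  -> high-risk AI system
print(is_high_risk_ai_system("I", True))    # False -> outside the high-risk regime
```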

In contrast to the MDR, the AIA-E does not primarily use the term "manufacturer", but rather that of "provider". The AIA-E stipulates that the manufacturer assumes the obligations of a provider if it places the AI system on the market under its own name. According to the AIA-E, high-risk AI systems must also undergo their own conformity assessment procedure. The AIA-E thus imposes new requirements that go beyond the MDR. This will lead to a considerable additional burden and increased costs for app developers in the conformity assessment procedure, because the increased obligations of the AIA-E must be fulfilled in addition to the already high requirements of the MDR. Players should therefore ensure in good time that their AI-based app fulfils the increased safety requirements of the upcoming AI Regulation and adapt their compliance systems accordingly.

Liability for AI medical devices

This raises the question of who is actually liable for an AI medical device that has been developed with the involvement of numerous stakeholders on its way to being used in humans. The MDR itself does not contain any liability provisions; however, Section 31 MDR refers to the Product Liability Directive (PLD) and thus to the (still applicable) German Product Liability Act ("ProdHaftG"). In principle, product liability and tort liability under German law, in conjunction with standards from the MDR, come into consideration. Liability in tort includes so-called fault-based producer liability for design, manufacturing and instruction errors or for the breach of product monitoring obligations. In contrast, Section 1 of the German Product Liability Act provides for non-contractual, strict liability in the event that a defective product causes death, personal injury or damage to property other than the defective product itself.

The EU draft of the new Product Liability Directive (COM/2022/495 - PLD-E) also covers all products in the healthcare and life sciences sector and thus all products that use new digital (AI) technologies. For medical devices, however, the clarification in Art. 4 No. 1 of the draft is not an innovation: the MDR already treats software as a medical device.

The EU Commission's draft "Directive on AI Liability" (COM/2022/496), which supplements the AI Regulation, complements the strict liability regime of the Product Liability Directive with a civil law framework for non-contractual claims for damage caused by AI systems. In practice, however, the drafts of the AI Regulation and the AI Liability Directive are likely to be less relevant than the new Product Liability Directive.

For high-risk medical devices, the European Court of Justice (ECJ) ruled on 5 March 2015 (Joined Cases C-503/13 and C-504/13) that the defectiveness of all devices of the same model can be inferred from a (potentially) defectively produced medical device of that model (in that case: pacemakers and implantable cardioverter defibrillators). It is likely that this case law will be transferred to AI medical devices.

AI in Pharma

Pharmaceutical companies expect AI technologies to accelerate the development of medicines and make it more cost-effective. The field of personalised medicine is seen as a significant future area of application for AI software. For example, AI software that is specifically "trained" on a particular drug therapy can support doctors in selecting the combination of active ingredients and the dosage. AI is already being used to design new drugs ("in silico molecular modelling"). At the start of drug development, the chemical structure of a potential active ingredient molecule is compared with known molecules in computer simulations to predict what effects the substance is likely to trigger in the body. Pharmaceutical manufacturers are also already using AI to make their supply chains more resilient.
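The basic idea of such similarity-based in silico screening can be illustrated with a toy Python sketch. The molecule names and structural features below are invented for illustration; real pipelines rely on cheminformatics toolkits and trained models rather than hand-written feature sets.

```python
def tanimoto(a: set[str], b: set[str]) -> float:
    """Jaccard/Tanimoto similarity of two feature (fingerprint) sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Known active compounds described by simple structural features (made up).
known_actives = {
    "compound_A": {"aromatic_ring", "carboxylic_acid", "halogen"},
    "compound_B": {"aromatic_ring", "amine", "sulfonamide", "nitro"},
}

# Candidate molecule to be screened against the known actives.
candidate = {"aromatic_ring", "carboxylic_acid", "amine"}

ranking = sorted(
    ((name, tanimoto(candidate, feats)) for name, feats in known_actives.items()),
    key=lambda item: item[1],
    reverse=True,
)
print(ranking)  # [('compound_A', 0.5), ('compound_B', 0.4)]
```

The higher the similarity to known actives, the more plausible a comparable effect in the body appears, which is the heuristic that such screening exploits before any laboratory work begins.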

As a result, the question arises as to whether AI software used to develop new medicinal products requires its own authorisation under pharmaceutical law. Some argue that the software should be authorised as part of the medicinal product. However, the legal issues in this area have not yet been clarified.

Conclusion

The requirements imposed by the various legal provisions on AI systems in the healthcare and life sciences sector are rightly high, as they are intended to protect life and health. The "disharmony" between existing and upcoming regulations pointed out above further increases complexity. This makes it all the more important to involve legal experts at an early stage of the development process in this highly regulated area.

Read more about our Healthcare & Life Sciences topics here.