How the AI Act could unintentionally impact access to healthcare

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of Euractiv Media network.


[Credits: MedTech Europe]

Artificial Intelligence (AI) promises to support humans daily, assist with routine tasks, and advance human knowledge. It has great potential to improve patient outcomes and healthcare systems.

Alexander Olbrechts is Director of Digital Health at MedTech Europe.

AI solutions are found in wearables and imaging devices that help predict heart failure, in continuous glucose monitoring, in skin cancer self-scanning solutions, and in personalised apps that provide ad-hoc counselling to help change health-related behaviour. AI is also used in robot-assisted surgery and to optimise clinical trials by using real-world data in patient recruitment. A recent study commissioned by MedTech Europe showed that AI in healthcare could save around 400 000 lives and up to €200 billion annually in Europe.

MedTech Europe represents the voice of the medical technology industry, including diagnostics, medical devices and digital health. The industry envisions an environment in which safe, high-quality, and trustworthy AI in medical technologies improves healthcare and patient outcomes, and a regulatory landscape that supports access to safe medical devices, including AI-enabled ones.

Alignment between the AI Act and sectoral legislation is critical to facilitate continued access to innovative healthcare

AI-enabled medical technologies have been placed safely on the EU market for years, a process accelerated by the introduction of the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation (MDR/IVDR). As product legislation, these regulations include a robust set of requirements that ensure the safety of medical devices and in vitro diagnostics for patients and healthcare professionals, while fostering innovation in such devices’ AI components. They are part of the New Legislative Framework, the EU’s blueprint for product-safety legislation.

Under the proposed AI Act, medical devices or in vitro diagnostic medical devices that are themselves an AI system, or that use an AI system as a safety component, would be subject to both the MDR/IVDR and the AI Act. The Act will therefore determine whether and how new AI-enabled medical technologies are placed on the market and reach hospitals and patients.

Creating a level playing field for the actors involved in the healthcare and AI ecosystem is crucial to improving health systems throughout Europe. Coherent rules and consistency across the applicable EU laws are essential to avoid fragmentation, duplication, or even conflicting requirements. The aim should instead be to provide the necessary technical information to authorities as coherently and transparently as possible.

If the above is not adequately addressed, the EU may add further challenges to an already burdensome regulatory framework, contributing to shortages of medical devices across Europe. This issue was raised at the meeting of Health Ministers on 9 December 2022 and in the European Commission’s subsequent proposal to amend the MDR. Conflicting regulatory requirements may prevent manufacturers within the EU from delivering innovative solutions to European patients and healthcare systems.

MedTech Europe calls for coherent definitions and requirements to ensure patient safety

MedTech Europe welcomes the AI Act’s stated goal of facilitating innovation while ensuring the protection of fundamental rights. However, clarity is needed across multiple areas if the Act is to deliver benefits for health outcomes and healthcare systems:

Under the proposal’s risk-based approach, all AI-enabled medical technologies would be considered high-risk because they must undergo conformity assessment under the MDR/IVDR. Yet the same technologies might not be categorised as high-risk under that sectoral legislation: self-scanning solutions, for example, may be classified as medium-risk under the MDR but considered high-risk under the AI Act.

The Act introduces definitions that would conflict with those established in the MDR/IVDR. As expressed in an industry statement last week, the areas needing refinement include an overly broad definition of AI systems that inadvertently treats all software in medical technology as AI, which is not the case. In addition, the term ‘risk’ is used inconsistently, and definitions of ‘risk’ and ‘harm’ are missing altogether, even though both are essential for qualifying and implementing the risk-based approach.

The requirements of the AI Act, like those of the MDR/IVDR, are intended to be supported by harmonised standards. Introducing horizontal standards will create further misalignment, as such standards will not account for the sectoral specifics of the medical technology industry, resulting in standards incompatible with those of the MDR/IVDR. These and other issues were already voiced by a range of industry stakeholders back in September. The message is clear: the AI Act needs to be properly integrated into the existing regulatory environment.

There are also inconsistencies in the requirements for high-risk AI systems, such as data and data governance, technical documentation and human oversight. Whether any data set can be sufficiently representative or free of bias is debatable; in practice, data sets in healthcare can be constrained by the omission of features such as gender and age due to anonymisation requirements.

Data selection and omission in AI require a regulatory approach that balances privacy with healthcare innovation. All stakeholders in the healthcare sector should be able to influence and optimise the completeness of data sets in any AI use case that seeks to improve healthcare and healthcare delivery. Both the AI Act and the sectoral legislation require providers to submit technical documentation; when the two legislative texts are analysed, their requirements overlap or conflict, likely resulting in the need to submit two separate sets of documentation.

The AI Act must ensure balanced human oversight and intervention, as excessive human interference and a poor understanding of human factors could negatively impact the benefit-risk ratio of medical devices.

Most importantly, conflicting requirements are imposed on conformity assessment, quality management systems, notifying authorities and notified bodies, all of which are already regulated under the MDR/IVDR. This risks creating two parallel systems, one applicable to the AI component of a device and the other to its medical device or IVD component, potentially further delaying the placing of AI-enabled medical products on the market.

MedTech Europe urges Member States and EU decision-makers to consider carefully the proposal’s impact on national healthcare systems, and to engage with the broad group of stakeholders in this field, so that the final regulation allows the potential benefits of AI-enabled medical technologies to reach patients and healthcare professionals.
