Keeping an eye on the AI Act's potential effects on the healthcare sector
Publication date: 20 December 2023
// Key Takeaways //
The adoption of the AI Act, now pending joint approval by the European Parliament and the Council of the EU, is expected to have a transversal impact on research and innovation processes in the healthcare sector, as well as on the industry's business projections
The AI Act will be driven by a risk-based approach aimed at mitigating potential harm to fundamental rights while leaving room for innovation
Given the healthcare sector's sensitivity to the prospects and risks of AI technology, it will be imperative to understand the interplay between the AI Act and the other regulatory instruments related to data governance, medical products and healthcare provision in general
The case of AI medical devices and built-in discriminatory biases illustrates the tension between the opportunities for innovation and the challenges that AI implementation and its regulation bring to healthcare delivery
Despite the progress announced towards the AI Act's adoption, many questions remain about the harmonisation and alignment of terminological definitions, and about the interaction of obligations and compliance assessment mechanisms across regulations
The balance between the protection of fundamental rights and product safety, on the one hand, and support for technological innovation, development and deployment, on the other, has been at the heart of the discussion on European artificial intelligence (AI) regulation from the outset (European Commission White Paper on AI, 2020). In this vein, the upcoming adoption of the AI Act is expected to have a transversal impact on research and innovation processes in the healthcare sector, as well as on the business projections of the industry.
Although it is still too early, and too many issues remain open, to pass judgment on the forthcoming rules, at PredictBy we are aware of the challenges that the AI Act's adoption and implementation will entail. As part of our mission to enhance decision-making in the face of digitalisation, we look forward to contributing to the construction of knowledge and practical solutions for integrating AI in health in a way that is both safe and sustainable.
A POLITICAL AGREEMENT HAS BEEN REACHED
After three hectic days of negotiations (6-9 December 2023), the fifth and final Artificial Intelligence Act trilogue between the Council of the EU, the European Parliament and the European Commission ended with a "political agreement" to advance the long-awaited European regulation of AI. Although the final text is not yet known, the AI regulation proposal initially launched in 2021 by the European Commission is thus set to reach its final stage, pending formal joint approval by the Parliament and the Council. According to the chair of the trilogue (Carme Artigas, the Spanish government's Secretary of State for Digitalisation and Artificial Intelligence), the final step is expected in the first months of 2024, before the Parliament is dissolved ahead of the EU elections to be held next June.
If the text of the norm is finally enacted in line with the agreed adjustments to the original proposal, the AI Act will be driven by a risk-based approach to mitigate the potential harm that AI products may pose to fundamental rights. To this end, the regulation will be structured around a categorisation scheme that ranks AI systems into several risk categories with differentiated and incremental obligations.
The final draft and the fine print of the definitions of the different categories and obligations remain to be seen (the devil is in the details!), but, as reported by the European Parliament and the Council, absolute bans will apply to applications or systems that pose an "unacceptable risk", such as those that perform biometric categorisation based on sensitive data (e.g., race, political opinions, religion), untargeted scraping of images for facial recognition, emotion recognition in workplaces or educational settings, behavioural manipulation, or exploitation of vulnerabilities (e.g., physical or socio-economic). To identify a second type of case (those considered "high risk"), a series of filters will be established based on the potential damage to citizens' rights and the sensitivity of the domain concerned. So far, the press reports released have (non-exhaustively) cited as "sensitive" those applications that may affect health, safety, the environment, democracy and the rule of law, in areas such as education, employment, public services or law enforcement.
Remarkably, negotiators agreed to introduce a novel requirement to conduct a "fundamental rights impact assessment", which is expected to be mandatory for public or private entities that provide essential public services (such as hospitals, schools or banks) and intend to deploy an AI system. It was also agreed to impose general transparency obligations on General Purpose AI (GPAI) systems, a concept that encompasses the so-called 'foundation models', together with more stringent requirements for those models that exceed a certain impact threshold and pose systemic risks.
For the purposes of implementing the regulation, a multi-level governance architecture has been agreed upon, combining controls at the European level (with coordination and systemic-risk management functions) and at the national level. Penalties for non-compliance would reach significant amounts, set either as a percentage of the company's turnover or as a fixed sum, whichever is higher. The AI Act is expected to come into force two years after its enactment, a period reduced to only six months for the provisions on banned applications, i.e., those of unacceptable risk.
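To make the "whichever is higher" mechanism concrete, the short Python sketch below computes a penalty under that logic; note that the 7% rate and the EUR 35 million fixed sum used here are purely hypothetical placeholders, not figures confirmed in any published text.

```python
# Minimal sketch of the "whichever is higher" penalty logic described above.
# The rate and the fixed amount are HYPOTHETICAL placeholders, not confirmed figures.

def estimated_penalty(global_turnover_eur: float,
                      turnover_rate: float = 0.07,            # hypothetical rate
                      fixed_amount_eur: float = 35_000_000    # hypothetical fixed sum
                      ) -> float:
    """Return the higher of the turnover-based and the fixed penalty."""
    return max(turnover_rate * global_turnover_eur, fixed_amount_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70m) prevails over EUR 35m:
print(estimated_penalty(1_000_000_000))  # 70000000.0
```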
Finally, substantial changes are reported in the provisions specifically aimed at supporting innovation. Negotiators announced an intention to reduce the burden of compliance requirements for low-risk applications and for SMEs, as well as to encourage controlled deployment by regulating sandboxes and mechanisms for testing in real-world conditions. Details remain to be disclosed.
AI ACT AND INTERPLAYING REGULATORY FRAMEWORKS IN HEALTHCARE
The sensitivity of the healthcare sector in relation to data governance, the protection of fundamental rights, and public and private decision-making is well known. While the AI Act proposal mentions "healthcare" and the "health sector" only once each as potential beneficiaries of AI implementation (AI Act proposal, Recitals 3 and 28), the risks linked to potential harm to "health" are mentioned repeatedly. Given the far-reaching impact that AI technologies and their regulation are expected to have on clinical practice, hospital management, the development of medical products and the handling of public health emergencies, among many other issues, it is essential to start assessing the possible scenarios that the ecosystem's stakeholders will have to deal with.
Indeed, the incorporation of AI applications into the healthcare industry raises the bar for regulatory scrutiny, as the different requirements and types of risk that regulations cover overlap and complement each other. At the same time, the use of the technology presents a sea of opportunities to innovate and to improve practices and services. In this evolving context, there will be many implementation and regulatory harmonisation challenges to face. It will soon be imperative to understand the interplay between the AI Act and the other regulatory instruments related to data governance, medical products and health in general (such as the GDPR, MDR, IVDR, EHDS, Data Governance Act and Data Act, as well as local regulations), as both the industry and the competent authorities will certainly need to adjust their development, oversight and implementation capacities and processes.
THE CASE OF AI MEDICAL DEVICES AND BIAS: A PARADIGMATIC EXAMPLE
As anticipated among the key takeaways, the case of AI medical devices and built-in discriminatory biases illustrates the tension between the opportunities for innovation and the challenges that AI implementation and its regulation bring to healthcare delivery.
Interest in AI medical devices is growing steadily, as is their deployment (see, for example, the FDA's list of AI/ML-enabled medical devices here). At the same time, the use of these devices raises ethical and legal issues deriving, for example, from the automated reproduction of discriminatory biases embedded in the training data or in the design of the algorithm. In such a scenario, the EU regulatory framework needs to balance its interplaying objectives of protecting citizens against discrimination, improving healthcare provision and ensuring the proper functioning of the internal market. A simple illustration of how such a bias can surface follows below.
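By way of a hedged illustration, the Python sketch below compares a diagnostic model's false-negative rates across demographic subgroups, one common way in which embedded bias becomes visible; all data, group names and figures are hypothetical.

```python
# Minimal sketch: comparing false-negative rates of a (hypothetical) diagnostic
# model across demographic subgroups. All names and data are illustrative.
from collections import defaultdict

# (subgroup, true_label, predicted_label) triples, e.g. from a validation set
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

positives = defaultdict(int)
misses = defaultdict(int)
for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        if pred == 0:
            misses[group] += 1  # a missed diagnosis (false negative)

for group in sorted(positives):
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.2f}")

# A large gap between subgroups (here 0.33 vs 0.67) would flag a potential
# discriminatory bias requiring mitigation before deployment.
```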
To balance these objectives, and at least until the enactment of the AI Act, the main instrument for assessing AI medical devices in Europe is the Medical Devices Regulation (MDR), which establishes four classes of medical devices along a risk scale. While AI systems can be integrated as software into a principal medical device or constitute a (software-based) medical device in themselves, under the current regulatory understanding what determines whether a device falls into the Medical Device Software (MDSW) category, and which rules consequently apply, is its intended purpose (MDCG 2019-11, Guidance on Qualification and Classification of Software in MDR and IVDR). In fact, under this guidance, most products that use AI, whether in an accessory or a primary role, correspond to the MDR's higher risk classes IIa, IIb or III (MDR, Annex VIII, Ch. III, Rule 11) and are therefore subject to a specific review regime before entering the European market. This pre-market assessment regime poses several challenges for the detection and mitigation of bias and for the interplay of the applicable legal frameworks, as the simplified classification sketch below suggests.
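As a rough, non-authoritative sketch of how Rule 11's intended-purpose logic cascades into risk classes, consider the simplified function below; it compresses the rule considerably and is not a compliance tool.

```python
# Simplified, non-authoritative sketch of the MDR Annex VIII Rule 11 logic for
# software: the class depends on the intended purpose and on the severity of
# the decisions the software informs. Real classification requires legal review.

def classify_mdsw(informs_clinical_decisions: bool,
                  may_cause_death_or_irreversible_harm: bool = False,
                  may_cause_serious_harm: bool = False,
                  monitors_vital_parameters: bool = False) -> str:
    if informs_clinical_decisions:
        if may_cause_death_or_irreversible_harm:
            return "Class III"
        if may_cause_serious_harm:
            return "Class IIb"
        return "Class IIa"
    if monitors_vital_parameters:
        return "Class IIb"  # vital-parameter monitoring where variations are dangerous
    return "Class I"        # all other software (residual rule)

# An AI triage aid whose errors could lead to a serious deterioration of health:
print(classify_mdsw(True, may_cause_serious_harm=True))  # Class IIb
```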
The review process is conducted by private notified bodies, which carry out the conformity assessment (CE marking) intended to ensure product safety (which, among many other issues, encompasses the detection of unlawful biases). As an MDSW, an AI medical product is thereby subject to a more thorough review of documentation and clinical evaluations (van Kolfschooten, 2023). Interestingly, it is in this process that, for instance, interactions and tensions with the data protection regulation (GDPR) arise, as the latter becomes applicable due to the personal and sensitive nature of the data used in the clinical trials under review.
At this point, notified bodies face two problems: the interaction between the legal provisions on product safety and the privacy of the personal data used, and the increasing complexity of the algorithms embedded in medical devices. These parallel challenges could strain notified bodies' capabilities and create negative incentives affecting approval and implementation circuits.
The entry into force of the AI Act will likely impact these processes for AI medical devices as well. For instance, the AI Act proposal (as currently drafted) assumes a cross-cutting role and provides solutions for the overlap between its "risk" categories and those provided for in the MDR (Art. 6 and Annex II, AI Act proposal). In practice, according to the available text, medical devices in MDR risk classes IIa or higher will have to comply with the rules stipulated by the AI Act for the "high risk" category, which is expected to include data quality and human oversight obligations aimed at mitigating bias. For the same purpose, the proposed AI Act also provides for an exception to Art. 9 GDPR that would allow supervisory bodies to analyse algorithms trained on sensitive information (subject to the implementation of other safeguards).
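Reading that cross-reference together with the classification sketch above, the overlap can be summarised as a one-line mapping; again, this is a simplification of the proposal's Art. 6 / Annex II mechanism, not a statement of the final law.

```python
# Simplified mapping of the overlap described above: MDR classes subject to
# third-party conformity assessment (IIa and above) would also be treated as
# "high risk" under the AI Act proposal. Illustrative only.

def ai_act_risk_tier(mdr_class: str) -> str:
    high_risk_classes = {"Class IIa", "Class IIb", "Class III"}
    return "high risk" if mdr_class in high_risk_classes else "not high risk under this rule"

print(ai_act_risk_tier("Class IIb"))  # high risk
```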
Many questions remain regarding the alignment of certain terminological definitions and the interaction of obligations and compliance assessment mechanisms across the different regulations. It also remains to be seen how the technical limitations of supervisory bodies in dealing with large volumes of sensitive data and complex, constantly updated technology will be addressed. At the same time, the well-advanced European Health Data Space (EHDS) initiative is an example of how European policymakers expect to contribute to the creation of a secure, high-quality health data environment that (among many other things) facilitates the mitigation of bias in the development and deployment of medical devices. In any event, the expected regulatory updates will generate new possibilities and new needs to work on in order to drive innovation in the sector and enhance healthcare service delivery.
Related published projects:
EUMEPLAT: European Media Platforms: Assessing Positive and Negative Externalities for European Culture
GATEKEEPER: Connecting healthcare providers, businesses, entrepreneurs, and elderly citizens through an innovative platform to improve healthcare provision
Written by: Lucas Segal