Introduction
The European Council recently gave the green light to the EU AI Act, marking a significant milestone in the regulation of artificial intelligence (AI) across Europe. The Act is expected to be published in the EU’s Official Journal in the coming days and will enter into force 20 days after publication. Most of its provisions will become mandatory two years after entry into force, although certain provisions take effect sooner, such as the prohibitions on unacceptable-risk practices (after six months) and the obligations for general-purpose AI models (after 12 months).
Given the urgency and impact of this new regulation, it is crucial to understand the key aspects and prepare accordingly. This article provides an overview of the EU AI Act, focusing on high-risk AI systems, their specific implications for the health sector, and the immediate actions stakeholders should consider.
Overview of the EU AI Act
The EU AI Act is set to be the most comprehensive AI regulatory framework globally, applying across various sectors. Its primary objective is to ensure AI systems are developed and used in a manner that is safe, transparent, and respects fundamental rights. Key highlights include:
- Risk-based classification: AI systems are classified into four risk categories: minimal, limited, high, and unacceptable, with corresponding obligations increasing with the level of risk.
- Penalties for non-compliance: Failure to comply can result in significant financial penalties, up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
- Establishment of a European AI Board: This body will oversee the harmonised implementation of the Act across Member States.
High-risk AI systems
High-risk AI systems are subject to the strictest regulatory requirements. These systems include those used in critical sectors like healthcare, transportation, and public safety. Compliance requirements for high-risk systems include:
- Risk management: Establishing a comprehensive risk management framework.
- Data governance: Ensuring data quality, integrity, and security.
- Technical documentation: Maintaining detailed records of the AI system’s functionality and compliance.
- Transparency: Providing clear information about the AI system’s capabilities and limitations.
- Human oversight: Implementing measures to ensure appropriate human control and intervention.
Health sector considerations
The health sector is significantly impacted by the EU AI Act, particularly due to the inclusion of several high-risk use cases. These include AI systems for biometric categorisation, determination of eligibility for healthcare, and emergency patient triage.
Key impact areas
- Healthcare eligibility determination: AI systems evaluating eligibility for healthcare services are classified as high-risk, necessitating stringent compliance measures.
- Biometric categorisation: Systems used for biometric categorisation based on sensitive attributes must comply with high-risk AI requirements.
- Emergency patient triaging: AI systems prioritising emergency healthcare services will also fall under high-risk regulations.
In addition, the European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) will begin preparing to support the implementation of the EU AI Act in 2024. As part of their multi-annual AI work plan, the two agencies will begin developing guidance this year on the use of AI across the medicines lifecycle, including in specific domains such as pharmacovigilance. They will also establish an AI Observatory.
Medical Devices Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR)
AI systems classified as medical devices under MDR or IVDR must undergo third-party conformity assessments. The EU AI Act integrates these requirements, ensuring that AI systems used as medical devices comply with both the AI Act and existing medical device regulations.
Research and Development (R&D) considerations
The EU AI Act provides certain exemptions for AI systems used exclusively for scientific research and pre-market product development. These exemptions are designed to foster innovation while ensuring that real-world testing of high-risk AI systems adheres to stringent safety and compliance protocols.
Immediate actions for stakeholders
To prepare for the EU AI Act, stakeholders in the health sector should:
- Assess AI systems: Evaluate current and planned AI systems to determine their risk classification and compliance requirements.
- Develop compliance frameworks: Establish or update risk management, data governance, and technical documentation processes to meet the Act’s requirements.
- Engage with regulatory bodies: Stay informed about further guidance and engage with regulatory bodies to ensure alignment with compliance timelines and obligations.
- Plan for conformity assessments: Prepare for third-party conformity assessments where applicable, particularly for AI systems classified as high-risk under MDR and IVDR.
The EU AI Act represents a significant regulatory shift for the development and deployment of AI systems, particularly in the health sector. By understanding and proactively addressing the Act’s requirements, stakeholders can ensure compliance, mitigate risks, and continue to innovate safely and effectively. The immediate focus should be on assessing existing systems, establishing robust compliance frameworks, and staying engaged with ongoing regulatory developments.
If you would like to learn more about high-risk AI systems, the EU AI Act, and its implications for the health sector, please contact Trey Flowers at [email protected].