Artificial Intelligence (AI) has catalyzed transformative changes across sectors, with healthcare and life sciences being no exception. However, the rapid pace of AI innovation has also sparked significant concerns about the ethical implications and potential risks associated with its deployment and use for consumers in the European Union (EU). The EU has proactively addressed these issues by introducing the EU AI Act, the first comprehensive legislation of its kind aimed at regulating AI technologies.

This article explores the Act’s implications on healthcare and life sciences.

Risk-based Classification of AI Systems

The EU AI Act classifies AI systems based on the risk they pose and introduces stringent guidelines to ensure their ethical and secure use. It categorizes AI systems into four risk levels: unacceptable, high, limited and minimal.

Unacceptable Risk

AI systems that significantly threaten safety, livelihoods and rights are banned outright. Examples include AI systems used for social scoring, which could entrench racial bias in access to healthcare.


High Risk

These systems require rigorous oversight and compliance with strict standards, including data quality, transparency and human oversight. High-risk AI applications in healthcare could involve biometric identification and diagnosis support systems.


Limited Risk

Systems such as chatbots that engage with patients fall under this category. While less critical than high-risk applications, they carry transparency obligations: users must be informed that they are interacting with an AI system.


Minimal Risk

Minimal-risk applications include spam filters and other low-impact systems. These face no new mandatory obligations under the Act, though voluntary codes of conduct are encouraged.

The EU AI Act places particular emphasis on Generative AI (Gen AI) due to its broad capabilities and the risks associated with its misuse. Depending on its application, Gen AI can fall anywhere from the limited- to the high-risk category. For instance, a generative model used to provide medical advice could be classified as high risk, requiring stringent oversight and compliance.
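The tiered scheme above can be pictured as a simple triage step at the start of any AI governance process. The sketch below is illustrative only: the use-case names and the mapping are hypothetical examples drawn from this article, not a legal classification, and a conservative "default to high risk" policy is a design choice rather than anything the Act mandates.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict oversight and conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new mandatory obligations

# Hypothetical mapping for illustration only; real classification
# requires legal analysis of the Act's definitions and annexes.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "diagnosis_support": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
    "patient_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known use case.

    Unknown use cases default to HIGH so they get human review --
    a conservative policy choice, not a requirement of the Act.
    """
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)

print(triage("patient_chatbot").value)     # limited
print(triage("novel_imaging_tool").value)  # high (conservative default)
```

A triage step like this is useful mainly as an inventory tool: it forces an organization to enumerate its AI use cases before deciding which ones need formal legal assessment.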

Implications for Healthcare and Life Sciences

AI has wider implications in the healthcare and life sciences industries. In healthcare, there are multiple use cases, including patient diagnosis, remote monitoring, chatbots and care recommendations. The life sciences industry also benefits from AI, with applications in drug discovery, research and development, and post-commercialization activity.

These industries have witnessed billions in investment directed toward AI-based tools and companies. Any AI system utilized for diagnosis, evaluation, treatment and monitoring that leverages vast amounts and diverse types of data will be classified as a “high-risk” system. Medical devices must already comply with existing regulations such as the Medical Devices Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR), and the new AI Act’s requirements will be part of the assessment.

Data Quality and Transparency

The EU AI Act mandates strict guidelines on data quality, ensuring that AI systems in healthcare and life sciences are trained on accurate, unbiased and high-quality data. This is crucial for applications like clinical decision support systems, which rely on patient data to provide accurate diagnoses and treatment recommendations. Research and development in life sciences will increasingly leverage vast amounts of data for drug discovery and clinical trials. The Act also enforces transparency, requiring developers to disclose the data sources and methodologies used in training AI models. This enables organizations to have confidence in the results produced by AI systems.
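One practical way to support the disclosure requirement described above is to keep a structured provenance record for every training data source. The schema below is a hypothetical sketch: the Act requires disclosure of data sources and methodology but does not prescribe these field names, which are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingDataRecord:
    """Illustrative provenance record for one training data source.

    Field names are hypothetical; they are one possible way to capture
    what a transparency disclosure would need to cover.
    """
    source: str         # where the data came from
    collected: date     # collection date, for auditability
    consent_basis: str  # legal basis for processing the data
    known_biases: str   # documented limitations or skews

record = TrainingDataRecord(
    source="anonymized EHR extracts, partner hospital network",
    collected=date(2023, 6, 1),
    consent_basis="explicit patient consent",
    known_biases="under-represents patients over 80",
)
print(record.source)
```

Making the record immutable (`frozen=True`) reflects the audit use case: a provenance entry should be appended, not silently edited after the fact.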


Human Oversight

A critical aspect of the EU AI Act is the requirement for human oversight, especially for high-risk applications. In healthcare, this means that while AI can assist in processing large volumes of medical data and generating insights, final decisions must be made by qualified healthcare professionals. For instance, AI can summarize patient records, but a doctor must review the AI-generated summary before making a clinical decision. This human-in-the-loop approach ensures that AI complements rather than replaces human expertise.
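The human-in-the-loop pattern described above can be sketched as a gate: an AI-generated summary is held in a draft state and cannot enter the clinical record until a clinician signs off. This is a minimal sketch with hypothetical names (`DraftSummary`, `release_to_chart`), not a reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftSummary:
    """An AI-generated patient-record summary awaiting clinician review."""
    patient_id: str
    text: str
    approved_by: Optional[str] = None  # set only by the reviewing clinician

    @property
    def released(self) -> bool:
        return self.approved_by is not None

def release_to_chart(summary: DraftSummary) -> str:
    """Only a clinician-approved summary may enter the clinical record."""
    if not summary.released:
        raise PermissionError("AI summary requires clinician sign-off")
    return f"{summary.text} (reviewed by {summary.approved_by})"

draft = DraftSummary("P-001", "Stable post-op; labs within range.")
draft.approved_by = "Dr. Rao"  # the human-in-the-loop approval step
print(release_to_chart(draft))
```

The key design point is that the unapproved path raises an error rather than returning the summary: the system makes it structurally impossible for an unreviewed AI output to reach a clinical decision.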


Accountability and Compliance

The Act introduces stringent accountability measures, holding organizations liable for any damage caused by AI systems. This is particularly important in healthcare, where the stakes are high. Organizations must implement robust compliance frameworks to ensure their AI systems meet the EU AI Act’s standards. Non-compliance can result in hefty fines of up to EUR 35 Million or 7 percent of global annual turnover, whichever is higher.1 This underscores the importance of adhering to the Act’s provisions to avoid financial and reputational damage.


Case in Point: Impact on Drug Discovery and Clinical Trials

AI plays an instrumental role in drug discovery and clinical trials by analyzing vast datasets to identify potential drug candidates and optimize trial designs. The EU AI Act’s regulations ensure these AI applications are transparent, unbiased and ethically sound. Compliance with the Act can streamline the drug development process, but it also means that AI-related processes must be rigorously validated by regulatory bodies before a new drug can be brought to market. While this can delay launches, proper planning and adherence keep post-commercialization timelines manageable, ultimately safeguarding patient safety and treatment efficacy.

The use of AI and Machine Learning (AI/ML) for the clinical management of patients, including through medical devices, demands comprehensive compliance with regulatory standards. This includes ensuring that all AI/ML systems used in patient care are rigorously tested for accuracy, reliability and safety.

Recommendations for Organizations

The EU AI Act takes effect in stages: prohibitions on unacceptable-risk systems apply six months after entry into force, obligations for general-purpose AI models after 12 months, most high-risk requirements after 24 months, and rules for high-risk AI embedded in already-regulated products such as medical devices after 36 months. This phased approach gives organizations time to adapt and align their AI systems with the new regulations.

At WNS, we've been collaborating closely with global life sciences and healthcare companies, guiding them in implementing robust data strategies, governance frameworks and compliance measures. Drawing from our extensive expertise and experience, here’s what businesses need to do to navigate the EU AI Act effectively:

Conduct a Comprehensive Risk Assessment

Identify which of your AI systems fall into the high or unacceptable risk categories and take immediate steps to comply with the Act’s requirements.

Implement Human Oversight

Ensure qualified professionals are involved in decision-making, particularly for high-risk AI applications.

Stay Informed and Adaptive

The AI landscape is dynamic, and so are the regulations. Continuously monitor updates to the EU AI Act and adjust your practices accordingly to remain compliant.

Enhance Data Governance

Implement robust data quality and transparency measures to ensure your AI systems are trained on accurate and unbiased data.

Strengthen Compliance Frameworks

Develop and maintain a comprehensive compliance strategy to adhere to the EU AI Act’s regulations, avoiding substantial fines and ensuring ethical AI deployment.

The EU AI Act represents a significant milestone in regulating AI technologies, particularly in the healthcare and life sciences sectors. By classifying AI systems based on risk and introducing stringent guidelines, the Act ensures that AI applications are safe, ethical and transparent. For organizations in these sectors, understanding and complying with the Act is not just a legal obligation but a critical step toward building trust and responsibly leveraging AI’s transformative potential.


About the Author

Anand Jha is a Corporate Vice President of Digital Transformation within the WNS Healthcare and Life Sciences business unit, based in India. With over two decades of experience, he has been instrumental in driving strategy, consulting, and digital and technology-enabled business transformations for leading Fortune 500 healthcare and life sciences organizations. Anand’s comprehensive expertise spans business consulting, technology strategy and implementation, data, AI / Generative AI, analytics, design thinking, customer experience and change management.


References

  1. The EU’s AI Act and How Companies Can Achieve Compliance | Harvard Business Review
