
Shaping the future of healthcare: The European Union's role in AI implementation
Author: Camilla Cappa
Edited by: Sonya Bashir
Until a few years ago, the European Union (EU) merely theorised about the deployment of Artificial Intelligence (AI) in healthcare. Today that deployment has become a tangible reality, and AI could become a useful tool for reshaping healthcare delivery across Member States. Under Articles 4, 6 and 168 of the Treaty on the Functioning of the European Union (TFEU), healthcare is an area in which the EU shares competence with its Member States and may carry out actions to support, coordinate or supplement national action. The Union’s interventions could therefore be crucial to an effective and efficient implementation of AI, especially in clinical practice.
The EU Legislation Regarding AI in Healthcare
A significant first step in this regard is the AI Act, which entered into force on 1 August 2024 and will be fully applicable on 2 August 2026. Together with the Digital Markets Act and the Digital Services Act, the regulation forms part of the EU’s broader digital rulebook and aims to coordinate AI deployment while encouraging responsible innovation within the European framework. The AI Act adopts a risk-based approach: it does not regulate the technology as such, but rather the practical uses of AI according to the risk they pose.
As stated in Article 1 of the regulation: “The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.” This legal framework is crucial to AI development: it ensures an ethical, safe and efficient use of the technology while safeguarding individuals and fostering technological progress within the EU.
Additionally, the One Health strategy and the European Health Data Space Regulation (EHDS) contribute to this ecosystem. The former is an integrated, unifying approach that aims to sustainably balance and optimise the health of people, animals and ecosystems; the latter guarantees access to diverse, high-quality health data so that AI systems can achieve accuracy, robustness and fairness across different populations.
The Challenges of AI Implementation
The structure created by the European legal framework is therefore the first step towards a system that enhances patient safety and supports ethical and equitable AI implementation. Nevertheless, several challenges are slowing the adoption of these new technologies in clinical practice.
The first challenge the European Union must address is building trust in, and acceptance of, AI in healthcare delivery. AI is a powerful tool that, combined with human capabilities, could produce astounding results; at the same time, much of the population is concerned about whether AI can make important decisions that humans would perceive as fair. Building trust may be one of the hardest challenges to face, because success depends on providing full transparency in data handling as well as accountability and responsibility on the part of system providers.
Once AI systems are transparent, ethical, free of bias and supplied with appropriate data, their producers gain public trust, broadening the perceived trustworthiness of AI more generally. This is why the EU needs to guarantee a proper assessment and evaluation of new AI technologies.
The goal is to make AI systems that are both functionally effective and socially acceptable. To this end, the EU must face the second challenge: developing AI technologies that respect the AI Act’s risk pyramid, which is built on four levels.
The first level prohibits eight practices defined as posing an “unacceptable risk”, including harmful AI-based manipulation, deception and exploitation of vulnerabilities, as well as social scoring, emotion recognition in workplaces and educational institutions, and biometric categorisation to infer sensitive characteristics. The second level covers “high risk” use cases, in particular AI-based safety components of products, which would include, for example, AI applications in robot-assisted surgery. The third level covers “limited risk” uses of AI, which are subject to transparency obligations such as making AI-generated content recognisable as such. Finally, the fourth level covers “minimal or no risk” uses of AI, estimated to account for around 80% of the AI systems currently on the European market, which are not subject to specific obligations under the AI Act.
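To illustrate how the pyramid sorts concrete use cases, the sketch below maps a few hypothetical example systems onto the four tiers. The tiers follow the Act’s structure, but the example systems and all names in the code are assumptions chosen purely for illustration, not an official classification.

```python
from enum import Enum


class RiskTier(Enum):
    """The four tiers of the AI Act's risk pyramid (simplified descriptions)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk: conformity assessment, EU database registration, CE marking"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal or no risk: no specific obligations under the AI Act"


# Hypothetical examples, chosen only to show how uses map onto the tiers.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI safety component in robot-assisted surgery": RiskTier.HIGH,
    "chatbot generating health information for patients": RiskTier.LIMITED,
    "spam filter in a hospital mailbox": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for system, tier in EXAMPLE_SYSTEMS.items():
        print(f"{system}: {tier.name} ({tier.value})")
```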
Logically, most AI systems developed for healthcare would fall under the “high risk” category and must therefore undergo a series of procedures and assessments. In practice, this means developing the AI system, ensuring it complies with the AI Act’s requirements, registering the system in an EU database, signing a declaration of conformity and ensuring it bears the CE marking before it can be placed on the market. Additionally, if the system is substantially modified during its lifetime, it must undergo the procedure again. It is easy to see how this process can significantly slow the development of AI technology for use in healthcare.
Even when AI systems have met all legal and technical criteria, a crucial question arises, constituting the third challenge: how can AI technology be effectively integrated and funded?
The EU must address this problem by providing sustainable financing mechanisms for AI adoption, especially in public hospitals, to ensure availability across all Member States. Additionally, the Union needs to put in place measures to modernise healthcare infrastructures and ensure that AI is viewed not merely as a tool, but as a strategic asset that enhances care delivery.
Practical Applications of AI Systems in Healthcare
Artificial intelligence offers the potential to provide personalised care that, combined with human capabilities and expertise, could revolutionise the medical field. But what does this mean in practice?
Imagine being able to implement AI systems in the prevention, diagnosis and prognosis of diseases: if a system could process patients’ imaging and genetic data, doctors could anticipate and prevent even the most serious illnesses. Such capabilities are not fully available today, but given the progress made over the past few years, their development seems realistic.
Within the European framework, the Commission department responsible for EU policy on food safety and health and for monitoring the implementation of related laws, the Directorate-General for Health and Food Safety (DG SANTE), is implementing a series of initiatives under the framework of AICare@EU.
AICare@EU builds on the foundation provided by EU legislation and focuses on the deployment of AI in healthcare, particularly in clinical practice.
The project investigates the challenges and enablers of deploying AI in clinical practice in four main areas: technological and data-related challenges, legal and regulatory barriers, organisational and business obstacles, and social and cultural factors.
Additionally, it covers work on AI and Health Data Access Bodies through the SHAIPED project, which, starting in March 2025, pilots the development, validation and deployment of AI models using the HealthData@EU infrastructure of the European Health Data Space (EHDS).
Regarding practical applications, AICare@EU explores key AI priorities aligned with the new Commission’s Political Guidelines 2024–2029, notably the applied AI strategy and the prevention of cardiovascular diseases.
Applying an AI strategy means boosting new industrial uses of AI in a variety of public services, including healthcare. The Biotech initiative includes the development of a Biotech Act, which explores pathways to accelerate AI development and deployment in the biotech sector while leveraging health data security under the EHDS.
AI is also transforming the fight against cardiovascular diseases (CVDs), a priority area spanning prevention, diagnosis, treatment and rehabilitation. From the patient’s perspective, AI can support remote follow-ups, medication reminders, real-time counselling and early intervention; from the clinician’s perspective, it can help collect information such as medical history and reduce workload.
The case study of Rima Arnaout, an assistant professor at the University of California, San Francisco, illustrates how AI, if implemented ethically in the medical field, can outperform human clinicians on specific tasks. Arnaout and her team built convolutional neural networks using echocardiograms acquired between 2000 and 2017 from 267 patients (age range: 20–96 years) at the university medical centre. The networks classified roughly 223,000 images into fifteen standard view categories at a speed and scale no human clinician could match, demonstrating the effectiveness of AI in classifying cardiac ultrasound images.
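To make the case study more concrete, the sketch below shows a minimal convolutional classifier with fifteen output classes, written in PyTorch. The architecture, layer sizes, input resolution and all names are illustrative assumptions; they do not reproduce Arnaout’s actual model, data or training pipeline.

```python
import torch
import torch.nn as nn

NUM_VIEWS = 15  # fifteen echocardiographic view categories, as in the case study


class EchoViewClassifier(nn.Module):
    """Minimal CNN mapping a single greyscale echo frame to one of 15 view classes.

    The 64x64 input resolution and layer sizes are illustrative assumptions only.
    """

    def __init__(self, num_classes: int = NUM_VIEWS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = EchoViewClassifier()
    frames = torch.randn(4, 1, 64, 64)      # a batch of 4 synthetic 64x64 frames
    logits = model(frames)                  # shape: (4, 15)
    predicted_views = logits.argmax(dim=1)  # most likely view class per frame
    print(predicted_views)
```

In a real deployment, such a model would be trained and validated on curated echocardiogram datasets and, as a high-risk system, would have to pass the AI Act’s conformity procedure described above before reaching clinical use.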
In sum, the integration of AI into healthcare represents one of the most transformative developments in modern medicine, offering the potential to enhance prevention, diagnosis, treatment and patient management. The European Union, through a comprehensive regulatory framework, including the AI Act, the One Health strategy, the European Health Data Space (EHDS) and initiatives such as AICare@EU, has laid the groundwork for ethical, effective and equitable AI implementation across all Member States.
However, the successful implementation of AI in clinical practice still has a long way to go and hinges on overcoming more than one challenge. Addressing these issues requires coordinated and continuous action among EU institutions, national governments, healthcare providers and AI developers.
The case study of Rima Arnaout and ongoing projects such as SHAIPED demonstrate that responsibly deployed AI systems can exceed human capabilities in specific clinical tasks. Moving forward, the EU must continue to support innovation while upholding its commitment to patient safety, transparency and data protection, helping to transform AI into a reliable pillar of the European healthcare system.
Bibliography
Treaties currently in force – EUR-Lex. (2016). Europa.eu. https://eur-lex.europa.eu/collection/eu-law/treaties/treaties-force.html?locale=en#new-2-52
Artificial Intelligence in healthcare. (2025, June 10). Public Health. https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en#aicareeu-deployment-of-ai-in-healthcare
One Health – European Commission. (2023, November 20). Health.ec.europa.eu. https://health.ec.europa.eu/one-health_en
Von der Leyen, U. (2024). Europe’s Choice. https://commission.europa.eu/document/download/e6cd4328-673c-4e7a-8683-f63ffb2cf648_en?filename=Political%20Guidelines%202024-2029_EN.pdf
European Commission. (2025, February 18). AI Act. European Commission. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Health and Food Safety. (2024, December 20). European Commission. https://commission.europa.eu/about/departments-and-executive-agencies/health-and-food-safety_en
Ghassemi, M., Naumann, T., Schulam, P., Beam, A. L., Chen, I. Y., & Ranganath, R. (2019). Practical guidance on artificial intelligence for health-care data. The Lancet Digital Health, 1(4), e157–e159. https://doi.org/10.1016/s2589-7500(19)30084-6
Busch, F., Kather, J. N., Johner, C., Moser, M., Truhn, D., Adams, L. C., & Bressem, K. K. (2024). Navigating the European Union Artificial Intelligence Act for healthcare. npj Digital Medicine, 7(1). https://doi.org/10.1038/s41746-024-01213-6
Rossi, F. (2018). Building trust in artificial intelligence. Journal of International Affairs, 72(1), 127–134. JSTOR. https://doi.org/10.2307/26588348
Yan, Y., Zhang, J.-W., Zang, G.-Y., & Pu, J. (2019). The primary use of artificial intelligence in cardiovascular diseases: What kind of potential role does artificial intelligence play in future medicine? Journal of Geriatric Cardiology: JGC, 16(8), 585–591. https://doi.org/10.11909/j.issn.1671-5411.2019.08.010
