Ampovska, Marija (2025) Navigating Accountability: A Threshold Typology for Healthcare Worker Liability in AI-Assisted Medical Decision-Making under EU Law. In: 3rd International Conference on Decision Making in Medicine and Law: Opportunities and Pitfalls of Information Technologies, 11-12 Dec 2025, Sapienza University of Rome, Italy. (Unpublished)
Abstract
The accelerating integration of artificial intelligence (AI) into clinical practice, from
diagnostics to treatment support, introduces complex questions regarding the legal
accountability of healthcare professionals. This paper addresses the critical need for
clarity on liability allocation in AI-assisted medical decision-making within the evolving
European Union legal framework. It specifically investigates the thresholds at which
healthcare workers become legally accountable for harm arising from the integration of
AI into clinical care.
Employing a comparative doctrinal analysis, this study examines key EU instruments: the
Artificial Intelligence Act (AI Act), the revised Product Liability Directive (PLD), the
proposed AI Liability Directive (AILD), the General Data Protection Regulation (GDPR),
and the Medical Device Regulation (MDR). These regulations are systematically
contextualized against established medical malpractice doctrines and contemporary legal
scholarship.
The central contribution is a "Threshold Typology" that delineates three distinct points of legal significance for healthcare workers. The regulatory threshold turns on
compliance with the AI Act's stringent ex ante obligations for high-risk medical AI systems,
particularly those concerning human oversight, risk management, and documentation;
failure to meet these duties can inform negligence assessments. The professional threshold implicates national malpractice law, under which healthcare professionals remain
liable for failing to exercise independent clinical judgment and the requisite standard of
care even when relying on AI recommendations, underscoring the human-in-the-loop
imperative. Finally, the product threshold engages strict liability under the PLD for
defective AI-enabled medical devices, or fault-based presumptions under the proposed
AILD, particularly where a healthcare worker's conduct amounts to misuse or substantial
modification of the AI system.
This typology clarifies the shift from compliant practice to potential liability, offering a
robust analytical framework for evaluating accountability. It provides essential insights for
legal experts, healthcare professionals, and policymakers striving to foster legal certainty,
uphold patient safety, and balance innovation with responsible AI deployment in the
critical domain of medicine and law.
| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (Speech) |
| Subjects | Social Sciences > Law |
| Divisions | Faculty of Law |
| Depositing User | Marija Radevska |
| Date Deposited | 15 Dec 2025 08:43 |
| Last Modified | 15 Dec 2025 08:43 |
| URI | https://eprints.ugd.edu.mk/id/eprint/36952 |
