The New Frontier of Health Law: Artificial Intelligence, Physician Liability, and the EU AI Act
- Dr. Erdal Hanyaloğlu
- Oct 23
- 4 min read
The healthcare sector is on the verge of a profound transformation powered by artificial intelligence (AI). In medical decision-making, from predicting cardiovascular disease risk to alleviating administrative burdens, AI promises revolutionary benefits for both physicians and patients. However, this technological leap raises complex questions for medical law and health law professionals: Who is responsible when an AI makes a mistake? How will patient rights and data privacy be protected? Where do the physician's role and responsibility begin and end?
The answers to these questions are beginning to take shape with the European Union's (EU) Artificial Intelligence Act (AI Act), adopted in 2024 and in force since August 1, 2024. The Act is the most comprehensive legal instrument to date regulating the development and use of AI. At Hanyaloğlu & Acar Law Office, we examine the impact of this new legal framework on medical law and physician liability.
The EU AI Act: A "High-Risk" Classification for Healthcare
The EU AI Act employs a risk-based classification system. In healthcare, software intended to provide information used to make decisions for diagnostic or therapeutic purposes is typically classified as Class IIa or higher under the Medical Device Regulation (MDR). Because such devices must undergo third-party conformity assessment, the vast majority of AI systems used to support medical decision-making will also be classified as "high-risk" under the EU AI Act.
This "high-risk" designation imposes strict rules on the "providers" who develop these systems. However, the Act also places significant responsibilities on the "deployers" (i.e., physicians and healthcare institutions) who use these systems in clinical practice. The article highlights five key responsibilities for physicians.
1. The New Duty of the Physician: Data Privacy and Medical Confidentiality
Training, testing, and improving AI require vast amounts of health data, which creates serious risks for the confidentiality and security of patient data. The General Data Protection Regulation (GDPR) and the newly adopted European Health Data Space (EHDS) Regulation are the primary instruments in this field. While the EHDS aims to facilitate access to data for scientific research and algorithm development, it also heightens privacy concerns.
Physicians and healthcare institutions must protect medical confidentiality, the cornerstone of patient trust. It is critical that patients have control over how their data is used and, at a minimum, a clear right to opt out of sharing their data with third parties not directly involved in their treatment.
2. A New Dimension of Informed Consent: "AI Will Be Used in Your Treatment"
Informed consent, a cornerstone of medical law, gains a new dimension with AI. Does the patient need to know that an AI system is being used in their treatment? While the AI Act does not directly target patients, it emphasizes that the physician, as the "deployer," must understand the system's capabilities and limitations.
While the patient is not expected to understand the technical details of how the AI works, they should be informed in general terms about why the system is being used, its limitations, and whether it involves any special medical risks. This information is essential to protect the patient's right to self-determination.
3. The Obligation of Technical Knowledge and the Danger of "Automation Bias"
Article 14 of the AI Act places significant responsibilities on physicians under the heading of "human oversight". The physician must:
"Properly understand" the system's relevant capacities and limitations.
Remain aware of the possible tendency to automatically rely or over-rely on the AI's output ("automation bias").
Be able to "correctly interpret" the system's output.
Be able to decide not to use the system or to interrupt it.
This does not mean physicians need to know how to code. However, it does mean they must have sufficient knowledge of the risks of the AI they use, the situations in which it might produce errors, and the inherent lack of explainability in many "black-box" systems.
4. The Use of AI Outputs in Medical Practice: The "Use, Unless..." Approach
To what extent can a physician trust the output generated by an AI? The system cannot replace the physician; the physician must remain the expert in the decision-making process. Yet while high-risk systems must remain under human oversight rather than operate fully autonomously, expecting the physician to re-verify every AI result from scratch is neither feasible nor desirable, as it would negate the efficiency gains AI provides.
Commentators on the Act suggest a reasonable "use, unless" approach: the physician follows the AI output much as they would a clinical guideline or protocol, but must deviate from it when additional information, test results, patient preferences, or other specific circumstances dictate otherwise.
5. The Physician's Oversight Duty: Detecting and Addressing Bias
AI systems can reflect and even amplify existing societal prejudices (e.g., based on gender or ethnicity) present in their training data. Algorithmic bias can lead to discrimination claims in medical law.
The AI Act expects physicians (deployers) to monitor the system and be able to detect and address "anomalies, dysfunctions and unexpected performance," including bias. This means physicians must now actively monitor not only the patient but also the technology they are using. They have a responsibility to communicate newly identified or increased bias to the system's provider (developer).
Medical Malpractice and Legal Liability
If an error occurs in an AI-assisted diagnosis or treatment (medical malpractice), who is liable? The developer, the hospital, or the physician? This is a complex legal question. The EU is working to fill this gap with the revised Product Liability Directive (rPLD) and the proposed AI Liability Directive (AILD). These new regulations aim to make it easier for patients to seek redress, taking into account factors like the complexity of "black-box" systems and automation bias.
At Hanyaloğlu & Acar Law Office, we provide legal guidance in this new and complex field where health law and artificial intelligence technologies intersect. We stand by our clients (healthcare institutions, physicians, and technology providers) on issues of compliance with the EU AI Act and national legislation, the redefinition of physician liability, and preparation for potential medical malpractice lawsuits.