What Does AI Mean for Malpractice Risk?
Artificial intelligence has drawn a great deal of attention lately. The fourth iteration of ChatGPT launched in mid-March, using deep learning to generate remarkably human-like responses. A few weeks later, a coalition of technology leaders, including Twitter and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, called for a pause on development of the most advanced AI systems, citing ethical and safety concerns.
If Elon Musk says artificial intelligence stresses him out, it’s not surprising that the rest of us may have concerns.
What Does Artificial Intelligence Mean for Clinical Practice, and for Malpractice Risk?
Let’s start with what artificial intelligence means in non-technical terms. Artificial intelligence (AI) refers to machines performing tasks that would otherwise require human intelligence, such as problem-solving, reasoning, and speech recognition. Computer systems use algorithms and models to interpret data, learn from experience, and make predictions based on available information. Most of us have likely interacted with AI applications like Google Assistant, Siri, and Alexa, or customer service chatbots on banking and other institutional apps and websites.
In the clinical setting, AI might be used in the electronic health record to facilitate medical notetaking and provide clinical decision support, alerting physicians to issues and recommending treatments. AI is used to analyze medical images from X-rays, MRIs, and CT scans to detect cell changes, tumors, and other conditions, often with greater accuracy and efficiency than clinicians. And AI can support precision medicine by predicting patient risks and tailoring clinical recommendations based on a patient’s personal medical data and genome. Add to that the promise AI can bring to medical research, and it’s not hard to see why it is often described as having the potential to revolutionize medicine.
But AI is only as good as the data on which it is based; incomplete, inaccurate, or biased data can yield inaccurate results. And just like any system, its use can be subject to human error. Learning to effectively use new systems takes time. Data input must be careful, accurate, and complete. And the systems must be organized so that necessary information is easily and clearly accessible without inundating clinicians.
What AI means for malpractice risk largely remains to be seen and will evolve as acceptance of AI grows. As AI becomes more reliable and more widely used in medicine, it will shape what is considered the standard of care. Shifting public comfort with the technology may likewise influence how juries judge whether a physician's decision to rely, or not to rely, on AI was reasonable.
How Will AI in Medicine Impact Medical Malpractice Policies?
Artificial intelligence that increases diagnostic accuracy, timeliness, and effectiveness of treatment and improves patient outcomes may well have a favorable impact on liability. However, overreliance on AI could increase risk in clinical practice, creating new and emerging exposures that affect both litigation patterns and insurance policy exclusions. AI can provide data and insights of real value in informing clinical decisions, but physicians' clinical judgment remains crucial.
Ultimately, AI cannot replace human interactions — listening with empathy, observing nonverbal cues, and communicating compassionately — which are key to the physician-patient relationship and influence patient outcomes and trust.
It’s certainly an exciting time in medicine. We’ll continue to follow developments and look forward to reading NEJM AI, the New England Journal of Medicine’s new journal publishing research and policy perspectives on artificial intelligence in medicine.
Further Reading:
Artificial Intelligence in Medicine | NEJM
Artificial Intelligence and Liability in Health Care (case.edu)