Five Steps to Reduce Generative AI Risks in Healthcare

April 1, 2024

Artificial intelligence (AI) is machine-displayed intelligence that simulates human behavior or thinking and can be trained to solve specific problems. OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot are examples. The American Medical Association uses the term "augmented intelligence" to emphasize AI's assistive role as a tool designed to enhance human intelligence rather than replace it.

Many physicians and healthcare organizations are already using AI to help with administrative tasks, provide clinical decision support for diagnosis, and reduce medication dosage errors. In its Clinician of the Future report, Elsevier Health reported that 73% of surveyed physicians consider it desirable for physicians to be digital experts, and 48% consider it desirable for physicians to use AI in clinical decision-making.

Potential benefits and risks of AI use in healthcare

The potential benefits include:

  • Improved diagnosis through AI analysis of images for disease detection
  • Reduced medication dosage errors
  • Optimized treatment regimens based on patient profiles
  • Administrative workflow efficiencies, such as scheduling and AI scribes

The risks of AI include, but are not limited to:

  • Inaccurate, variable, or inconsistent output
  • Outputs that lack validation, sourcing, and credibility
  • Bias in training databases
  • Data and cyber breaches
  • Patient concerns about transparency and privacy
  • Gaps in knowledge, policies, regulation, and accountability for AI-related errors
  • Medical malpractice liability

Potential malpractice allegations for using AI or not using AI

Malpractice risks for physicians may arise either from the use of AI or from the failure to use AI that is known to improve care:

  • Delayed diagnosis and/or failure to diagnose: Allegations that generative AI might have shortened the time needed to arrive at an accurate diagnosis or guided the physician toward the proper diagnosis, thus averting the adverse outcome, or that the use of AI caused an inaccurate diagnosis.
  • Surgical treatment errors: Allegations that AI might have forewarned the surgeon of potential complications or identified patient-specific risks that were not otherwise foreseen or conveyed to the patient, or that an AI-enabled surgical robot caused a patient injury during a procedure.
  • Improper medical treatment or delay in treatment: Allegations that AI might have identified potential complications of a proposed treatment and forewarned the physician.
  • Improper patient monitoring: Allegations that AI might have recognized a worsening medical condition or trend that the physician missed.

Hospitals, health systems, and clinics may also face liability for failing to exercise due care in selecting, introducing, using, or maintaining AI technology. Healthcare organizations may also be held vicariously liable for errors in the use of AI by their employed physicians and care team members.

Five steps to reduce risk

  1. Engage physicians in conversations about AI; chances are they are already using it in some form.
  2. Establish a multidisciplinary team to review AI software, services, or devices; conduct demos before purchase; and perform failure mode and effects analysis (FMEA) tabletop exercises before implementation (a scoring sketch follows this list).
  3. Develop policies and procedures for acceptable use of AI: when and how to use it.
  4. Provide awareness and skills training on proper use and on how to identify and report malfunctions or accuracy issues, such as bias.
  5. Add AI documentation standards to current documentation policies, including how physicians document their interactions with AI, particularly when an AI recommendation is rejected.
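
For the FMEA exercise in step 2, teams conventionally rate each failure mode from 1 to 10 for severity, occurrence, and detection (where 10 is the hardest to detect) and multiply the three ratings into a risk priority number (RPN) to rank mitigation priorities. The Python sketch below is a minimal, hypothetical illustration of that arithmetic; the failure modes and ratings shown are invented examples, not vetted guidance.

```python
# Hypothetical FMEA scoring for an AI-tool tabletop exercise.
# RPN = severity * occurrence * detection, each rated 1-10.
# The failure modes below are invented examples for illustration only.
failure_modes = [
    {"mode": "AI scribe omits a reported symptom", "severity": 7, "occurrence": 5, "detection": 6},
    {"mode": "Dosage-check tool misses an interaction", "severity": 9, "occurrence": 3, "detection": 7},
    {"mode": "Imaging model highlights the wrong region", "severity": 8, "occurrence": 4, "detection": 4},
]

# Compute the risk priority number for each failure mode.
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Address the highest-RPN failure modes first when planning mitigations.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f"{fm['rpn']:>4}  {fm['mode']}")
```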

Watch our recent webinar, Is ChatGPT the New Dr. Google? Understanding Risks of Generative AI, featuring Curi experts Jason Newton and Margaret Curtin, to learn more about AI benefits, risks, and risk strategies.

Curi’s risk mitigation resources and guidance are offered for educational and informational purposes only. This information is not medical or legal advice, does not replace independent professional judgment, does not constitute an endorsement of any kind, should not be deemed authoritative, and does not establish a standard of care in clinical settings or in courts of law. If you need legal advice, you should consult your independent/corporate counsel. We have found that using risk mitigation efforts can reduce malpractice risk; however, we do not make any guarantees that following these risk recommendations will prevent a complaint, claim, or suit from occurring, or mitigate the outcome(s) associated with any of them.
