Doctor Says No: When AI’s Bedside Manner Falls Short

News | Posted on Wednesday 4 September 2024

The integration of Artificial Intelligence (AI) into the healthcare sector affects everyone, from patients and clinicians to regulators and healthcare providers. Understanding the context and diverse risks of such an integration is critical to the long-term safety of patients and clinicians, and to the success of AI.

As healthcare systems worldwide face increasing strain, AI is often heralded as a solution to bridge the gap between healthcare capacity and demand. AI-based technologies can process vast amounts of data and improve the accuracy and efficiency of medical care, potentially allowing clinicians to spend more time listening to patients and nurturing the patient-clinician bond.

But this positive prophecy will only come true if policymakers pay attention to how AI-based technologies are integrated into clinical environments, and how they interact with clinicians and patients.

Over the past year, a multi-disciplinary project between the University of York’s Assuring Autonomy International Programme (AAIP), now the Centre for Assuring Autonomy (CfAA), and the Bradford Teaching Hospitals NHS Foundation Trust has sought to better understand this challenge.

The project, called Shared CAIRE (Shared Care AI Role Evaluation), is funded by the Medical Protection Society Foundation Grant Programme and supported by the UKRI-funded project Assuring Responsibility for Trustworthy Autonomous Systems (AR-TAS). It researches how AI might work with clinicians and patients in the real world, and specifically how different models of bringing AI-based decision-support systems into healthcare settings affect clinicians’ work and wellbeing and their relationships with their patients. The cross-disciplinary Shared CAIRE team has a track record of working together on projects exploring human-computer interaction, the safety and ethics of AI systems in healthcare, and the liability implications of AI use, including the risk of clinicians becoming ‘liability sinks’ for AI.

Healthcare providers, such as NHS Trusts, are generally keen to bring in new models of care and make use of AI technologies - but far too often, systems that work well in the lab fail to translate into clinical practice. By testing the effects of different models with real clinicians, Shared CAIRE aims to bridge the gap between the lab and the real world and support the beneficial implementation of AI in healthcare contexts. The researchers also hope that those involved in decisions about implementing AI will benefit from greater clarity on the ethical and legal issues this process raises.

What we explored

Shared CAIRE looks at one type of application of AI in healthcare - the AI-based decision-support system - in two broad clinical scenarios: deciding when a patient with diabetes might start using insulin, and deciding whether to use caesarean section delivery in obstetric care.

In the standard or default model of how these systems are used, the clinician considers the AI’s recommendation alongside information from other sources, including discussions with the patient, and then either accepts the AI’s recommendation as-is or overrides it with a decision they make themselves.

In 63 simulated clinical conversations between a doctor at registrar level or above and an actor playing a patient, Shared CAIRE explored how this standard model affects decision-making in the clinical consultation. We also explored, in simulation, several other models that deviate from the standard: one where the AI gives an explanation with its recommendation; one where the AI gives only an explanation; and two where the patient interacts with a conversational AI before coming to see the doctor.

To make the simulations as realistic as possible, our human-computer interaction team members built a prototype of a screen-based clinical interface that closely resembled the Electronic Patient Record systems used in the UK. The simulations were conducted as Wizard of Oz experiments, in which a hidden human operated the ‘AI’ and participants did not learn until afterwards that it was not real. This work received an Honourable Mention for the paper “Development and Translation of Human-AI Interaction Models into Working Prototypes for Clinical Decision-making” at the 2024 ACM SIGCHI Conference on Designing Interactive Systems.

CfAA Research Associate Dr Muhammad Hussain, lead author of the award-winning paper, said: "Converting theoretical human-AI interaction models into working prototypes is crucial for testing them in real-world settings. We have actively involved key stakeholders in our co-design process to ensure practical and impactful outcomes. Before implementing a model in healthcare, it is imperative that these models undergo thorough evaluation with clinicians using prototypes. I am pleased that our work has been recognised and awarded."

Thematic analysis of the interview transcripts is now underway. We are also having the simulations reviewed by third-party clinicians, to assess whether the decisions made during the simulated consultations, when AI was involved, would be considered reasonable by a body of doctors - an assessment that may help determine the liability implications of the different models. Throughout, the research has been informed by regular meetings with an insightful and engaged panel of patient representatives.

Looking towards the results

The project will have a diverse set of outputs. In addition to a set of clearly defined alternative models for the use of AI decision-support systems in clinical consultations, and the working prototypes, the team is writing academic papers exploring the qualitative results of the research and its ethical and legal implications.

Shared CAIRE will also produce a white paper in late autumn 2024 with policy recommendations for regulators and healthcare providers to bridge the gap between system design and real-world practice. The aim is to ensure that AI technologies such as decision-support systems are integrated into clinical settings in ways that deliver the benefits that justified putting them there in the first place: improving patient care, reducing burdens on clinicians, and freeing up clinician time for shared decision-making with patients.

Overall, we expect the project to inform design and deployment approaches that take account of human behaviour in complex healthcare settings and enable humans and machines to genuinely work together, drawing on the strengths of each.

Links to papers:

  1. Clinicians Risk Becoming “Liability Sinks” for Artificial Intelligence
  2. Development and Translation of Human-AI Interaction Models into Working Prototypes for Clinical Decision-making: https://dl.acm.org/doi/abs/10.1145/3643834.3660697

Dr Muhammad Hussain holding the Honourable Mention award (2024 ACM SIGCHI Conference on Designing Interactive Systems) for the paper “Development and Translation of Human-AI Interaction Models into Working Prototypes for Clinical Decision-making”.