Human-Centred Explainability (HCE)
An investigation into how to design and implement explanations for AI/ML-based decision-support tools that aid clinicians in healthcare, taking a human-centred approach.
The HCE project focused on creating user-friendly, understandable explanations that improve clinicians' ability to make informed decisions when using AI systems. The systems considered included patient-facing chatbots, mental health applications, ambulance service triage, sepsis diagnosis and prognosis, and patient scheduling, with the aim of improving performance and quality of care in healthcare.
The challenge
There were two main challenges. First, clinicians using AI systems in practice often do not understand why the systems recommend particular decisions, which prevents them from making informed decisions in patient care. Second, current explainable AI (XAI) methods are developed by ML researchers who do not necessarily understand what clinicians actually need from an explanation.
The research
A three-stage framework was designed and tested, in collaboration with clinicians, on the use case of weaning ICU patients from mechanical ventilation. Contextual design requirements were gathered and the requirements for explanations in that context were explored. From these requirements, an initial interface for the decision-support tool was designed, and the types of explanation presented on the interface were reviewed with clinicians.
The project also helped to identify research gaps in this area. For example, the scoping review conducted showed that it was unclear what human-centred explainable AI (HCXAI) is, and what requirements such systems should meet.
The results
Consequently, the team was able to provide a definition of HCXAI to support the research community: “HCXAI is a design process which aims to support the understanding of an AI system via creating interactions with the AI that are appropriate for the given context and this is achieved by implementing explanations enabled by XAI techniques” (AAIP Demonstrator Review 2024). The team also conducted three rounds of interviews with clinicians and a series of workshops with multidisciplinary researchers to develop and clarify the requirements for HCXAI and the evaluation metrics for such systems.
- Yan Jia, John McDermid, Nathan Hughes, Mark Sujan, Tom Lawton, Ibrahim Habli. “The need for the human-centred explanation for ML-based clinical decision support systems” (June 2023).
- Nathan Hughes, Yan Jia, Mark Sujan, Tom Lawton, Ibrahim Habli, John McDermid. “Contextual design requirements for decision-support tools involved in weaning patients from mechanical ventilation in intensive care units” (July 2024).