Safety of the AI clinician
Moving on from prediction: the safe use of AI in making medical decisions about sepsis treatment.
This demonstrator project investigates the safety assurance of an AI-based decision support system for sepsis treatment in intensive care, helping to establish general regulatory requirements for these systems and using human expert knowledge to define safety rules.
The challenge
Many AI-based healthcare systems have already been approved for clinical use (e.g. by the FDA). But these mainly focus on replicating predictive tasks usually performed by humans, such as classifying skin lesions or predicting renal failure.
The challenge is in developing an AI-based decision support system (DSS) that can suggest medication doses, supporting a clinician to make a decision about medical care.
The research
The team at Imperial College London was the first to develop an algorithm (the AI Clinician) that provides suggested doses of intravenous fluids and vasopressors in sepsis. This demonstrator project is investigating how to assure the safety of an AI-based DSS for sepsis treatment in intensive care. Through this, it will help to establish general regulatory requirements for AI-based DSS.
The project is structured around three key objectives:
- Review regulatory requirements in the UK and the USA
- Define the required behaviour of the AI-based DSS for sepsis treatment
- Deploy and test the DSS in pre-clinical safe settings
The progress
The team defined five scenarios that correspond to likely unsafe decisions and compared the performance of the AI and of human clinicians in these situations. They also mapped the AMLAS (Assurance of Machine Learning for use in Autonomous Systems) framework onto the AI Clinician application.
They then demonstrated how human expert knowledge could be leveraged to define safety rules, and how those rules could help assess and improve the safety of AI-based clinical DSS. They compared how often, and under what circumstances, human clinicians and the AI Clinician would have broken a number of ICU expert-defined safety rules. They also improved the AI agent by modifying the reward signal during the training phase, adding intermediate negative rewards each time a safety rule was breached. The team's work demonstrated that the newly trained model was safer than the initial AI Clinician with respect to the considered safety scenarios.
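The reward-shaping step can be illustrated with a minimal sketch. The safety rule, state variables, thresholds and penalty value below are hypothetical placeholders, not the project's actual clinical rules or training configuration; the only idea taken from the work above is that breaching an expert-defined safety rule adds an intermediate negative reward during training.

```python
# Minimal sketch of reward shaping with an expert-defined safety rule.
# The rule, thresholds and penalty below are illustrative only.

from dataclasses import dataclass


@dataclass
class PatientState:
    mean_arterial_pressure: float      # mmHg
    current_vasopressor_rate: float    # mcg/kg/min


@dataclass
class Action:
    iv_fluid_dose: float               # mL over the next time step
    vasopressor_dose: float            # mcg/kg/min


def violates_safety_rule(state: PatientState, action: Action) -> bool:
    """Hypothetical rule: do not abruptly stop vasopressors in a
    hypotensive patient (MAP below 65 mmHg)."""
    abrupt_stop = state.current_vasopressor_rate > 0.3 and action.vasopressor_dose == 0.0
    return abrupt_stop and state.mean_arterial_pressure < 65.0


def shaped_reward(base_reward: float, state: PatientState, action: Action,
                  penalty: float = -1.0) -> float:
    """Add an intermediate negative reward whenever the safety rule is
    breached, leaving the original outcome-based reward otherwise unchanged."""
    if violates_safety_rule(state, action):
        return base_reward + penalty
    return base_reward
```

In an offline reinforcement learning setting such as the AI Clinician's, a shaped reward of this kind would be used in place of the original reward when fitting the policy, steering it away from state-action pairs that ICU experts consider unsafe without altering the outcome-based component of the signal.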
The team has run a simulation study in a live high-fidelity ICU simulation suite, which tests the behaviour of 40 human doctors of varying levels of seniority when presented with an AI-based clinical decision support system. The study examines what factors influence doctors to trust or question the suggestions made by an AI in such an environment. Human factors experts Professor Peter Buckle and Dr Massimo Micocci from Imperial College London were involved in the design of the protocol.
The ethical implications of using AI in healthcare are being considered in collaboration with ethics specialist Michael McAuley, with a particular focus on how these implications relate to a system's level of autonomy.
Papers and presentations
- Festor, P., Luise, G., Komorowski, M., and Faisal, A.A. "Enabling risk-aware reinforcement learning for medical interventions through uncertainty decomposition" in Interpretable Machine Learning in Healthcare (IMLH) workshop at ICML 2021
- Festor, P., Jia, Y., Gordon, A.C., Faisal, A.A., Habli, I., and Komorowski, M. "Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment" in BMJ Health and Care Informatics, July 2022
- Festor, P., Habli, I., Jia, Y., Gordon, A.C., Faisal, A.A., and Komorowski, M. "Levels of autonomy and safety assurance for AI-based clinical decision systems" at the 4th International Workshop on Artificial Intelligence Safety Engineering (WAISE), September 2021
- Jia, Y., Lawton, T., Burden, J., McDermid, J., and Habli, I. "Safety-driven design of machine learning for sepsis treatment" in Journal of Biomedical Informatics, March 2021
- McDermid, J., Jia, Y., Porter, Z., and Habli, I. "AI explainability: the technical and ethical dimensions" in Philosophical Transactions.
- Jia, Y., McDermid, J., and Habli, I. "Enhancing the value of counterfactual explanations for deep learning" in AIME 2021: Artificial Intelligence in Medicine in Europe
- Panel discussion in an AI Med (Artificial Intelligence in Medicine) "Clinician Series" webinar, 31 March 2021