Staff Spotlight: Dr Yan Jia
Dr Yan Jia is a Lecturer in AI Safety and part of the CfAA research team. In our latest staff spotlight, she explains more about her work in safety assurance and how she brings together her niche background in safety-critical healthcare applications to advance this important area of research.

Can you tell us about your research interests?
My research interests lie in the safety assurance of AI-based systems, especially for healthcare. Such systems pose unique safety challenges due to their inherent complexity and the high-stakes nature of medical applications. I have been actively involved in and contributed to projects focused on the safe development and deployment of AI-based clinical decision-support systems, with the goal of ensuring patient safety whilst also improving healthcare efficiency.
What first inspired you to work in this area?
My first exposure to AI safety assurance came through Prof Ibrahim Habli and Prof Tom Lawton (an ICU consultant), whose funded PhD opportunity launched my journey into this field. This experience opened my eyes to the critical challenges and opportunities of AI in healthcare, particularly the gap in addressing the safety of AI-based systems. I realised that it is essential to take an interdisciplinary approach, combining safety engineering principles, AI knowledge and domain-specific expertise such as healthcare, to address the safety assurance of AI systems. There is currently a shortage of experts with this integrated skillset, which places my research in a distinctive and impactful space, and I am deeply committed to it.
Your projects often place you at the forefront of enhancing the interpretability and safety of AI in clinical settings. Why is this area particularly important to you and those it affects?
These AI systems have a direct impact on clinician trust and patient outcomes, making it crucial to enhance their interpretability. Specifically, providing user-centred explanations to clinicians can help them understand how the AI system operates and how a particular prediction has been derived, so that they can interact with the system more effectively and make informed decisions. This is especially important as the current human-AI interaction model primarily places clinicians as safeguards against potential AI errors, treating AI systems solely as clinical decision-support tools where clinicians are responsible for making the final decision.
There is also ongoing research exploring the use of AI systems to autonomously report certain patient cases, such as benign cases, to optimise the use of clinicians' time and improve overall diagnostic performance. Notably, such use of an AI-based system was approved by regulators in Europe this year.
What are the biggest challenges in your area(s) of research and why?
One of the key challenges in my research is the need to acquire new domain-specific knowledge for each project I undertake. For example, I have worked in areas as diverse as histopathology-based cancer diagnosis, sepsis treatment, and weaning patients from mechanical ventilation in ICUs. While this demands constant learning and adaptation, it also makes my research more engaging and rewarding, as it pushes me to expand my expertise and keep learning.
Finally, where can we find you when you’re not working?
When I'm not working, you can usually find me enjoying music, spending time with friends and family, or exploring new places. I also enjoy watching films, which helps me recharge and stay inspired for my research.
Read Dr Jia's latest papers:
The BIG Argument for AI Safety Cases