Our research priorities

Our research is focused on three primary areas, which together enable us to address real-world challenges around AI and autonomous systems.

Lifelong safety assurance of AI-enabled autonomous systems

Under this pillar we're exploring whether the safety of autonomous systems can be demonstrated and how to ensure they remain safe after deployment. We're also looking at how to maintain confidence in their safety even if the world around them changes or the systems themselves evolve.

Examples of current research:

We are developing a robotics platform using multiple heterogeneous robots (ground and aerial vehicles, and robot arms) and will safety-assure their use for autonomously monitoring and maintaining industrial assets, such as a solar farm. This will allow us to explore through-life safety assurance, transfer assurance, assurance in uncertain environments and dynamic assurance for continuous learning. The platform and framework will be based on industry-standard hardware, software and practices, to ensure the research offers real-world value and insight.

Contact us about safety assurance research

Human-centric design and assurance

How robotics, autonomy and AI can safely augment human capabilities is a growing area of interest. In our research we're looking at what AI explanations human-AI teams need in order to work together effectively and safely.

Examples of current research:

  1. How safety case argumentation is constructed and communicated. Whilst there has been a focus on creating new argumentation methods or improving existing ones (such as Goal Structuring Notation), there has been less focus on who safety cases are for and what people require from them to be persuaded of their validity. To address this, we are taking a user-centred perspective on safety cases, highlighting how they can be improved to document and explain the complexities of AI-based technologies.
  2. Prototyping clinical decision-support tools for the ICU environment. Weaning patients from ventilation is a complex task that could be supported by AI-based decision-support tools. For such a tool to be useful, it is important to understand how ICU clinicians currently perform the weaning task, and what information clinicians would need to understand such a tool. In this research we are exploring the current context of the ICU environment and the task of weaning patients from ventilation, and prototyping ways to provide clinicians with useful explanations about the AI.

Get involved in our human-centric AI research

Informed governance

There are big questions around AI governance which our research aims to address: how can utility-safety trade-offs be made in an informed manner, and which decisions about safety are socially acceptable?

Examples of current research:

Which Explainable AI (XAI) methods are most effective for assuring the safe, ethical and socially acceptable deployment of AI systems? Additionally, how can responsibility, especially moral and causal responsibility, be traced in AI-driven decision-making to foster public trust? We seek to develop frameworks that integrate XAI techniques with accountability measures to enhance the social acceptability of AI technologies.

Talk to us about our informed governance research

Contact us

Centre for Assuring Autonomy

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH
