How can we develop autonomous vehicles that can explain the decisions they take?

The SAX project delivered world-leading research in mobile autonomy (including perception, localisation, and mapping) for challenging on-road and off-road driving scenarios. It addressed fundamental technical issues in order to overcome critical barriers to the assurance and regulation of large-scale deployments of autonomous systems.

Project report

A full project report covers the team's research in robust sensing and scene understanding, including motion estimation, localisation, object detection, and explanation generation.

Final project report

The challenge

Understanding the decisions taken by an autonomous system is key to building public trust in robotics and autonomous systems (RAS). This project addressed the challenge of explainability by researching autonomous systems that can:

  • sense and fully understand their environment
  • assess their own capabilities
  • provide causal explanations for their own decisions

The research

In on-road and off-road driving scenarios, the project team studied what key stakeholders (users, system developers, and regulators) require from explanations. These requirements informed the development of algorithms that generate causal explanations.

The work focused on scenarios in which the performance of traditional sensors (e.g. cameras) degrades significantly or fails completely (e.g. in harsh weather conditions). The project developed methods that assess the performance of perception systems and adapt to environmental changes by switching to another sensor model or to a different sensor modality. For the latter, alternative sensing devices (including radar and acoustic sensors) were investigated to ensure robust perception in situations where traditional sensors fail.
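As an illustration of this assess-and-switch idea, the sketch below shows a perception monitor that scores each sensor's output and falls back to an alternative modality when the preferred one degrades. This is a minimal, hypothetical sketch: the sensor names, the `assess` introspection stub, and the confidence threshold are all illustrative assumptions, not the project's implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PerceptionResult:
    sensor: str
    detections: List[str]   # detected object labels
    confidence: float       # self-assessed reliability in [0, 1]

CONFIDENCE_THRESHOLD = 0.6  # illustrative cut-off, not a project value

def assess(sensor: str, frame: Dict) -> PerceptionResult:
    """Stand-in for a per-sensor perception model plus an introspection
    score; a real system might use learned performance prediction."""
    return PerceptionResult(sensor, frame.get("objects", []),
                            frame.get("quality", 0.0))

def robust_perceive(frames: Dict[str, Dict]) -> PerceptionResult:
    """Try modalities in order of preference, falling back to radar or
    acoustic sensing when camera/lidar output degrades."""
    best = PerceptionResult("none", [], 0.0)
    for sensor in ("camera", "lidar", "radar", "acoustic"):
        if sensor not in frames:
            continue
        result = assess(sensor, frames[sensor])
        if result.confidence >= CONFIDENCE_THRESHOLD:
            return result          # confident enough: use this modality
        if result.confidence > best.confidence:
            best = result
    return best  # no modality is confident; caller should act conservatively

# Example: fog degrades the camera, so the radar result is used instead.
frames = {
    "camera": {"objects": ["car?"], "quality": 0.2},
    "radar":  {"objects": ["car", "cyclist"], "quality": 0.8},
}
print(robust_perceive(frames))
```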

The results

The project delivered a unified view of AV sensing requirements and showed how less common sensing modalities can cope with challenging operational scenarios.

The team showed that scanning radar can rival vision and laser for scene understanding across all crucial autonomy-enabling tasks. They presented an overview of AV training requirements and of approaches for dealing with missing sensing combinations or labels. They also released a large dataset, collected in a variety of scenarios and conditions, comprising a full sensor suite and manually annotated labels for odometry, localisation, semantic segmentation, and object detection.
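The released dataset's actual schema is not reproduced here; purely as an illustration, one frame of a multi-sensor dataset carrying the label types listed above might be organised as follows (all field names, paths, and values are assumptions):

```python
# Hypothetical per-frame sample of a multi-sensor AV dataset.
sample = {
    "timestamp": 1594041600.0,                 # seconds since epoch
    "sensors": {
        "camera": "frames/cam/000001.png",
        "lidar": "frames/lidar/000001.bin",
        "radar": "frames/radar/000001.png",    # scanning-radar sweep
    },
    "labels": {
        "odometry": {"dx": 0.42, "dy": 0.01, "dtheta": 0.003},  # frame-to-frame motion
        "localisation": {"x": 51.75, "y": -1.26, "heading": 1.57},
        "semantic_segmentation": "labels/seg/000001.png",       # per-pixel classes
        "object_detection": [
            {"class": "car", "bbox": [120, 80, 260, 190]},      # pixel coordinates
        ],
    },
}
```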

Different types of explanation were identified for challenging driving scenarios. The team characterised several dimensions of explanations and identified the stakeholders for whom explanations are relevant. Furthermore, they developed methods, and provided guidance, for generating explanations from vehicle perception and action data in dynamic driving scenarios.
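As a hedged illustration of the general idea, the sketch below derives a simple causal, natural-language explanation by linking a logged action to the perception events that shortly preceded it. The `Event` structure, the time window, and the sentence template are hypothetical, not the project's method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    time: float          # seconds
    kind: str            # "perception" or "action"
    description: str

def explain(action: Event, events: List[Event], window: float = 2.0) -> str:
    """Cite perception events shortly preceding an action as candidate
    causes; a real causal analysis would need a model of the vehicle."""
    causes = [e for e in events
              if e.kind == "perception" and 0 <= action.time - e.time <= window]
    if not causes:
        return f"The vehicle {action.description} (no preceding percept logged)."
    reasons = " and ".join(e.description for e in causes)
    return f"The vehicle {action.description} because {reasons}."

# Example log: a detection followed by a stop.
log = [
    Event(10.1, "perception", "a pedestrian was detected at the crossing"),
    Event(10.4, "action", "stopped"),
]
print(explain(log[1], log))
# -> The vehicle stopped because a pedestrian was detected at the crossing.
```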

[Image: An autonomous vehicle sensing objects in its environment]

How can explainability support the overall assurance of AVs? Find out more from project PI Dr Lars Kunze.

  • 2.2.1.1 Defining sensing requirements
  • 2.2.1.2 Defining understanding requirements
  • 2.3 Implementing requirements using machine learning (ML)
  • 2.6 Handling change during operation
  • 2.8 Explainability
  • 3 Understanding and controlling deviations from required behaviour

Project partners