Autonomous Capabilities and Trusted Intelligent Operations iN Space (ACTIONS)
Assuring autonomy in space: improving the utilisation of satellites through the safe introduction of autonomy.
The ACTIONS project focused on fire detection carried out autonomously by a machine learning (ML) component onboard a satellite. The project demonstrator generated fire detection alerts for emergency response services on the ground, with confidence that the data was accurate, truthful, and timely.
The challenge
Small satellites have limited resources and sparse opportunities for data capture. Autonomy offers significant improvements in the utilisation and timeliness of service to end-users of such systems. In an autonomous in-orbit fire detection and near-real-time emergency response application, these include:
- Rapid tagging and filtering of data - prioritise data which is confidently believed to include wildfire
- Alert generation - extract salient data (detection time, location and size of detected fire); a sketch of such an alert record follows this list
- Verification data generation - create ancillary data products such as image thumbnails or augmented visualisations
- Data reduction - selective compression to retain only valuable regions of data at full quality
- Responsive tasking - monitor wildfire on subsequent passes
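To make the alert-generation and prioritisation ideas above concrete, here is a minimal Python sketch of an onboard alert record and a confidence-threshold filter. The `FireAlert` fields, the `prioritise` helper and the 0.9 threshold are illustrative assumptions, not the project's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical confidence threshold; a real mission would derive this
# from the system safety requirements discussed below.
ALERT_CONFIDENCE_THRESHOLD = 0.9

@dataclass
class FireAlert:
    """Salient data extracted from one positive onboard detection."""
    detected_at: datetime   # acquisition time of the source image
    latitude: float         # geolocated centre of the detected fire (deg)
    longitude: float
    extent_km2: float       # estimated fire size
    confidence: float       # ML detection confidence in [0, 1]

def prioritise(detections: list[FireAlert]) -> list[FireAlert]:
    """Keep only confident detections, strongest first, so the most
    valuable data is downlinked at the next ground-station contact."""
    confident = [d for d in detections if d.confidence >= ALERT_CONFIDENCE_THRESHOLD]
    return sorted(confident, key=lambda d: d.confidence, reverse=True)

# Example: a single synthetic detection passes the filter.
alerts = prioritise([FireAlert(datetime.now(timezone.utc), -33.86, 151.21, 4.2, 0.97)])
print(alerts)
```

Filtering and ranking before downlink is the design point here: with sparse ground contact, only the highest-confidence detections should compete for the limited link budget.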
But what level of trust can be placed in algorithmic operators versus ground-based human operators when onboard autonomy is introduced to satellite missions? With data analysis and operational decision responsibilities moved upstream and only intermittent ground station contact available to verify these autonomous activities, it is critical that such activities are rigorously assured and can be trusted within some reasonable limits.
The research
Using autonomous in-orbit fire detection to support wildfire emergency response as the driving application, this project considered the safety assurance of ML algorithms onboard small satellites:
- System design - a model-based systems engineering (MBSE) approach was followed to develop, document and communicate the requirements and behaviour of the system. The team used the Capella tool to capture system behaviour and model the dataflow through the system, identifying the failure modes associated with its functional flow.
- System safety requirements - both missed detections and the misdirection of emergency services to attend non-fires pose risks. The team defined four system safety requirements in response.
- ML safety requirements - the system safety requirements were allocated to, and interpreted for, the ML component.
- ML assurance - the AAIP's AMLAS (Assurance of Machine Learning for use in Autonomous Systems) process was used to assure the safety of the ML. The team found that the assurance artefacts generated when following AMLAS are valuable for communication with customers and partners and for building trust in the ML component.
- Hardware-in-the-loop (HIL) simulation testing - the ML component was deployed on the target hardware within a simulated environment and exercised across a set of defined operational scenarios, with evaluation covering processing results, mission results and burnt area detection; a sketch of such a scenario-level check follows this list.
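The project's four system safety requirements are not reproduced in this summary, but scenario-based verification of an ML component typically reduces to checking measured performance against quantitative bounds: missed detections constrain recall, and misdirection of emergency services constrains the false alarm rate. The sketch below is a minimal illustration of that check; the scenario names, confusion counts and thresholds are all hypothetical assumptions, not the project's actual figures.

```python
# Minimal sketch of a requirements check over HIL test scenarios.
# Thresholds and scenario results are illustrative assumptions.

MIN_RECALL = 0.95        # bound on missed detections (hypothetical)
MAX_FALSE_ALARM = 0.02   # bound on misdirecting responders (hypothetical)

# Per-scenario confusion counts from simulated passes: (tp, fn, fp, tn)
scenario_results = {
    "daylight_clear": (96, 2, 1, 901),
    "daylight_smoke": (88, 6, 3, 903),
    "cloud_partial":  (90, 4, 2, 904),
}

for name, (tp, fn, fp, tn) in scenario_results.items():
    recall = tp / (tp + fn)                # fraction of real fires detected
    false_alarm_rate = fp / (fp + tn)      # fraction of non-fires flagged
    ok = recall >= MIN_RECALL and false_alarm_rate <= MAX_FALSE_ALARM
    print(f"{name}: recall={recall:.3f}, far={false_alarm_rate:.4f}, "
          f"{'PASS' if ok else 'FAIL'}")
```

Reporting pass/fail per operational scenario, rather than one aggregate score, matches the assurance goal: evidence that the component meets its requirements across the conditions it will actually face in orbit.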
The results
The project delivered a demonstration system for autonomous wildfire detection and reporting, which the team tested in a realistic mission simulator. They developed and tested a commercial application of the ACTIONS mission, where data products generated onboard are used for ground-based burnt area detection to support the recovery of wildfire-affected areas.
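The summary does not describe the project's burnt area detection method, so as an illustration, here is a minimal sketch using the differenced Normalised Burn Ratio (dNBR), a standard remote-sensing technique for mapping burn scars from near-infrared (NIR) and shortwave-infrared (SWIR) reflectance. The 0.27 threshold is a commonly cited moderate-severity cut-off, used here as an assumed default.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-9)  # epsilon avoids division by zero

def burnt_area_mask(pre, post, threshold: float = 0.27) -> np.ndarray:
    """Differenced NBR (dNBR = pre - post); pixels above the threshold
    are flagged as burnt. Each of pre/post is a (nir, swir) band pair."""
    dnbr = nbr(*pre) - nbr(*post)
    return dnbr > threshold

# Example with synthetic 2x2 reflectance bands (NIR, SWIR) before/after a fire.
pre  = (np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([[0.2, 0.2], [0.2, 0.2]]))
post = (np.array([[0.2, 0.5], [0.5, 0.2]]), np.array([[0.4, 0.2], [0.2, 0.4]]))
print(burnt_area_mask(pre, post))  # True where the fire altered the surface
```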
Related links
- 1.1.3 Defining operating scenarios
- 2.2.1.2 Defining understanding requirements
- 2.3 Implementing requirements using machine learning (ML)
- 2.7 Using simulation
- Hawkins, R., Picardi, C., Donnell, L. and Ireland, M. "Creating a safety assurance case for an ML satellite-based wildfire detection and alert system". arXiv preprint (2022).
Project partners
- Murray Ireland (principal investigator), Craft Prospect
- Lucy Donnell and Hazel Jeffrey (co-investigators), Craft Prospect
- Dr Richard Hawkins and Dr Chiara Picardi (co-investigators), University of York
- Stuart MacCallum, Mark Howie and Freddie Hunter (co-investigators), Global Surface Intelligence