Assuring the use of AI in wildfire management
Dr Richard Hawkins, Senior Lecturer in Computer Science, explores how assuring the use of AI may help in the fight against wildfires.
Over the last five years, the rate, intensity and duration of wildfires have continued to increase. As global temperature records are broken year on year, wildfires have ravaged over 7 million acres of land in the USA since 2000, whilst between 2002 and 2016 wildfires destroyed an average of 423 million hectares each year across the African continent. In England, over 25,000 wildfires were recorded in 2022.
The global situation is serious; according to the United Nations Environment Programme (UNEP), extreme wildfires will increase by 30% by 2030 and by 50% by 2100. Finding ways to detect and manage wildfires is now a critical challenge. One approach is to use artificial intelligence (AI), incorporating machine learning (ML) components in satellite and early warning detection systems that can detect wildfires in images collected onboard the satellite. How to do this safely, however, is a question we at the Assuring Autonomy International Programme have been working to answer.
In a newly published paper we presented the first safety assurance case for an ML wildfire alert system. But how did we get to this stage, and how can it be applied in real-world scenarios?
Understanding safety assurance cases
Safety assurance cases provide a way to show that a system is safe enough to use by presenting the argument for why it is believed to be safe, and providing evidence to support that argument. In the case of ML wildfire detection, we use the AMLAS (Assurance of Machine Learning for use in Autonomous Systems) approach that we created to generate evidence throughout the model learning process, and show how this gives us confidence that the ML component will perform sufficiently well at detecting wildfires whilst avoiding false positives.
There are a number of safety issues associated with the detection of wildfires. The most obvious is that if a wildfire is not detected in a timely manner, its potential to spread and cause widespread damage increases, potentially endangering lives and increasing the environmental impact. However, reporting wildfires in places where they do not actually exist (so-called “false positives”) can also be dangerous, as it may lead to the unnecessary deployment of fire crews to dangerous locations and make resources unavailable to respond to real fires. Before we deploy ML components to detect wildfires, we need sufficient trust and confidence that they will be safe to use.
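To give a flavour of the kind of performance evidence this balance involves, the minimal Python sketch below checks a hypothetical wildfire classifier’s results against illustrative detection-rate and false-positive-rate thresholds. The function names, threshold values and labels are assumptions made purely for illustration; they are not figures from the actual safety case.

```python
# Minimal sketch: checking classifier results against illustrative
# safety-requirement thresholds. The thresholds and data here are
# hypothetical examples, not values from the actual safety case.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for a binary wildfire label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fn, fp, tn

def check_requirements(y_true, y_pred,
                       min_detection_rate=0.95, max_false_positive_rate=0.05):
    """Return a simple evidence record showing whether illustrative requirements are met."""
    tp, fn, fp, tn = confusion_counts(y_true, y_pred)
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0       # missed fires are the main hazard
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0  # false alerts waste fire crew resources
    return {
        "detection_rate": detection_rate,
        "false_positive_rate": false_positive_rate,
        "meets_detection_requirement": detection_rate >= min_detection_rate,
        "meets_false_positive_requirement": false_positive_rate <= max_false_positive_rate,
    }

# Example usage with made-up labels (1 = wildfire present, 0 = no wildfire).
truth      = [1, 1, 1, 0, 0, 0, 0, 1]
prediction = [1, 1, 0, 0, 0, 1, 0, 1]
print(check_requirements(truth, prediction))
```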
Explicitly documenting the safety case for the ML component makes it easier for assessors and regulators to understand why the component is trusted to operate. It also makes it easier to review and challenge the safety of the system prior to its operation.
The challenges of a safety assurance case for wildfire detection
In our experience, the most challenging aspects of assuring the safety of ML lie in understanding and correctly specifying the safety requirements for the ML component, and in demonstrating that the data used during model development is sufficient. For the wildfire detection application, the nature of the operating environment makes this a particularly interesting challenge. As part of the safety case, we must be very clear about the required scope of operation of the wildfire detector and which features of the environment are important. These environmental features matter not just for specifying requirements on data, but also for defining test cases for the ML component. For wildfire detection we were interested in factors such as the size and intensity of the fire, the level of cloud cover, the type of land being observed and the time of year. All of these factors affect the wildfire images obtained by the satellite and therefore the performance of the model.
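As a rough illustration of how such environmental features might be captured for test-case definition, the sketch below enumerates a handful of simplified factors and generates their combinations as conditions to be covered. The particular categories and values are assumptions for illustration only, not the specification actually used in our work.

```python
# Minimal sketch: enumerating simplified environmental factors from the
# operating domain and generating test-case combinations. The categories
# and values are illustrative assumptions, not the real specification.
from itertools import product

operating_domain = {
    "fire_size":   ["small", "medium", "large"],
    "cloud_cover": ["clear", "partial", "heavy"],
    "land_type":   ["forest", "grassland", "urban_fringe"],
    "season":      ["dry", "wet"],
}

# Each combination of factors becomes a condition that the test data
# (and the training data) should be shown to cover.
test_conditions = [
    dict(zip(operating_domain.keys(), values))
    for values in product(*operating_domain.values())
]

print(f"{len(test_conditions)} environmental conditions to cover")
print(test_conditions[0])  # e.g. {'fire_size': 'small', 'cloud_cover': 'clear', ...}
```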
Collecting data that was suitable for model learning was a considerable challenge. We used historical images of known wildfires to train and validate the models, and an important part of the safety case was justifying the sufficiency of the datasets used, since these determine the performance of the model. In particular, we needed to ensure that from the training data we were able to create a component that “generalised”, meaning that it works well for the wide range of images that might be seen when the component is in operation on the satellite.
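One simple way to probe generalisation, sketched below under the same simplified assumptions as above, is to break validation performance down by environmental condition rather than reporting a single aggregate figure. The records and the 0.9 threshold here are hypothetical and purely illustrative.

```python
# Minimal sketch: breaking validation accuracy down by environmental
# condition to look for subsets where the model generalises poorly.
# The records and the 0.9 threshold are hypothetical illustrations.
from collections import defaultdict

# Each record: (cloud_cover, true_label, predicted_label)
validation_results = [
    ("clear", 1, 1), ("clear", 0, 0), ("clear", 1, 1),
    ("partial", 1, 1), ("partial", 0, 1),
    ("heavy", 1, 0), ("heavy", 0, 0), ("heavy", 1, 0),
]

per_condition = defaultdict(lambda: {"correct": 0, "total": 0})
for condition, truth, prediction in validation_results:
    per_condition[condition]["total"] += 1
    per_condition[condition]["correct"] += int(truth == prediction)

for condition, counts in per_condition.items():
    accuracy = counts["correct"] / counts["total"]
    flag = "" if accuracy >= 0.9 else "  <- investigate: weak generalisation"
    print(f"{condition:8s} accuracy = {accuracy:.2f}{flag}")
```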
The real-world impact of AI in wildfire management
The AAIP’s work in this area represents the first time that an explicit safety case has been created for an ML component in relation to wildfire management. This is an important step towards deploying AI in real-world applications. It is only through explicitly documenting the argument and evidence in a safety case that we can provide the assurance required for AI components. Without this, the benefits that AI can bring to tasks such as wildfire detection cannot be fully realised.
However, we have to remember that, as with most applications of AI, the ML wildfire detection component is one small part of a larger wildfire response system that includes the satellite and its sensors as well as the ground station, the communications technology and the emergency services. The ML safety case must be considered within this broader context, along with how trust is maintained in the overall wildfire response capability. In other research, we have shown how the overall safety case for the larger AI system can be constructed.
One of the ongoing considerations for our research is how the safety case for an AI-supported system like this is maintained throughout its lifetime, including how safety is affected by changes to the system or to the way in which it operates.
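As a simple illustration of how such change might be flagged in operation, the sketch below compares the rate of wildfire alerts seen in service against the rate assumed in the safety case evidence and flags large deviations for review. All figures, names and the tolerance are hypothetical assumptions, not part of our published work.

```python
# Minimal sketch: flagging when in-service behaviour drifts from the
# behaviour assumed in the safety case. All figures are hypothetical.

def alert_rate_drift(expected_rate, observed_alerts, images_processed, tolerance=0.5):
    """Flag a safety case review if the observed alert rate deviates from
    the expected rate by more than the given relative tolerance."""
    observed_rate = observed_alerts / images_processed
    relative_change = abs(observed_rate - expected_rate) / expected_rate
    return {
        "observed_rate": observed_rate,
        "relative_change": relative_change,
        "review_needed": relative_change > tolerance,
    }

# Example: the safety case evidence assumed ~2 alerts per 1,000 images,
# but in the latest monitoring window we saw 9 alerts in 1,000 images.
print(alert_rate_drift(expected_rate=0.002, observed_alerts=9, images_processed=1000))
```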
Have a research project you’d like support with?
Our wealth of academic resources and state-of-the-art facilities can help you reach your unique research goals. The AAIP and the University of York are committed to fostering innovation through our interdisciplinary approach. Learn more about our pioneering research or contact our Research and Innovation manager to discuss your research project needs.