Spotlight profile: Professor Simon Burton

News | Posted on Thursday 4 July 2024

Our Chair of Systems Safety and Business Lead for the Centre for Assuring Autonomy shares a little about his background, his hopes for future opportunities and the top three things companies need to know about safe AI.

Image: Professor Simon Burton, Business Lead for the Centre for Assuring Autonomy.

Can you tell us a little about your area of research?

My research explores the intersection of systems safety engineering, artificial intelligence (AI) and the legal, ethical and regulatory considerations necessary to form convincing safety assurance arguments for complex, autonomous and AI-based systems. This includes pursuing the following questions:

  • How safe is safe enough? What is the application-specific definition of safe or trustworthy? How can the specific safety requirements on the system be derived and expressed? This requires an interdisciplinary view combining theoretical, engineering, governance and ethical perspectives.
  • How to engineer safe, autonomous systems? Which technologies and engineering methods can be applied to fulfil the safety requirements for any given task? This work combines an understanding of the basic concepts of AI and machine learning, safety analysis and systems engineering.
  • How to argue that an appropriate level of safety has been met? Which combination of analysis, test and other sources of evidence can be used to formulate a convincing argument that the safety requirements of the system have indeed been fulfilled? This includes an investigation of the residual uncertainty in such arguments, for example, based on belief theory and statistical properties of safety evidence.

How will that impact a domain such as automotive?

The automotive sector sees the increase in driver assistance and automated driving functions as a key step towards further improving road safety. These functions are increasingly reliant on advanced AI technologies based on machine learning for performing tasks such as perception and planning. Understanding and managing the balance between the increased risk introduced by new technologies and their expected safety benefit requires an informed debate from a regulatory and broader societal perspective. Furthermore, at a technical level, we require improved methods of specifying and evaluating safety criteria that consider the complexity of the system, its technological basis (e.g. AI), as well as the tasks to be performed and the environment in which it should operate.

My research is developing methods that address these challenges and thus help to progress the safe and trustworthy introduction of automated vehicle functions. In addition, as the convenor of an ISO working group on safety and AI for road vehicles and the project lead for the first standard in this area (ISO PAS 8800), I hope to support industry, as well as regulators, to ensure that vehicles deployed with this technology are truly safe.

What led you to start working in systems safety engineering?

I first became interested in systems safety engineering in 1997, when I started working on my PhD in the area of automated verification of safety-critical software. The area of systems safety engineering particularly appealed to me as it requires an interdisciplinary approach and a holistic understanding of the system and its context. This fits very well with my broad areas of interest and experience. As well as the engineering challenges associated with systems safety engineering, I also enjoy the fact that work in this area is driven by a clear purpose.

Based on your areas of research, what kind of companies are you interested in working with and why? 

I am always looking for opportunities to transfer the results of our research into real-world systems, and there are various ways in which this can take place. For example, we work with a number of large multinational companies to support their own internal research programmes, in areas including automotive, industrial robotics, maritime and healthcare.

We also support companies in building up their own competencies in systems safety and can provide hands-on support, including tailored training schemes and worked examples. Startups often have great ideas for new technologies and can benefit from our support in achieving conformance with strict safety standards and regulations.

Lastly, we support the regulators themselves in understanding the impact of new technologies and developing the capabilities and approval mechanisms that would ensure that the general public are not exposed to undue risk.

What are the best parts about working in this area?

As I mentioned, knowing that our work can potentially save lives and increase the well-being of a large number of people is a great motivator. Apart from that, I really enjoy the fact that I get to work with such a diversity of people, from philosophers and medical practitioners to leading-edge technologists and scientists, as well as our fantastic team of safety experts in York. And of course, our work is never done, as there are always new technological innovations that need to be looked at through the lens of systems safety in order for their true potential to be reached.

What are the top three things you think organisations need to know about safe AI?

  1. We use AI to solve inherently complex tasks, using advanced technologies that are often not amenable to the types of rigorous analysis we have applied to safety-critical software systems in the past. This introduces a high degree of uncertainty into the safety assurance process.
  2. Following on from the first point, there are nevertheless structured approaches to addressing these issues, and concrete progress is being made. We therefore see a path to the gradual introduction of AI into high-integrity systems.
  3. Lastly, safe AI is not only about the existential risk posed by runaway super-intelligent systems, often discussed amongst the more dystopian-minded groups of researchers, though we should not underestimate this risk either. We also need to consider the many systems in which AI, and machine learning in particular, is being introduced that can have a direct, tangible impact on the physical well-being of their users or bystanders. If we can provide practical solutions to assuring the safety of AI functions embedded in physical systems, we can also unlock their potential to provide a true benefit to humanity, whether that be improving patients' experience in our healthcare system or making the roads a safer place.

If you’d like to speak to Simon about opportunities for collaboration and projects around systems safety engineering and safe AI, please get in touch via our webform.