Assurance of Machine Learning for use in Autonomous Systems (AMLAS)

AMLAS is the world's first methodology that supports the safer deployment of autonomous technologies with machine learning components.

What is AMLAS?

AI-enabled systems are becoming increasingly commonplace in almost all sectors. To enable their safe use, industries need credible and compelling safety cases. AMLAS provides a set of safety case patterns and a six-stage process for systematically integrating safety assurance into the development of ML components.

Why use AMLAS?

The Centre for Assuring Autonomy is leading the way in meeting this need with AMLAS. Developed in partnership with industry, peer-reviewed and validated, AMLAS is a robust process that enables the creation of an explicit safety case for the use of ML within autonomous systems.

Our guidance:

  • Is a structured process which generates a compelling and detailed safety case.
  • Provides assurance that an autonomous system with machine learning will perform safely and as expected.
  • Is a practical framework that can be used alongside existing development processes.

Since launching in 2022, AMLAS has been used by industries across the globe. From healthcare to transport, it has been embedded in safety processes, referenced in new standards and recommended by safety engineers.

Read our papers detailing example uses of AMLAS:

Creating a safety assurance case for a machine learned satellite-based wildfire detection and alert system 

Ergo, SMIRK is safe: a safety case for a machine learning component in a pedestrian automatic emergency brake system

Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment 

Who should use AMLAS?

Safety engineers

AMLAS enables safety engineers to determine the specific safety considerations for the machine learning component, evaluate its impact within the autonomous system and use this to develop a safety case.

Machine learning developers

AMLAS assists in the development of machine learning components that satisfy safety requirements and generates the artefacts required to support the safety case.

Board level decision makers

Through the creation of robust safety cases, AMLAS gives senior management teams and board members assurance and confidence in the deployment and use of autonomous systems with machine learning (also called AI or AI-enabled).

"As a result of working with AAIP and using AMLAS, we have been able to understand the process of deploying and assuring the use of machine learning components in small satellite missions. We have applied AMLAS to a simulated wildfire monitoring mission in the first instance and are now leveraging it in ongoing work on other autonomous space technologies."

Murray Ireland, Head of Autonomous Systems, Craft Prospect

Use AMLAS in Microsoft Visio

To help organisations integrate AMLAS into their processes, we created a prototype Microsoft Visio tool for AMLAS. You can use the tool to:

  • systematically guide you through the AMLAS activities, enabling you to add artefacts as they are created
  • automatically create a safety case for the ML component as you work through AMLAS
  • keep track of your progress and see how much you have left to do
  • share the status of your ML safety case with other people.

Download AMLAS for Microsoft Visio: AMLAS Tool v1 May 2022 (zip, 2,985kb)

You can also download the AMLAS Tool user guide v1 May 2022 (PDF, 1,344kb) or watch our guidance video.

Contact us

Centre for Assuring Autonomy

assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH