
Safe AI and autonomous systems guidance
Here you will find our freely available safety guidance and frameworks for autonomous systems (AS) and AI-enabled systems.
Confidence in technology, and being able to prove that AS and AI systems are safe, is critical to the success of AI and autonomy in society and to ensuring that such systems benefit all.
All our guidance is freely published and available for any organisation to use.
Developed in partnership with industry
Our guidance and methodologies are practical and usable because they have been developed in consultation with our industry partners. This means our guidance has been tested by those at the forefront of developing AI and autonomous systems, ensuring it meets the needs of practitioners in this space.
Contact us
Centre for Assuring Autonomy
assuring-autonomy
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, York, YO10 5DD
LinkedIn
Safe AI and autonomous systems methodologies
In addition to being developed in partnership with industry, our ready-to-use methodologies have been validated and peer-reviewed. Both AMLAS and SACE are actively used and recommended by organisations, regulators and government across multiple domains to produce safety assurance cases for AI and AS. You can also find examples of using both AMLAS and SACE in our Body of Knowledge.
Access and download Assurance of Machine Learning for use in Autonomous Systems (AMLAS) and Safety Assurance of autonomous systems in Complex Environments (SACE) below.

The BIG Argument
Our BIG Argument sets out a way to bring the entire safety case argument together in one comprehensive approach. It draws together our existing methodologies and covers the development of frontier AI technologies.
- Learn more about our BIG Argument for AI Safety cases
- Download the BIG Argument paper

SACE
SACE is a free-to-use methodology which enables the creation of a safety case for overall system-level assurance activities.
New to SACE? Learn more about how we developed it or read our factsheet.

AMLAS
AMLAS is a free-to-use methodology which enables the creation of a safety case for a machine learning component. There are multiple ways to access AMLAS, depending on what you need:
- A downloadable AMLAS PDF
- An interactive version of AMLAS
New to AMLAS? Learn more about how we developed it or read our factsheet.

PRAISE
PRAISE is our ethical framework for structuring a principles-based ethics assurance argument. Engineers, developers, operators and regulators can use it to justify, communicate or challenge a claim about the overall ethical acceptability of using AI in a given socio-technical context.
- Learn more about PRAISE
- Download our paper on PRAISE
- View our PRAISE case study