Safe AI and autonomous systems guidance
Here you will find our freely available safety guidance and frameworks for autonomous and AI-enabled systems.
Confidence in technology, and the ability to demonstrate that autonomous systems (AS) and AI systems are safe, is critical to the success of AI and autonomy in society and to ensuring that such systems benefit all.
All our guidance is freely published and available for any organisation to use.
Developed in partnership with industry
Our guidance and methodologies are practical and usable because they have been developed in consultation with our industry partners. This means our guidance has been tested by organisations working at the forefront of AI and autonomous systems development, ensuring it meets the needs of practitioners in this space.
Contact us
Centre for Assuring Autonomy
assuring-autonomy@york.ac.uk
+44 (0)1904 325345
Institute for Safe Autonomy, University of York, Deramore Lane, York YO10 5GH
Safe AI and autonomous systems methodologies
In addition to being developed in partnership with industry, our ready-to-use methodologies have been validated and peer-reviewed. Both AMLAS and SACE are actively used and recommended by organisations across multiple domains, by regulators, and by government to produce safety assurance cases for AI and autonomous systems. You can also find examples of applying both AMLAS and SACE in our Body of Knowledge.
Access and download Assurance of Machine Learning for use in Autonomous Systems (AMLAS) and Safety Assurance of Autonomous Systems in Complex Environments (SACE) below.