This demonstrator project brings together work at the UKRI TAS Hub, research at the Alan Turing Institute, and expertise at the Assuring Autonomy International Programme (AAIP). In particular, it expands and develops the Turing's methodology and platform for building assurance cases for ethical and trustworthy algorithmic and autonomous systems. It also complements the AAIP's current work on ethical assurance, drawing on its expertise in argument-based assurance to refine the methodology and platform in different contexts, including digital healthcare and autonomous mobile robotics.

The challenge

As in safety assurance, trustworthy assurance cases can be developed to document and communicate structured, justifiable arguments about relevant projects and systems. However, trustworthy assurance is designed to address wider ethical goals, such as sustainability, accountability, fairness, and explainability, while also incorporating vital aspects of operationalisable technical standards (e.g. AI risk management), stakeholder engagement, and public participation.
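To make this concrete, the following is a minimal sketch of how such a case might be recorded as structured data: a top-level ethical goal, supported by claims that are each grounded in evidence. The goal, claims, and evidence shown are hypothetical illustrations, not an example taken from the project.

    # Hypothetical sketch: a trustworthy assurance case recorded as structured
    # data, pairing an ethical goal with the claims and evidence supporting it.
    assurance_case = {
        "goal": "The symptom-checking app is explainable to its users",
        "context": "Deployed in a digital healthcare setting",
        "claims": [
            {
                "claim": "Users receive an accessible rationale for each recommendation",
                "evidence": ["Usability study report", "Rationale-generation test suite"],
            },
            {
                "claim": "Clinicians can trace a recommendation back to its inputs",
                "evidence": ["Audit-log review"],
            },
        ],
    }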

The research

We are researching how Goal Structuring Notation (GSN) approaches, a principles-based ethical assurance argument pattern, and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) methodology, currently being researched and developed at the AAIP, could provide a more robust foundation for the methodology, while also demonstrating its practical value for developers and regulators. This allows us to expand the methodology to areas beyond digital healthcare and mobile robotics, while keeping these domains as important anchoring points and illustrative case studies.
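As a rough illustration of how a GSN-style argument pattern can be encoded, the sketch below builds a small goal structure using the core GSN element types (Goal, Strategy, Solution). This is a hypothetical example under our own naming, not the project's platform or pattern, and the fairness claims and evidence items are invented for illustration.

    # Minimal sketch of a GSN-style goal structure (hypothetical example).
    from dataclasses import dataclass, field

    @dataclass
    class Element:
        identifier: str
        statement: str
        children: list["Element"] = field(default_factory=list)

    class Goal(Element): ...       # a claim to be supported
    class Strategy(Element): ...   # how a goal is decomposed into sub-goals
    class Solution(Element): ...   # evidence grounding a leaf goal

    # A top-level ethical goal, decomposed via a strategy into sub-goals,
    # each grounded in an item of evidence.
    fairness = Goal("G1", "Outputs of the triage model are fair across patient groups")
    fairness.children.append(
        Strategy("S1", "Argue fairness over each stage of the model lifecycle", children=[
            Goal("G1.1", "Training data are representative of the target population",
                 children=[Solution("E1", "Dataset audit report")]),
            Goal("G1.2", "Error rates are comparable across relevant patient groups",
                 children=[Solution("E2", "Disaggregated evaluation results")]),
        ])
    )

    def render(element, depth=0):
        # Print the goal structure as an indented outline.
        print("  " * depth + f"[{element.identifier}] {element.statement}")
        for child in element.children:
            render(child, depth + 1)

    render(fairness)

The same tree structure can be rendered graphically in standard GSN diagram form, which is how such argument patterns are typically communicated to developers and regulators.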

The results

The Trustworthy and Ethical Assurance of Digital Healthcare project formally began in April 2023. A series of engagement and co-design events with regulators and policy-makers, developers, and researchers is currently being designed and organised, and is expected to run in early autumn 2023. However, the work has already been promoted by the UK Government and used as a case study on the development of structured assurance arguments.