Critical Barriers to Assurance and Regulation (C-BARs)

A Critical Barrier to Assurance and Regulation (C-BAR) is a problem that must be solved for a particular system or domain, in order to avoid one or more of the following risks:

  • a safe system cannot be deployed (losing the benefit of the technology)
  • an unsafe system is deployed (because there is no clear evidence with which to assure its operation)
  • the adoption of safe technology is slow
  • adoption stalls in a particular domain
  • the level of accidents and incidents leads to a backlash

Where RAS adapt their behaviour in operation, e.g. through machine learning, how can it be assured that they are, and continue to be, safe, and what regulatory frameworks are necessary to enable risk to be accepted?

Where RAS can operate safely within known bounds, e.g. of visibility or adhesion on a road, how are these limits identified in design and in operation, and how is a safe transition achieved, and assured, before the limits are reached?
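
By way of illustration only, the sketch below shows one shape a runtime monitor for such limits might take: hypothetical visibility and adhesion readings are checked against design-time bounds, with a margin so that a safe transition can be requested before the limit itself is reached. The limit values, margins and status names are invented for the example, not a recommended design.

```python
# Illustrative sketch (not from the source): a minimal runtime monitor that
# checks hypothetical environmental readings against design-time limits and
# flags the need for a safe transition, with a margin, before a limit is reached.
from dataclasses import dataclass

@dataclass
class OperatingLimit:
    name: str
    minimum: float   # design-time lower bound for safe operation (invented values below)
    margin: float    # headroom needed to complete a safe transition

    def status(self, value: float) -> str:
        if value < self.minimum:
            return "OUTSIDE_ENVELOPE"      # limit already breached
        if value < self.minimum + self.margin:
            return "TRANSITION_REQUIRED"   # act now, before the limit is reached
        return "WITHIN_ENVELOPE"

LIMITS = [
    OperatingLimit("visibility_m", minimum=50.0, margin=20.0),
    OperatingLimit("adhesion_coefficient", minimum=0.3, margin=0.1),
]

def assess(readings: dict) -> str:
    """Return the most conservative status across all monitored limits."""
    order = ["WITHIN_ENVELOPE", "TRANSITION_REQUIRED", "OUTSIDE_ENVELOPE"]
    return max((limit.status(readings[limit.name]) for limit in LIMITS),
               key=order.index)

# Example: visibility is comfortable, adhesion is approaching its limit.
print(assess({"visibility_m": 120.0, "adhesion_coefficient": 0.35}))  # TRANSITION_REQUIRED
```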

Where a RAS (capability) is known to be effective in one domain, e.g. sense and avoid for mobile robots in a factory, how can it be assessed for adequacy in another environment, e.g. in a hospital? (NB: this has similarities with the problem of safe reuse)

What decisions made by a RAS, e.g. object classification and path planning, need to be explained to users or regulators (as part of an acceptance or regulatory process), and how can this be done effectively given that, in operation, the systems will make many decisions a second?

If (semi-) autonomous systems have to hand (back) control to a human operator, how can it be ensured and assured that the operator has sufficient situational awareness to be able to take over control safely and effectively?
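
A minimal sketch of the "ensured" half of this question, under invented readiness signals (eyes-on-road time, hands on wheel, alert acknowledgement): control is transferred only once all signals hold, otherwise the system remains in an autonomous fallback. The signals and thresholds are assumptions for illustration, not a recommended design.

```python
# Illustrative sketch (not from the source): a hypothetical gate on handing
# control back to a human, requiring all readiness signals to hold; otherwise
# the system stays in an autonomous fallback (e.g. a minimal-risk manoeuvre).
def ready_to_take_over(eyes_on_road_s: float,
                       hands_on_wheel: bool,
                       alert_acknowledged: bool,
                       required_eyes_on_s: float = 3.0) -> bool:
    """All (invented) readiness signals must hold before control is transferred."""
    return (eyes_on_road_s >= required_eyes_on_s
            and hands_on_wheel
            and alert_acknowledged)

def handover(eyes_on_road_s: float, hands_on_wheel: bool, alert_acknowledged: bool) -> str:
    if ready_to_take_over(eyes_on_road_s, hands_on_wheel, alert_acknowledged):
        return "TRANSFER_CONTROL"
    return "CONTINUE_AUTONOMOUS_FALLBACK"

print(handover(4.2, True, True))    # TRANSFER_CONTROL
print(handover(1.0, True, False))   # CONTINUE_AUTONOMOUS_FALLBACK
```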

Where humans and RAS unavoidably interact physically, and the RAS is sufficiently powerful/capable to cause harm, how can it be ensured and assured that the RAS does not injure the humans it interacts with?

If a potentially harmful incident or an accident occurs, what information needs to be provided to support investigation, and how is this achieved and enforced in regulatory frameworks, noting that it may require gathering information from systems not directly involved in the incident or accident?
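
Purely as an illustration, one hypothetical shape for such an investigation-support record is sketched below, including pointers to nearby systems that may hold relevant data even though they were not involved. What must actually be recorded, for how long, and how witness data is obtained are the open regulatory questions, not settled by the example; all field names and values are invented.

```python
# Illustrative sketch (not from the source): one hypothetical shape for an
# investigation-support record kept by a RAS, including pointers to nearby
# systems that may hold relevant data even though they were not involved.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    system_id: str
    timestamp: datetime
    position: tuple                  # e.g. (latitude, longitude)
    software_version: str
    sensor_snapshot: dict            # raw or summarised perception inputs
    recent_decisions: list = field(default_factory=list)   # decisions and their rationale
    witnesses: list = field(default_factory=list)          # ids of nearby systems to query

record = IncidentRecord(
    system_id="ras-0042",
    timestamp=datetime.now(timezone.utc),
    position=(53.946, -1.031),
    software_version="2.3.1",
    sensor_snapshot={"camera_frame_id": 88412, "lidar_frame_id": 88412},
    recent_decisions=[{"t_minus_s": 1.2, "action": "brake", "reason": "pedestrian detected"}],
    witnesses=["ras-0017", "infrastructure-camera-07"],
)
print(record.system_id, record.timestamp.isoformat())
```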

Where operators are required to monitor RAS to ensure that the system is operating as expected and/or safely, how can it be ensured and assured that they retain sufficient levels of attention and concentration, or what bounds can be put on the monitoring function to ensure that it will be undertaken effectively?

As RAS are likely to significantly modify the risk-benefit balance across many domains, and the effects of autonomy (especially some aspects of AI) exacerbate the intrinsic uncertainty in assessing risk, how can risk be estimated, communicated and accepted by both the regulatory community and the public?

As many RAS cannot be tested in real operational environments prior to their use, how can simulation be used to greatest effect to enable assurance and regulation, and when does simulation provide sufficient evidence (in itself or in combination with other means of verification and validation (V&V)) to allow controlled use of the RAS?
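
The toy sketch below illustrates the kind of evidence a simulation campaign produces: a sweep over scenario parameters with a pass/fail safety property evaluated in each case. Here the "simulator" is just a constant-deceleration braking model with invented parameter ranges; a real campaign would rest on a validated simulator and an argued scenario-selection strategy.

```python
# Illustrative sketch (not from the source): a toy scenario sweep. The
# "simulation" is a constant-deceleration braking model; each scenario is
# judged against a single safety property (the system stops before the obstacle).
import itertools

def stops_in_time(speed_mps: float, obstacle_m: float, friction: float,
                  reaction_s: float = 1.0, g: float = 9.81) -> bool:
    stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2 * friction * g)
    return stopping_m <= obstacle_m

speeds = [10.0, 15.0, 20.0]        # m/s (invented ranges)
obstacles = [40.0, 60.0, 80.0]     # m
frictions = [0.3, 0.5, 0.8]        # surface conditions

results = [(s, o, f, stops_in_time(s, o, f))
           for s, o, f in itertools.product(speeds, obstacles, frictions)]
failures = [r for r in results if not r[3]]

print(f"{len(results)} scenarios run, {len(failures)} violate the property")
for s, o, f, _ in failures:
    print(f"  fails at speed={s} m/s, obstacle={o} m, friction={f}")
```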

Where RAS are elements of systems of systems (either with other RAS or ‘manually’ controlled systems) that are known to be ‘individually safe’, how can safe interaction be assured in their intended operational environment?

When RAS use machine learning, how can it be shown that the training sets (and test sets) give enough coverage of the environment to provide sufficient evidence (in itself or in combination with other means of V&V) to allow controlled use of the RAS?
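
As a crude illustration of what a coverage measure over the environment might look like, the sketch below bins a hypothetical operational environment along three invented dimensions and reports the fraction of cells for which at least one training sample exists; agreeing the dimensions and bins is itself part of the barrier.

```python
# Illustrative sketch (not from the source): a crude coverage measure over a
# binned model of the operational environment, assuming every training sample
# is labelled with the conditions under which it was collected.
from itertools import product

# Hypothetical discretisation of the environment (the hard part in practice).
DIMENSIONS = {
    "lighting": ["day", "dusk", "night"],
    "weather": ["clear", "rain", "fog"],
    "pedestrian_density": ["low", "medium", "high"],
}

def environment_coverage(samples):
    """Fraction of environment cells containing at least one sample."""
    cells = set(product(*DIMENSIONS.values()))
    seen = {tuple(sample[d] for d in DIMENSIONS) for sample in samples}
    return len(seen & cells) / len(cells)

training_set = [
    {"lighting": "day", "weather": "clear", "pedestrian_density": "low"},
    {"lighting": "day", "weather": "rain", "pedestrian_density": "medium"},
    {"lighting": "night", "weather": "clear", "pedestrian_density": "low"},
]
print(f"environment coverage: {environment_coverage(training_set):.0%}")  # 3 of 27 cells
```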

How can we identify effective means of validating RAS, especially their AI components, e.g. using simulation, hazard analysis, etc., and are there effective coverage measures of the environment to allow controlled use of the RAS?

How can we identify effective means of verifying RAS, especially their AI components, e.g. using testing, formal verification, etc., and are there effective coverage measures of the learnt decision space to allow controlled use of the RAS?
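
The sketch below illustrates the idea of coverage over a learnt decision space in the simplest possible form: it records which of a component's possible decisions the test inputs actually exercise. The "learnt" policy here is a hand-written stand-in; for neural networks, analogous structural measures (e.g. neuron coverage) have been proposed in the research literature.

```python
# Illustrative sketch (not from the source): the simplest possible coverage
# measure over a learnt component's decision space, i.e. which of its possible
# decisions the test inputs actually exercise. The policy is a hand-written
# stand-in for a learnt function.
POSSIBLE_DECISIONS = {"continue", "slow_down", "stop", "hand_over"}

def learnt_policy(obstacle_m: float, confidence: float) -> str:
    if confidence < 0.5:
        return "hand_over"
    if obstacle_m < 10:
        return "stop"
    if obstacle_m < 30:
        return "slow_down"
    return "continue"

test_inputs = [(5.0, 0.9), (20.0, 0.8), (50.0, 0.95)]
exercised = {learnt_policy(d, c) for d, c in test_inputs}

print(f"decision coverage: {len(exercised)}/{len(POSSIBLE_DECISIONS)}")
print("decisions never exercised:", POSSIBLE_DECISIONS - exercised)
```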

Please note that the way ‘critical barriers’ are defined above assumes that, in some cases, RAS are not approved for operation because of a lack of ‘solutions’ to the barriers and that, in other cases, systems are approved because, although there is no agreed way of assessing them, there are no grounds for rejecting them within the regulatory framework.