Body of Knowledge definitions
These definitions capture what is meant by the terms as they are used in the assurance objectives in the BoK. Where alternative definitions are required as part of guidance material in the BoK (for example, if domain-specific guidance uses the term ‘Hazard’ in a different way), terms may be redefined for that purpose, but the standard definitions below should remain stable as the default throughout the BoK. These definitions should be used consistently throughout the AAIP.
- Accident: An unintended event or sequence of events leading to harm.
- Argument: A series of claims intended to establish the truth of a conclusion.
- Assurance: Justified confidence in a property.
- Assurance argument: An argument used to demonstrate assurance based upon the available evidence.
- Assurance case: Arguments and evidence intended to demonstrate assurance.
- Assurance case pattern: A means of documenting and reusing assurance argument structures.
- Assurance deficit: A specific source of epistemic uncertainty caused by a lack of knowledge or information.
- Attack: An event or sequence of events through which a vulnerability may be exploited.
- Autonomous: Having autonomy.
- Autonomy: The capability to make decisions free from human control.
- Autonomous system: Able to operate independently of human control.
- Component: Element that forms part of a system.
- Compliance: Fulfilment of requirements.
- Failure mode: A specific way in which failure may occur.
- Formal verification: Verification using mathematical methods.
- Hazard: A condition of a system that can develop into an accident through a sequence of normal events and actions.
- Hazardous behaviour: Behaviour that may result in a hazard.
- Hazard risk: The product of the severity and probability of a hazard.
- Incident: An event which significantly degrades safety margins, but does not lead to an accident.
- Machine learning: Getting computers to learn from data in the form of observations and real-world interactions in order to create a model of the real world.
- Random failure: Failure due to random events, most commonly resulting from physical causes, that can be characterised by statistical failure models.
- Regulation: A set of rules or directives.
- Regulator: An organisation that can make, maintain or enforce regulations.
- Reinforcement learning: A type of machine learning that allows computers to determine their required behaviour through exploration within a specific context, in order to maximise some notion of cumulative reward (see the sketch following this list).
- Residual risk: The risk that remains once all risk reduction measures have been taken.
- Risk: The product of severity and probability (see the worked example following this list).
- Robot: A machine capable of carrying out a complex series of actions automatically.
- Robotics: The design, construction, operation, and use of robots.
- Safety: The degree of freedom from hazard risk.
- Safety assurance: Justified confidence in safety.
- Safety case: An evidence-based justification of safety assurance.
- Safety requirement: Description of a property or behaviour required to ensure safety.
- Simulation: A model of a real-world situation on a computer.
- Static analysis: Evaluation without operation.
- System: A group of interacting or interrelated elements that form a unified whole.
- Systematic failure: Failure due to flaws in specification, design, manufacture, installation or maintenance.
- Testing: Evaluation through operation.
- Validation: The evaluation of the correctness of a specification.
- Verification: The evaluation of compliance with a specification.
- Vulnerability: A weakness which can be exploited to perform an attack against assets.
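The ‘Reinforcement learning’ entry describes behaviour determined through exploration so as to maximise cumulative reward. A minimal sketch of one common such technique, tabular Q-learning, on an invented two-state toy problem (all names, rewards and parameter values here are illustrative assumptions, not BoK material):

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on an
# invented two-state toy problem. Names and numbers are illustrative only.
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: matching the action to the state pays reward 1."""
    reward = 1.0 if action == state else 0.0
    next_state = random.randrange(N_STATES)  # environment evolves randomly
    return reward, next_state

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-value estimates

state = 0
for _ in range(10_000):
    # Exploration within the context: sometimes try a random action,
    # otherwise exploit the action currently estimated to be best.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    reward, next_state = step(state, action)
    # Move the estimate towards reward plus discounted future value, so the
    # learned behaviour maximises a notion of *cumulative* reward.
    target = reward + GAMMA * max(q[next_state])
    q[state][action] += ALPHA * (target - q[state][action])
    state = next_state

print(q)  # expect q[s][s] > q[s][1 - s] for each state s after training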
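To make the ‘Risk’ and ‘Residual risk’ entries concrete, a worked example with invented numbers (the severity weighting and per-hour probabilities are purely illustrative; the BoK does not prescribe particular scales):

```latex
\[ \text{Risk} = \text{Severity} \times \text{Probability} \]
% Illustrative numbers only: severity weighting 100 on some agreed scale,
% probability of the hazard 1e-4 per operating hour:
\[ R = 100 \times 10^{-4} = 10^{-2} \text{ per operating hour} \]
% A mitigation that reduces the probability to 1e-6 leaves a residual risk of
\[ R_{\text{residual}} = 100 \times 10^{-6} = 10^{-4} \text{ per operating hour} \]
```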
Further discussion of ‘autonomy’
The Programme takes the view that the key difference between manually controlled and autonomous systems is that the robotics and autonomous system (RAS) has decision-making capability and authority. This is what is meant by decisions free from human control. All software implements decisions in a sense, e.g. taking an else branch rather than a then branch. However, the intent is that the decisions are those that might otherwise have been taken by humans and that require intelligence, situational understanding and freedom, in the sense of individual autonomy, e.g. stopping at a red light, or categorising an object as a person rather than a lamp-post.
The notion of “taken by humans” is not sharply defined, and we might define some systems, e.g. a kettle which shuts off when the water is boiling, as automatic rather than autonomous. In general, we would expect the term autonomous, rather than automatic, to be used where:
- there is an open environment, e.g. driving on the roads, as opposed to a closed environment which is well-defined and understood;
- the range of options in decision-making is very large and may not even be bounded;
- there is considerable uncertainty in assessing the situation and/or choosing a course of action (making a decision).
In practice, the BoK will provide guidance in a way which reflects the particular challenges, e.g. open vs closed environments, and will not be constrained by whether a given RAS is viewed as automatic or autonomous. In many domains, standards or other documents define levels of autonomy, from full human control, via shared human-machine decision-making (or the possibility of handover from machine to human), up to “full autonomy”, consistent with the definition given above. The intent is that the definition is interpreted flexibly, and would include shared human-RAS decision-making, not just “full autonomy”.

Dictionary definitions of autonomy use phrases like “freedom from influence and control”. We have deliberately excluded “influence”, as we would expect a RAS to be influenced by its operating environment, e.g. the behaviour of other cars or pedestrians in autonomous driving, and the behaviour of other ships in maritime autonomy.
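As an illustration of how such levels of autonomy might be represented in analysis tooling, a minimal sketch; the level names below are invented for this example and are not taken from the BoK or any particular standard:

```python
# Illustrative sketch only: one way levels of autonomy might be encoded.
# Level names are invented, not from the BoK or any particular standard.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    MANUAL = 0      # full human control
    SHARED = 1      # shared human-machine decision-making
    SUPERVISED = 2  # machine decides, with possible handover to a human
    FULL = 3        # "full autonomy"

def within_autonomy_definition(level: AutonomyLevel) -> bool:
    """Per the discussion above, the definition of autonomy is interpreted
    flexibly: shared human-RAS decision-making counts, not just full autonomy."""
    return level >= AutonomyLevel.SHARED
```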