Assuring safety and social credibility
How do we achieve social credibility for assistive robots in the home?
This project characterised a link between social credibility and the effective performance of safety-related behaviours. The team demonstrated this link in an experimental domestic setting, showing that an assistive robot which does not exhibit adequate social behaviours is also less effective at performing safety-related behaviours in the home.
The challenge
To be accepted by end users, assistive robots in a domestic environment must demonstrate empathic, socially interactive behaviour, thereby achieving a minimum degree of social credibility. These robots must also perform functions important to safety.
Both the safety-critical and socially important behaviours of an assistive robot rely on the user's engagement with the robot. A loss of social credibility (from any cause) can lead to an end user disengaging from the robot, choosing either to ignore its prompts or to switch it off. User disengagement compromises the ability of these robots to perform their safety-critical functions.
How can potentially conflicting social and safety requirements be balanced, and how can we assure that a robotic and autonomous system (RAS) is both safe and acceptable to end users?
The research
This small feasibility project was split into two phases: introductory work and experimental work.
The introductory work (Menon, 2019) identified that the social effects of assistive robots are not typically factored into hazard analysis and, equally, that there is often very little consideration of the ways in which the social performance of an assistive robot is affected by safety features (e.g. automatic stops, avoidance of physical contact). It suggested potential methods to address the loss of safety-critical functionality resulting from lowered social credibility.
The experimental work was designed to validate the hypothesised link between social credibility and safety. The team conducted a preliminary study with 30 participants, investigating their responses when notified of different hazards by either a socially credible robot that adhered to social norms (AN) or a robot that explicitly violated those norms (VN).
Participants were asked to sit at a table and complete as many cognitive tasks (such as Sudoku puzzles) as possible during an allotted time. They were told that a robot might interrupt them during the task and that it was their choice whether or not to perform an action in response to the interruption.
The team observed participants via camera feeds and smart sensors, and participants were also asked to complete a questionnaire after the experiment to ascertain their impression of the robot's behaviours.
The results
This was a preliminary study, so no statistical significance between conditions was expected. However, the team were able to identify a number of trends in the collected data. These trends give some indication of how safety assurance might be affected by an autonomous system's social behaviours in this domain. The most notable impact is on a user's willingness to accept the robot's assessment of hazards, and on the extent to which the user considers it necessary to cross-check that assessment against their own experience. The results indicate that, when assessing safety-critical situations, users are more likely to believe a robot they consider socially intelligent than one lacking social competence.
Publications
- Menon, C. and Holthaus, P., Does a Loss of Social Credibility Impact Robot Safety? Balancing Social and Safety Behaviours of Assistive Robots, in Proceedings of the 9th International Conference on Performance, Safety and Robustness in Complex Systems (PESARO 2019), pp. 18-25, 2019. Awarded Best Paper at PESARO 2019.
- Holthaus, P., Menon, C. and Amirabdollahian, F., How a Robot's Social Credibility Affects Safety Performance, in Proceedings of the 11th International Conference on Social Robotics (ICSR 2019), 2019.