Driver assist: can it be dangerous to be too good?
Do advanced driver assistance systems (ADAS) that perform too well become dangerous by giving us a false sense of security?
A recent article in ‘What Car?’ magazine, reporting on the #TestingAutomation study at Thatcham Research, presents interesting differences in the performance of the advanced driver assistance systems (ADAS) currently in use on cars. One issue it highlights is the danger of drivers coming to rely on ADAS more heavily than the systems were designed for. This raises the question of whether ADAS that perform too well can become dangerous by giving us a false sense of security.
There are parallels here with the well-documented problems of other so-called “advisory” systems. Over time, when a system is seen to work well, people can stop treating it as advisory and instead start to rely on it. They then use the system in a manner for which it wasn’t designed, and for which it may not be safe.
ADAS are susceptible to the same problem. As people get used to the systems working, they begin to trust them, and then to rely on them. The result is altered driver behaviour: you don’t need to pay so much attention to the road if you trust the car to take care of lane keeping and emergency braking on your behalf.
But what if these systems were to let you down under certain conditions? Would you be in a position to take back control and intervene? The safety case for many ADAS hinges on the assumption that the driver is aware of what is happening and has the ability to successfully resume control of the car if required. Neither of these assumptions is necessarily true for car drivers.
This approach, and the assumptions behind it, are similar to those made for aircraft autopilot; however, a highly trained pilot is very different from the average car driver. The ‘What Car?’ article suggests some drivers would even be tempted to take a nap whilst using adaptive cruise control. Such drivers are clearly in no position to be aware of what is going on, or to take back control of the vehicle if needed.
This problem has been highlighted by incidents such as the fatal 2016 crash in Florida, in which a Tesla collided with a truck whilst driving with the Autopilot feature engaged. The car failed to identify the white truck against a bright sky, and the driver did not respond in time to prevent the collision.
For this reason some, such as Waymo, argue that developing fully autonomous driving is a safer approach than a “partial autonomy” solution that relies upon the driver. This is a difficult call to make, since, when they work correctly, ADAS can undoubtedly make cars safer.
Others are attempting to mitigate the problem by monitoring driver awareness: steering wheel sensors or inward-facing cameras track the driver’s eye movement and issue an alert if they fail to pay sufficient attention.
As work continues to develop safe and assured technologies for cars, what remains clear is that we, as drivers, must understand the limitations of ADAS and use them accordingly. There remains, however, a responsibility on car manufacturers not to oversell the capabilities of their ADAS functions; they must be honest about current technology, whatever the future may hold.
Dr Richard Hawkins
Senior Research Associate
Assuring Autonomy International Programme