2.2.1.1. Defining sensing requirements
Practical guidance - automotive
Author: Dr Daniele De Martini, University of Oxford
Operating scenarios for Autonomous Vehicles (AVs) encompass both structured, human-built environments and the vast scope of natural scenery. Hazards range from busy, multi-participant traffic to treacherous surfaces and adverse lighting and weather conditions. Defining an operating scenario as a combination of a scene and its possible threats, we advocate that different combinations will require different sensor payloads, as specific sensing modalities provide robust performance only under particular circumstances. Moreover, we argue for extending the sensor suite most commonly used in AVs with unusual sensing modalities, which can be less affected by challenging environmental conditions [1].
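To make this scenario-centric view concrete, the following minimal sketch encodes a scenario as a scene paired with a set of hazards and maps it to a candidate payload. The scene and hazard labels, sensor names and mapping rules are purely illustrative assumptions, not conclusions drawn from the SAX data.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Scene(Enum):
    URBAN = auto()
    RURAL = auto()
    OFF_ROAD = auto()


class Hazard(Enum):
    DENSE_TRAFFIC = auto()
    LOW_GRIP_SURFACE = auto()
    LOW_LIGHT = auto()
    FOG_OR_RAIN = auto()


@dataclass(frozen=True)
class OperatingScenario:
    """An operating scenario pairs a scene with the hazards expected in it."""
    scene: Scene
    hazards: frozenset


def suggested_payload(scenario: OperatingScenario) -> set:
    """Illustrative scenario-to-payload rules; not derived from the SAX data."""
    payload = {"camera", "lidar", "gps_ins"}  # common baseline suite
    if {Hazard.FOG_OR_RAIN, Hazard.LOW_LIGHT} & scenario.hazards:
        payload.add("fmcw_scanning_radar")    # robust to weather and illumination
    if Hazard.LOW_GRIP_SURFACE in scenario.hazards or scenario.scene is Scene.OFF_ROAD:
        payload.add("wheel_arch_audio")       # senses the driving surface directly
    if Hazard.DENSE_TRAFFIC in scenario.hazards:
        payload.add("automotive_radar")       # long-range detection of other actors
    return payload


# Example: a rural road at night in heavy rain.
print(suggested_payload(OperatingScenario(Scene.RURAL,
                                          frozenset({Hazard.LOW_LIGHT, Hazard.FOG_OR_RAIN}))))
```

In practice, such rules would be justified (or refuted) by the per-scenario sensor behaviour observed in the dataset described below.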
A diverse dataset
A primary contribution of the SAX project [2] is the collection of a varied dataset. It contains a broad combination of scenes -- urban, rural and off-road -- and hazards -- mixed driving surfaces, adverse weather conditions, and the presence of other actors. The main goal of this dataset is to show how specific sensors behave in particular scenarios. Our sensor suite extends the pool of sensors traditionally used for AVs with rarely used sensors that show great promise in challenging scenarios. Among the former we can list cameras, LiDARs, GPS/INS and automotive radar; in addition, we collect data from a Frequency-Modulated Continuous-Wave (FMCW) scanning radar, audio from microphones in the wheel arches and all the internal states of the vehicle.
The dataset will give us a deeper understanding of how the sensors behave under such different conditions, highlighting their strengths and limitations. To this end, we accompany the data with ground-truth labels for various tasks – object detection/segmentation, drivable-surface segmentation, odometry – to train and validate algorithms.
Figure 1. Vehicle platform and sensor suite, and the locations of the collection sites in the UK.
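To illustrate what a time-aligned, multi-modal recording of this kind might look like in code, the sketch below defines one possible frame structure. All field names, shapes and units are assumptions for illustration only and do not describe the released SAX format.

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np


@dataclass
class Frame:
    """One time-aligned, multi-modal frame (illustrative structure only)."""
    timestamp: float                            # seconds
    camera: Optional[np.ndarray] = None         # H x W x 3 image
    lidar: Optional[np.ndarray] = None          # N x 4 points: x, y, z, intensity
    automotive_radar: Optional[np.ndarray] = None
    fmcw_radar: Optional[np.ndarray] = None     # azimuth x range power returns
    audio: Optional[np.ndarray] = None          # wheel-arch microphone samples
    gps_ins: Optional[np.ndarray] = None        # pose/velocity estimate
    can: dict = field(default_factory=dict)     # e.g. steering angle, wheel speeds, gear
    labels: dict = field(default_factory=dict)  # e.g. segmentation masks, odometry ground truth
```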
FMCW scanning radar
Although radar has been a typical sensor modality for automotive applications for several decades, FMCW scanning radar has only been introduced to commercial use in the past few years, thanks to reductions in cost and size. Owing to its inherent robustness to weather conditions and its long sensing range, radar can greatly benefit safety in scenarios where traditionally exploited sensors such as cameras and lasers fail.
We have shown how AVs can use radar independently of other sensors for low-level autonomy tasks, ranging from odometry [3] and localisation [4] [5] [6] to scene understanding [7] [8] and path planning [9].
Figure 2. A radar scan (left) is labelled using camera and LiDAR data (centre) for training a semantic segmentation pipeline (right) [7].
Figure 3. Radar is used to understand the driveability of the scene (black and white), giving us representations through which the AV can plan its motion (red) [9].
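As a minimal, hedged illustration of the scan-matching idea at the core of radar odometry (not the specific method of [3], which additionally learns to detect its own failures), the sketch below resamples a polar FMCW scan onto a Cartesian grid and recovers the frame-to-frame translation by phase correlation; rotation handling and motion-distortion compensation are omitted, and all parameter names are assumptions.

```python
import numpy as np


def polar_to_cartesian(scan, azimuths, range_bin_m, grid_size, resolution_m):
    """Resample a polar scan (azimuth x range power returns) onto a square Cartesian grid."""
    half = grid_size * resolution_m / 2.0
    xs = np.linspace(-half, half, grid_size)
    x, y = np.meshgrid(xs, xs)
    rng = np.sqrt(x ** 2 + y ** 2)
    ang = np.arctan2(y, x) % (2 * np.pi)
    az_idx = np.clip(np.searchsorted(azimuths, ang), 0, scan.shape[0] - 1)  # nearest azimuth (approx.)
    rng_idx = np.clip((rng / range_bin_m).astype(int), 0, scan.shape[1] - 1)
    return scan[az_idx, rng_idx]


def translation_by_phase_correlation(prev_img, curr_img):
    """Peak of the normalised cross-power spectrum gives the pixel shift of curr w.r.t. prev."""
    f_prev, f_curr = np.fft.fft2(prev_img), np.fft.fft2(curr_img)
    cross = np.conj(f_prev) * f_curr
    cross /= np.abs(cross) + 1e-9
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev_img.shape
    dy = dy - h if dy > h // 2 else dy   # wrap large shifts to negative offsets
    dx = dx - w if dx > w // 2 else dx
    return dx, dy                        # multiply by resolution_m for metres
```

Chaining such frame-to-frame estimates yields an odometry trajectory, which downstream localisation and planning modules can then refine.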
Audio
Recent research has shown that audio can be a suitable sensing modality for various tasks, either on its own or fused with other sensors [10]. We have studied how audio can benefit road-surface classification [8]. Audio has the advantage of being inherently invariant to scene illumination, although it carries only very localised information – i.e. about the contact point between the wheel and the ground. For this reason, we coupled the audio data with odometry estimation to build an automatic annotation tool that teaches a radar segmentation network to distinguish road surfaces, with the advantage of an extended field of view.
Figure 4. Audio data is used to label a radar scan (left) for training a semantic segmentation pipeline (right) to distinguish between road surfaces [8].
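A minimal sketch of the audio-classification step is given below, assuming log band-energy features and an off-the-shelf classifier; the feature choice, class labels and hyperparameters are illustrative and not those of [8].

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier


def band_energy_features(samples, fs, n_bands=32):
    """Summarise a short wheel-arch audio clip as the mean log-energy in each frequency band."""
    _, _, sxx = spectrogram(samples, fs=fs, nperseg=1024)
    log_power = np.log(sxx + 1e-10)
    bands = np.array_split(log_power, n_bands, axis=0)   # group adjacent frequency bins
    return np.array([band.mean() for band in bands])


def train_surface_classifier(clips, surface_labels, fs):
    """Fit a classifier on labelled clips (e.g. 0 = asphalt, 1 = gravel, 2 = grass)."""
    features = np.stack([band_energy_features(clip, fs) for clip in clips])
    return RandomForestClassifier(n_estimators=100).fit(features, surface_labels)
```

The per-clip predictions, stamped with the odometry pose of the wheel-ground contact point, can then be accumulated as weak labels on the radar grid, mirroring the annotation pipeline described above.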
CAN
The dataset also contains data recorded from the CAN bus of the vehicle. The variables recorded range from the steering-wheel angle to the rotational speed of each wheel to the engaged gear. Such variables contain critical information for several tasks, which can treat them either as sensory data – e.g. for driver identification [11] – or as control signals – e.g. for training behavioural-cloning algorithms [12].
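As a small worked example of how such CAN signals can be turned into useful quantities, the sketch below derives vehicle speed from the wheel rotational speeds and yaw rate from the steering-wheel angle via the kinematic bicycle model; the wheel radius, wheelbase and steering ratio are placeholder values, not those of the SAX platform.

```python
import numpy as np

# Illustrative vehicle parameters; not the values of the SAX platform.
WHEEL_RADIUS_M = 0.33   # effective rolling radius
WHEELBASE_M = 2.7       # distance between front and rear axles
STEERING_RATIO = 15.0   # steering-wheel angle / road-wheel angle


def speed_from_wheel_rates(wheel_rates_rad_s):
    """Vehicle speed (m/s) from the rotational speeds of the four wheels (rad/s)."""
    return float(np.mean(wheel_rates_rad_s)) * WHEEL_RADIUS_M


def yaw_rate_from_steering(speed_m_s, steering_wheel_angle_rad):
    """Kinematic bicycle model: yaw_rate = v * tan(delta) / L."""
    delta = steering_wheel_angle_rad / STEERING_RATIO   # road-wheel angle
    return speed_m_s * np.tan(delta) / WHEELBASE_M
```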
External services
We challenge the definition of a sensor by including services provided by external operators, in particular satellite imagery and weather forecasts. Although these are, in practice, by-products of GPS queries, the information they contain can be valuable for tasks such as localisation [13] [14] or route planning.
On this matter, we explored how services like Google Maps can serve as readily available maps of never-before-visited places in which an AV can localise using range sensors such as LiDAR and radar. We showed that deep-learning approaches can overcome the domain difference between the external service and the sensor stream and achieve accurate displacement estimation between the overhead image and the AV in an urban environment.
Figure 5. Satellite images used as a map for radar (top) and LiDAR (bottom) sensors. The satellite image is converted into a synthetic sample of the sensor domain to estimate the translational and rotational offset between the two [13].
Figure 6. A satellite image is converted into a point cloud by estimating an occupancy map for offset estimation with a LiDAR scan [14].
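For the final alignment step only, a classical baseline can make the idea tangible: once a synthetic, sensor-like image has been generated from the satellite tile (the learned component of [13] [14] is not reproduced here), its rotational and translational offset with respect to the live scan can be estimated by a brute-force rotation search combined with phase correlation. The sketch below is a simplification under those assumptions, with hypothetical parameter names.

```python
import numpy as np
from scipy.ndimage import rotate


def estimate_offset(live_bev, synthetic_bev, rotations_deg=np.arange(-10.0, 10.5, 0.5)):
    """Brute-force rotation search plus phase correlation for the remaining translation.

    `synthetic_bev` stands for an image rendered from the satellite tile into the
    sensor's domain; `live_bev` is the current radar or LiDAR scan rasterised on
    the same metric grid.
    """
    best_rot, best_shift, best_score = 0.0, (0, 0), -np.inf
    f_live = np.fft.fft2(live_bev)
    h, w = live_bev.shape
    for rot in rotations_deg:
        candidate = rotate(synthetic_bev, rot, reshape=False, order=1)
        cross = np.conj(np.fft.fft2(candidate)) * f_live
        cross /= np.abs(cross) + 1e-9
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        score = corr[dy, dx]
        if score > best_score:
            dy = dy - h if dy > h // 2 else dy   # wrap to signed offsets
            dx = dx - w if dx > w // 2 else dx
            best_rot, best_shift, best_score = rot, (dx, dy), score
    return best_rot, best_shift                  # degrees, pixels
```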
Summary
In summary, we have presented an overview of AV sensing requirements and of how uncommon sensing modalities can help overcome challenging operational scenarios. Ideally, we would like our vehicles to be deployable and performant in any situation. The sensing capability of the AV plays a critical role in this, and to evaluate the suitability of specific sensors in specific scenarios, we collected a dataset covering broad combinations of environments and weather conditions. Alongside the sensing data, we provide labels for various tasks, which can be used for training and evaluation purposes.
References
[1] Liu, Qi, et al. "A Survey on Sensor Technologies for Unmanned Ground Vehicles." 2020 3rd International Conference on Unmanned Systems (ICUS). IEEE, 2020.
[2] Gadd, Matthew, et al. "Sense–Assess–eXplain (SAX): Building Trust in Autonomous Vehicles in Challenging Real-World Driving Scenarios." 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020.
[3] Aldera, R., et al. "What Could Go Wrong? Introspective Radar Odometry in Challenging Environments." 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand. IEEE, 2019.
[4] Saftescu, S., et al. "Kidnapped Radar: Topological Radar Localisation using Rotationally-Invariant Metric Learning." 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris. IEEE, 2020.
[5] De Martini, Daniele, et al. "kRadar++: Coarse-to-Fine FMCW Scanning Radar Localisation." Sensors 20.21 (2020): 6002.
[6] Gadd, M., et al. "Contrastive Learning for Unsupervised Radar Place Recognition." IEEE International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, December 2021.
[7] Kaul, Prannay, et al. "Rss-net: Weakly-supervised multi-class semantic segmentation with FMCW radar." 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020.
[8] Williams, David, et al. "Keep off the grass: Permissible driving routes from radar with weak audio supervision." 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020.
[9] Broome, M., et al. "On the Road: Route Proposal from Radar Self-Supervised by Fuzzy LiDAR Traversability." AI 1.4 (2020): 558–585.
[10] Marchegiani, Letizia, and Xenofon Fafoutis. "How Well Can Driverless Vehicles Hear? A Gentle Introduction to Auditory Perception for Autonomous and Smart Vehicles." IEEE Intelligent Transportation Systems Magazine (2021).
[11] Remeli, Mina, et al. "Automatic driver identification from in-vehicle network logs." 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, 2019.
[12] Azam, Shoaib, et al. "N²C: Neural Network Controller Design Using Behavioral Cloning." IEEE Transactions on Intelligent Transportation Systems (2021).
[13] Tang, Tim Y., et al. "Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization." The International Journal of Robotics Research (2021): 02783649211045736.
[14] Tang, Tim Y., Daniele De Martini, and Paul Newman. "Get to the Point: Learning Lidar Place Recognition and Metric Localisation Using Overhead Imagery."