A research study has demonstrated that commercial automotive lidar systems are vulnerable to spoofing attacks, in which lasers can be used to trick the sensors into seeing objects that are not there, or into missing ones that are.
The study, carried out by the University of California, Irvine and Japan’s Keio University, could help inform the design and manufacture of future autonomous vehicle systems.
In its investigation of nine commercially available lidar systems, the team found that both first-generation and later-generation versions exhibit safety deficiencies.
The results were presented recently at the Network and Distributed System Security Symposium in San Diego.
“This is to date the most extensive investigation of lidar vulnerabilities ever conducted,” said lead author Takami Sato, UC Irvine PhD candidate in computer science. “Through a combination of real-world testing and computer modelling, we were able to come up with 15 new findings to inform the design and manufacture of future autonomous vehicle systems.”
Tricked sensors cause unsafe vehicle behaviours
Testing first-generation lidar systems, the team carried out an attack known as ‘fake object injection’, in which sensors are tricked into perceiving a pedestrian or the front of another car when nothing is there.
In this situation, the lidar system communicates the false hazard to the autonomous vehicle’s computer, triggering unsafe behaviour such as emergency braking.
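The mechanism can be pictured with a toy model: spoofed laser returns add a dense cluster of points to the sensor's point cloud, and a downstream planner that reacts to points in the vehicle's path then brakes for an object that does not exist. The sketch below is purely illustrative; the function names, thresholds and geometry are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: a toy model of how injected lidar returns could
# register as a phantom obstacle. All names and thresholds are hypothetical.
import numpy as np

def genuine_scene(num_points=2000, seed=0):
    """Simulate a benign point cloud: scattered returns with nothing close ahead."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-40, 40, num_points)   # lateral position (m)
    y = rng.uniform(15, 80, num_points)    # distance ahead (m)
    return np.column_stack([x, y])

def inject_fake_object(points, distance=8.0, width=1.8):
    """Append a dense cluster of spoofed returns shaped like the rear of a car."""
    fake_x = np.linspace(-width / 2, width / 2, 60)
    fake_y = np.full_like(fake_x, distance)
    return np.vstack([points, np.column_stack([fake_x, fake_y])])

def emergency_brake_needed(points, corridor=1.5, brake_distance=10.0, min_hits=30):
    """Naive planner: brake if many returns sit directly ahead and close."""
    in_path = (np.abs(points[:, 0]) < corridor) & (points[:, 1] < brake_distance)
    return int(in_path.sum()) >= min_hits

scene = genuine_scene()
print(emergency_brake_needed(scene))                      # False: road is clear
print(emergency_brake_needed(inject_fake_object(scene)))  # True: phantom car triggers braking
```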
“This chosen-pattern injection scenario works only on first-generation lidar systems; newer-generation versions employ timing randomisation and pulse fingerprinting to combat this line of attack,” said Sato.
But the UCI and Keio University researchers found another way to confuse next-generation lidar devices. Using a custom-designed laser and lens apparatus, they were able to conceal five existing cars from the lidar systems’ sensors.
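The object-removal result can be pictured in the same toy model used above: if an attacker can suppress the returns coming back from a real vehicle, the downstream planner never sees it. Again, this is only a conceptual sketch under the same hypothetical assumptions, not the researchers' actual apparatus or method.

```python
# Continuing the toy model above (purely illustrative): suppressing the
# returns from a real car so the naive planner no longer sees it.
def real_car_ahead(points, distance=8.0, width=1.8):
    """Add genuine returns from a real vehicle in the lane ahead."""
    car_x = np.linspace(-width / 2, width / 2, 60)
    car_y = np.full_like(car_x, distance)
    return np.vstack([points, np.column_stack([car_x, car_y])])

def suppress_returns(points, corridor=1.5, max_range=12.0):
    """Model a removal attack: drop returns inside the targeted region."""
    hidden = (np.abs(points[:, 0]) < corridor) & (points[:, 1] < max_range)
    return points[~hidden]

scene_with_car = real_car_ahead(genuine_scene())
print(emergency_brake_needed(scene_with_car))                    # True: real car detected
print(emergency_brake_needed(suppress_returns(scene_with_car)))  # False: the car has "vanished"
```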
“The findings in this paper unveil unprecedentedly strong attack capabilities on lidar sensors, which can allow direct spoofing of fake cars and pedestrians and the vanishing of real cars in the AV’s eye. These can be used to directly trigger various unsafe AV driving behaviours such as emergency brakes and front collisions,” said senior co-author Qi Alfred Chen, UC Irvine assistant professor of computer science.