For the first time, a pedestrian has been killed in an incident involving self-driving technology. The fatal crash occurred in Tempe, Arizona, on March 18 at around 10 p.m. and involved a car operated by Uber. According to a report, the Uber vehicle was in autonomous mode when it hit the pedestrian, with a backup human driver behind the wheel but no passengers, and was traveling at about 40 mph in a 45 mph zone. Footage from Uber’s forward-facing video recorder reportedly showed a woman walking her bike who moved suddenly in front of the car. “It’s very clear it would have been difficult to avoid this collision in any kind of mode,” said the Tempe police chief.
This isn’t the first fatality resulting from an incident involving an autonomous vehicle, however. In 2016, the driver of a Tesla Model S operating in Autopilot mode became the first person to die in a self-driving car incident, when the vehicle crashed into an 18-wheeler. Investigators later determined, though, that the car’s driver-assist feature was not at fault.
Fortunately, not all incidents involving autonomous vehicles lead to the loss of human life. Still, the fact remains that such incidents are becoming more common, especially as driverless car testing gets implemented in more locations. California’s Department of Motor Vehicles, for example, notes that there have been more than 50 reports of autonomous vehicle collisions since 2014.
But what causes autonomous vehicles to fail, outside of human error and unpredictable human behavior? According to data shared by Google, common technology failures include communications breakdowns, strange sensor readings, and problems with steering, braking, or other safety-critical systems.
Beyond instances of inadvertent technical malfunction, however, there’s also the risk posed by attacks perpetrated by malicious actors.
Attacks that can affect autonomous vehicles
According to a paper published by Yoshiyasu Takefuji, a member of the Faculty of Environment and Information Studies at Keio University in Fujisawa, Japan, there are two types of attacks that could affect autonomous vehicles: vehicle sensor attacks and vehicle access attacks.
Vehicle sensor attacks, as the term suggests, involve key sensors on autonomous vehicles. GPS, millimeter wave (MMW) radars, light detection and ranging (LiDAR) sensors, and ultrasonic sensors are particularly vulnerable to jamming and spoofing. Camera sensors can also be blinded by something as simple as a laser pointer. These attacks can disable or otherwise affect connected car controls and can potentially lead to crashes and other safety concerns.
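One common defense against sensor spoofing is plausibility checking: cross-validating one sensor against an independent one and flagging readings that disagree. The sketch below is a hypothetical illustration (the thresholds, function names, and speed values are illustrative assumptions, not from Takefuji’s paper); it compares a GPS-derived speed against wheel-odometry speed to flag a possible GPS spoofing attempt.

```python
# Hypothetical plausibility check: a spoofed GPS signal that suddenly reports
# an implausible speed can be flagged by cross-checking it against an
# independent estimate from wheel odometry. Tolerance value is illustrative.

def plausible(gps_speed_mps: float, odometry_speed_mps: float,
              tolerance_mps: float = 3.0) -> bool:
    """Return True if the two independent speed estimates roughly agree."""
    return abs(gps_speed_mps - odometry_speed_mps) <= tolerance_mps

readings = [
    (13.0, 13.4),   # consistent readings: normal driving
    (13.2, 13.1),
    (45.0, 13.2),   # GPS suddenly claims 45 m/s while wheels report ~13 m/s
]

for gps, odo in readings:
    status = "ok" if plausible(gps, odo) else "POSSIBLE SPOOFING"
    print(f"gps={gps:5.1f} m/s  odometry={odo:5.1f} m/s  -> {status}")
```

A real system would fuse many more signals (LiDAR, radar, inertial sensors) and reason statistically rather than with a fixed threshold, but the principle is the same: no single sensor is trusted in isolation.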
As for vehicle access attacks, Takefuji notes that they affect conventional vehicles as well as autonomous ones. These attacks may involve a key fob clone technique, where vulnerabilities in keyless entry systems could allow an adversary to eavesdrop on a signal sent by a remote control and gain unauthorized access to a vehicle. Or they could take the form of telematics service attacks, which can affect safety-critical systems when network access is compromised.
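The key fob eavesdropping scenario above is essentially a replay attack: an attacker records a valid unlock signal and re-sends it later. The sketch below is a simplified, hypothetical illustration of the rolling-code countermeasure used in many keyless entry systems; the shared secret, counter scheme, and message format are illustrative assumptions, not the design of any specific vehicle.

```python
# Hypothetical rolling-code sketch: the fob signs an ever-increasing counter,
# and the receiver rejects any counter it has already seen, so a recorded
# (eavesdropped) transmission cannot be replayed. Secret and names are
# illustrative only.
import hashlib
import hmac

SECRET = b"shared-fob-secret"  # provisioned into fob and car at manufacture

def fob_transmit(counter: int):
    """Fob sends its current counter plus a MAC over it."""
    mac = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
    return counter, mac

class Receiver:
    def __init__(self):
        self.last_counter = 0

    def unlock(self, counter: int, mac: str) -> str:
        expected = hmac.new(SECRET, str(counter).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return "rejected: bad MAC"
        if counter <= self.last_counter:
            return "rejected: replay"  # an eavesdropped code is stale
        self.last_counter = counter
        return "unlocked"

car = Receiver()
msg = fob_transmit(1)
print(car.unlock(*msg))   # unlocked
print(car.unlock(*msg))   # rejected: replay (attacker re-sending the capture)
```

Fixed-code fobs, by contrast, send the same signal every time, which is exactly what makes the clone-and-replay technique viable.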
As connected cars become increasingly dependent on interconnected modules and systems, the attack surface for these vehicles continuously widens. This, in turn, makes them more attractive targets for cybercriminals. Security measures for connected vehicles, therefore, should be implemented as early as the design phase. In particular, solutions that incorporate up-to-date threat intelligence, risk assessment and system protection for critical modules, and real-time security visibility for threat prevention and mitigation can go a long way toward securing connected cars. In the long run, these can help pave the way for safer driving environments not only for people in autonomous vehicles but also for pedestrians.
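To make the idea of "real-time security visibility" concrete: many messages on an in-vehicle bus arrive at fixed periods, so a monitoring module can flag a sudden flood of frames for one message ID, a common pattern in injection attacks. The sketch below is a minimal, hypothetical rate monitor; the message IDs, nominal periods, and threshold are illustrative assumptions, not any vendor's implementation.

```python
# Hypothetical in-vehicle bus monitor: flag message IDs that arrive much
# faster than their nominal period, a simple heuristic for frame injection.
# IDs, periods, and the slack factor are illustrative only.

EXPECTED_PERIOD_MS = {0x100: 10, 0x200: 100}  # nominal inter-arrival times

def detect_floods(frames, slack=0.5):
    """frames: list of (timestamp_ms, message_id) tuples in arrival order.
    Returns the frames that arrived suspiciously fast for their ID."""
    last_seen = {}
    alerts = []
    for ts, msg_id in frames:
        period = EXPECTED_PERIOD_MS.get(msg_id)
        if period is not None and msg_id in last_seen:
            if ts - last_seen[msg_id] < period * slack:
                alerts.append((ts, msg_id))  # arrived far too soon
        last_seen[msg_id] = ts
    return alerts

# Normal traffic for 0x200 every 100 ms, then injected frames 1 ms apart.
frames = [(0, 0x200), (100, 0x200), (200, 0x200),
          (201, 0x200), (202, 0x200)]
print(detect_floods(frames))  # flags the injected frames at 201 and 202
```

Production intrusion detection for vehicles combines many such signals (timing, payload ranges, message authentication) rather than relying on a single heuristic.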