Mobility  |  Focus On

Cybersecurity: the key to autonomous driving

The security of on-board IT components is a crucial issue for fostering both the development and the diffusion of self-driving cars

What happens when the digital transformation meets individual mobility? The temptation is strong to talk about a revolution, as Tracy Chapman would say. In fact, it would be more precise to frame the ongoing change as an evolutionary process: cars and other passenger vehicles, on top of becoming cleaner and more energy-efficient, are also becoming more digitized and interconnected. Most cars on the market today rely on complex systems encompassing hardware, software, and bidirectional communications. And with the growing diffusion of self-driving cars, the car itself – not the driver – will take control of the trip as well as of the environment inside the vehicle. The global market for autonomous cars was still a niche worth 5.6 billion dollars in 2018, but it is expected to expand dramatically to about 60 billion dollars by 2030 and beyond. Moreover, a large share of all cars is expected to have some level of autonomy before fully autonomous vehicles take over.
 

What raises skepticism

Like any other major technological advance, the r/evolution of autonomous vehicles brings opportunities as well as risks and uncertainties. The latter lie in the fact that, while a traditional car is an isolated system, an autonomous one depends on the data that it constantly collects and exchanges with other devices or entities, including central hubs, other cars, smartphones, and virtually any other interconnected device. Moreover, artificial intelligence and machine-learning techniques are employed to retrieve, store, and analyze data about the driver’s preferences and behaviors, along with data from any other available source.

The ability to gather, store, use, and share data lies, in many ways, at the heart of the current developments. However, it is also where risks and threats lie. In fact, this is the main reason why people are somewhat skeptical of autonomous driving: an opinion poll found that only 16% of Americans are “very likely” to ride as a passenger in an autonomous vehicle, while 28% are “not likely at all”. More than a third believe that self-driving cars are less safe than traditional, human-driven cars and say they would feel safer if they could take over control. This skepticism is not rooted in hard evidence from road data, at least as far as accidents are concerned: autonomous vehicles appear to be safer than traditional ones. Rather, according to another survey, almost three out of four potential users (73%) fear that vehicle or system security might be compromised by hackers. It turns out they have a point.
 

What researchers have found

A review of 151 papers, published between 2008 and 2019, on attacks and defenses against autonomous vehicles found that malicious attacks – rather than mere malfunctions – are already being attempted; the good news is that they are also being countered effectively. However, they need to be properly understood and anticipated in order to avoid not just adverse consequences, but also a crisis in the social acceptance of new technologies.

Attacks fall into three categories: those targeting the autonomous control system, those targeting the components of autonomous driving systems, and those targeting vehicle-to-everything communications. By the same token, defenses fall into three categories: security architecture, intrusion detection, and anomaly detection. The analysis of attacks and defenses reveals at least two important takeaways. On one hand, attacks have intensified over time while becoming more subtle and sophisticated. On the other hand, defense techniques have also advanced, relying on big data, on the strengthening of existing layers of protection, and on the introduction of new ones. Moreover, a substantial amount of research has focused on prevention strategies, aimed at the timely detection of risk factors, whether or not they are intentionally directed at endangering the safety or security of self-driving cars. Indeed, a growing concern is the theft of data and, therefore, the misuse – or at any rate illicit use – of personal information.
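To give a concrete, if simplified, sense of what anomaly detection can mean in this context, the sketch below flags sudden spikes in the message rate of a single CAN bus identifier, a typical symptom of message-injection attacks. It is an illustrative assumption, not a technique taken from the cited review; the class name, window size, threshold, and sample data are all hypothetical choices.

```python
# Minimal, illustrative sketch: flag anomalies in the per-second message rate
# of one CAN bus identifier using a rolling z-score. All parameters and data
# are assumptions for illustration, not values from the cited survey.
from collections import deque
import statistics


class MessageRateMonitor:
    """Tracks messages-per-second for one CAN ID and flags sudden spikes."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.rates = deque(maxlen=window)   # recent per-second message counts
        self.z_threshold = z_threshold

    def observe(self, messages_per_second: int) -> bool:
        """Return True if the new observation looks anomalous."""
        if len(self.rates) >= 5:  # need a short baseline before judging
            mean = statistics.mean(self.rates)
            stdev = statistics.pstdev(self.rates) or 1e-9  # avoid division by zero
            z = (messages_per_second - mean) / stdev
            anomalous = z > self.z_threshold
        else:
            anomalous = False
        self.rates.append(messages_per_second)
        return anomalous


if __name__ == "__main__":
    monitor = MessageRateMonitor()
    normal_traffic = [50, 52, 49, 51, 50, 48, 53, 50]  # typical rate for this CAN ID
    injected_burst = [500]                              # a flood of spoofed frames
    for rate in normal_traffic + injected_burst:
        if monitor.observe(rate):
            print(f"ALERT: anomalous message rate {rate} msg/s")
```

Real intrusion-detection systems for vehicles combine many such signals (message timing, payload plausibility, sender authentication) rather than a single statistic, but the underlying idea is the same: learn what normal traffic looks like and flag departures from it.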

To some extent, the risk of breaches into the systems, their components, or personal data has been inherent in the push toward digitalization since its very inception. But while these risks were relatively unexpected and therefore overlooked in the past, there is now a much greater awareness – to the point that cybersecurity increasingly makes the headlines, whether because of the dangers posed by cyberattacks, because of this heightened awareness, or both. Carmakers are investing substantially in dealing with such risks, as it is clear that the risks themselves, as well as the perception of them, raise substantial barriers to the adoption of autonomous vehicles. If people do not trust their functioning, no expected benefit will be large enough to create a favorable social and economic climate for their adoption.
 

What regulators are focusing on

This is where regulation steps in. Of course manufacturers have an interest in improving the security and safety of their devices. But the more devices (including vehicles) become interconnected, the more cyberattacks gain social relevance, on top of their private cost to the people involved. The EU Agency for Cybersecurity, together with the Commission’s Joint Research Centre, released a paper that addresses this issue and puts forward a few policy recommendations, including regular security assessments of all AI components embedded in self-driving cars, the implementation of continuous risk assessments, and the adoption of a “security by design” (and possibly privacy by design) approach by carmakers.

Luckily enough, this is also in the industry’s self-interest, not just to create and maintain trust in its products, but also for the sake of liability in the case of cyberattacks. Which brings up the last, and for the time being perhaps most obscure, question: who is liable for what? In the old world, a producer was only liable for technical deficiencies, not for the driving or other decisions of the driver. But as cars become interconnected and take more and more decisions autonomously, the producer’s responsibilities expand accordingly. Providing adequate means to prevent, detect, and neutralize cyberattacks should not be a plus or a bonus service, but an inherent feature of self-driving cars.


Carlo Stagnaro - He is the research and studies director of the Bruno Leoni Institute. Previously, he was head of the Minister's technical secretariat at the Ministry of Economic Development. He graduated in Environmental and Territorial Engineering from the University of Genoa and holds a PhD in Economics, Markets, Institutions from IMT Alti Studi - Lucca. He serves on the editorial staff of the magazines Energia and Aspenia and is a member of the Academic Advisory Council of the Institute of Economic Affairs. He writes an economics column for the newspapers Il Foglio and Il Secolo XIX. His latest book, co-written with Alberto Saravalle, is "Molte riforme per nulla" (Marsilio, 2022).
