
How can we ensure safety and public trust in AI for automated and assisted driving?


Cars are becoming increasingly automated. Drivers already benefit from a wide range of advanced driver-assistance systems (ADAS), such as lane keeping, adaptive cruise control, collision warning, and blind spot warning, which are gradually becoming standard features on most vehicles.

Today’s automated systems are taking over an increasing share of responsibility for the driving task. It is expected that, before long, sensors will take the place of human senses and artificial intelligence (AI) will substitute for human intelligence.

This progression is defined through a series of automation levels, from Level 1, where the driver retains overall control of the vehicle and is merely assisted, to Level 5, a fully autonomous system that requires no driver at all.
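
For orientation, the levels referred to here and throughout the article follow the SAE taxonomy of driving automation. The short Python sketch below is a paraphrased, illustrative summary of those levels, not the official SAE J3016 wording.

```python
# Paraphrased summary of the SAE J3016 driving automation levels referenced
# in this article; illustrative wording, not the official definitions.
SAE_LEVELS = {
    0: "No automation: the human driver performs the entire driving task",
    1: "Driver assistance: steering or speed is assisted; the driver retains overall control",
    2: "Partial automation: steering and speed are assisted; the driver must supervise at all times",
    3: "Conditional automation: the system drives in limited conditions; the driver must take over on request",
    4: "High automation: the system drives itself within a defined operational domain",
    5: "Full automation: the system drives itself in all conditions, with no driver needed",
}

if __name__ == "__main__":
    for level, description in SAE_LEVELS.items():
        print(f"Level {level}: {description}")
```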

Ten years ago, manufacturers predicted that many cars on today’s roads would be fully automated, but full automation remains a distant prospect for the automotive industry. At the recent Future Networked Car Symposium 2020 at ITU Headquarters in Geneva, Switzerland, top experts joined a panel entitled ‘AI for autonomous and assisted driving – how to ensure safety and public trust’ to discuss the progress and prospects of vehicles that drive themselves – and how we might achieve this future.

Updated predictions for automated vehicles

The panellists agreed that achieving fully autonomous systems that can perceive and react to the driving environment as well as a human driver in every scenario – also known as SAE Level 5 – is unlikely, at least in the near future.

“There is no AI. AI is a buzzword! None of these systems are even close to passing a Turing Test. They are code; many of them are black boxes that have done some sort of regression to get coefficients to run things,” said Alain Kornhauser, Professor, Princeton University, USA.

“Did AI learn bad traffic habits? Did it break some rules because it was hacked?” — William Gouse, SAE International

Meanwhile, Bryn Balcombe, Chief Strategy Officer of Roborace, distinguished between the algorithms that make driving decisions and the underlying hardware architecture. No vehicle is driving itself; rather, the algorithm is driving the vehicle, he pointed out.

“People thought there would be a Level 5. Now, there’s a lot of discussion about ‘There will never be level 5’. It is just too hard to do,” said William Gouse, Director, Federal Program Development, SAE International, Washington, DC. “It is not a linear step from level 4 to 5.”

He stressed the functional difference between AI performance in a simulated environment and in real-world applications as a defining barrier to safety and trust.

Critical concerns over security need to be answered first. “Did AI learn bad traffic habits? Did it break some rules because it was hacked?” asked Gouse.

Validating autonomous driving

Ongoing validation of autonomous AI ‘drivers’ is a necessary step towards addressing these concerns and ensuring the safety of all road users, said Balcombe.

“When we look at whether these systems are safe, how can we ensure that they perform the driving task as well as – if not better than – a human, because that is the public expectation,” said Balcombe.

“It is not going to be acceptable to say [that the autonomous driving software] passed in simulator; ‘I’m sorry that your child ran out in the road, I wasn’t expecting that to happen – it wasn’t part of my scenario testing.’ That is something that cannot happen,” he said. “We have to have some mechanism to monitor the behaviour of these vehicles when they are on the road to keep that public trust.”

But it is not just the technology that needs to be monitored in an autonomous future. The panellists agreed that human misbehaviour, rather than human error, is one of the primary causes of road accidents.

“There are some risks regarding artificial intelligence, not because of the technology but because of the use. We are trying to evaluate and assess where the risk could be,” said Juan Jose Arriola Ballesteros, European Commission.

Ballesteros highlighted the importance of the new Focus Group on AI for autonomous and assisted driving, set up by ITU. He said that the European Union is working on an approach based on the principles of trust and excellence, and is currently developing a strategy for introducing these new technologies on the roads.

Work is underway on a potential vehicle gateway requirement for Europe.

But given the hurdles autonomous driving faces, the panellists agreed that the market may be for mobility as a service rather than for private ownership.

“I don’t think anybody is going to sell us or let us own a vehicle that we can just send out on public roads with nobody in it to go pick up our lunch. I don’t think we are responsible enough as individuals,” said Kornhauser. “There is a market for mobility as a service.”

This vision of mass-transit mobility as a service would need to be delivered by a respected body that can distribute risk over an enormous number of entities, said Kornhauser. But how do you build public trust?

“If they are not safe, the mobility as a service piece will never happen,” he said.

Short-term goals: building trust through voice

With today’s Level 2 and Level 3 automation – in which some driver assistance functions are automated but the driver ultimately remains in control of the vehicle – interaction between the driver and the car relies on “beeps and icons,” said Nils Lenke, Senior Director, Innovation Management, Cerence Inc. This can lead to miscommunication between the ADAS and the driver, especially if the car is a short-term rental or a new purchase.

Speech, on the other hand, can convey information to the driver more easily and in a language that they understand. It could explain the manual and provide practical help to drivers – “would you like me to teach you about lane assist and how it works?” for example.

Not only does this human-to-vehicle interaction improve driver safety, but the increased anthropomorphism also drives trust, said Lenke.

“If a car ‘speaks’, if it has a name or personality, we project human-like qualities onto it and we believe it more, we trust it more,” Lenke said. “You trust the car will be a good driver and people are less likely to blame the OEM [Original Equipment Manufacturer] who manufactured the car in case of an incident.”

Speech can also facilitate better driver monitoring.

Lenke proposed bringing sensors into the car’s cabin to monitor driver behaviour and using speech to engage drivers in conversation that nudges them towards safer behaviour on the road – for example, prompting a tired driver to pull over for a rest – an idea that moderator Roger Lanctot, Director of Automotive Connected Mobility at Strategy Analytics, called “a big change.”
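
As a rough illustration of how such a system might fit together, the sketch below couples an in-cabin drowsiness score to a spoken prompt. The DrowsinessSensor class, the 0.7 threshold and the prompt wording are hypothetical assumptions made for this example, not Cerence’s implementation or any vendor’s actual API.

```python
# Minimal sketch of a driver-monitoring-to-voice loop like the one described above.
# The DrowsinessSensor class, the 0.7 threshold and the prompt wording are
# hypothetical illustrations, not any vendor's actual API.
import random
import time

DROWSINESS_THRESHOLD = 0.7  # assumed score above which the driver is prompted


class DrowsinessSensor:
    """Stand-in for an in-cabin sensor suite (camera, steering inputs, etc.)."""

    def read_score(self) -> float:
        # A real system would fuse eye closure, head pose and steering behaviour;
        # here we simply return a random value for demonstration.
        return random.random()


def speak(message: str) -> None:
    """Stand-in for the vehicle's text-to-speech output."""
    print(f"[voice] {message}")


def monitor_driver(sensor: DrowsinessSensor, checks: int = 10, interval_s: float = 1.0) -> None:
    """Periodically check the drowsiness score and prompt the driver by voice."""
    for _ in range(checks):
        if sensor.read_score() > DROWSINESS_THRESHOLD:
            speak("You seem tired. Shall I find a safe place to pull over for a rest?")
        time.sleep(interval_s)


if __name__ == "__main__":
    monitor_driver(DrowsinessSensor())
```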

This driver monitoring technology could be rolled out to all cars within the next 12 to 36 months, according to Lenke.

It seems, then, that Level 4 automation is firmly on the horizon.

The 2020 ITU-UNECE Symposium on the Future Networked Car was kindly supported by Gold sponsor DEKRA, Silver sponsor Qualcomm and Bronze sponsor RoadDB.
