Dr Mauro Vallati

Artificial Intelligence

...looks at the obstacles facing the growth of driverless vehicles following the recent crash of a Tesla vehicle.

“Here we are again, yet another ‘surprising’ Tesla crash.  What is surprising, at least to me, is that Tesla crashes are still making the news.  First, there are remarkably few accidents involving the Tesla autopilot, particularly when compared to vehicles driven by humans.  Second, like any complex system, the Tesla autopilot can fail.

That said, there is, of course, the issue of testing and validating autopilots or, more generally, autonomous driving systems; in other words, of making sure that the system is safe and ready to hit the road.  There is a huge difference between good old-style, fully mechanical cars and cars with an autonomous driving component.  For traditional mechanical components, we know exactly how to stress them (by changing, for instance, temperature, pressure, etc.), and over the past decades we have developed an extensive range of tests to make sure that the vehicle behaves as expected, up to a required standard.  A very good example is the crash test, where the vehicle’s structure is assessed with regard to its ability to protect passengers and pedestrians.

Autonomous driving systems cannot be tested in this traditional way.  Such driving systems are heavily based on machine learning visual recognition techniques.  They use a wide range of cameras and sensors to monitor and analyse the environment surrounding the vehicle, understand the conditions and decide what to do.  The decision is taken based on experience gained through a training process.  Cars are driven for millions of miles, both in the real world and in simulations, to allow them to learn to recognise elements of the road environment (such as signs, lanes, sidewalks, etc.) and to understand how to drive in normal (and not so normal) traffic conditions.  For instance, the autopilot has to learn that the same element in different environments may need to be treated differently: a car suddenly braking in front of you on a highway requires a reaction that is different from one braking in a car park.  The autopilot should learn as much as possible, and this is indeed the aim of the training phase.

However, it is impossible to cover all the possible cases and conditions that can occur on the road, and here is where the big problem lies.  In previously unseen conditions, it is almost impossible to predict how the self-driving system will react.  Further, visual recognition systems are extremely sensitive to small changes: stickers on a sign, different lighting, reflections, weather, etc.  To make things more complicated, there is no notion of ‘stress’ for visual recognition techniques as there is for mechanical systems; you cannot put an autopilot under stress, as it will always simply respond to the road conditions it senses.  This makes testing such systems extremely hard.  How can you guarantee that the autopilot will always react appropriately?  How can you make sure that it correctly recognises road signs or approaching vehicles?  There is currently a UK-funded project looking at this significant problem, but the issue is far from solved.  Bear in mind, too, that autopilots, due to their software nature, can be targeted by cyberattacks.  This is yet another dimension that needs to be covered by testing, and one that is absent in purely mechanical vehicles.

It is easy to imagine a future where all vehicles on our roads are fully autonomous.  Paradoxically, at that point it will be much easier for autopilots to drive, because a huge source of unpredictable behaviour and unexpected mistakes will have been removed from the road: human drivers.  In such a future, roads will be a much more controlled environment, in which autonomous driving systems can more easily evaluate the conditions and predict how things will evolve.  The most challenging time for autopilots is now, while we are still learning their limits and capabilities and humans are still on the road.  The solution implemented nowadays, i.e. requiring a human driver to be able to take back control at any time in case of issues, is not optimal.  Humans are easily distracted, particularly when they have nothing to do but look at the road for hours.  Different solutions have to be found so that we can progress our understanding of autopilots, and improve them in the process, while minimising accidents and crashes.”
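To illustrate the sensitivity to small input changes described above, here is a minimal Python sketch; it is a toy, not code from any real autopilot.  A deliberately simple linear classifier labels a synthetic “image”, and a small, structured nudge to every pixel, in the style of the fast gradient sign method, is enough to flip its decision.  All names, weights and inputs below are hypothetical.

    import numpy as np

    # Toy linear "sign classifier" over a flattened 8x8 image.
    # Everything here is synthetic: hypothetical weights, hypothetical input.
    rng = np.random.default_rng(42)
    n_pixels = 64
    weights = rng.normal(size=n_pixels)  # stands in for learned parameters

    def classify(image):
        # The label is simply the sign of the linear score.
        score = float(weights @ image)
        return ("STOP SIGN" if score > 0 else "NO SIGN"), score

    # Random starting image, oriented so the model initially sees a stop sign.
    image = rng.normal(size=n_pixels)
    if weights @ image < 0:
        image = -image
    label, score = classify(image)
    print(f"original:  {label} (score {score:+.2f})")

    # FGSM-style perturbation: move each pixel slightly against the gradient
    # of the score (for a linear model, the gradient is just the weights).
    # eps is chosen as the smallest per-pixel step that flips the decision.
    eps = 1.01 * score / np.abs(weights).sum()
    adversarial = image - eps * np.sign(weights)
    label, score = classify(adversarial)
    print(f"perturbed: {label} (score {score:+.2f}, per-pixel change {eps:.3f})")

For a genuinely high-dimensional image model, the same effect can typically be triggered by perturbations far too small for a human to notice, which is why stickers on a sign, reflections or changed lighting can be enough to confuse a recognition system.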
