A Driverless Death: What are the risks of driverless cars?
After a pedestrian was killed by an experimental driverless car, Angelique Carr talks to some experts about what some of the risks of AI cars might be.
Up until recently, the biggest danger surrounding the self-driving car was drivers of other cars, who would turn in their seats to take photos of a car with no one driving. Unfortunately, that is no longer the case. Last month a pedestrian was struck and killed by a self-driving Uber car in Arizona. While there was a driver behind the wheel at the time, Uber’s in-house software was in control. Neither reacted.
One of the big problems with driverless cars, according to Dr Alexandre Mendes, deputy head of UON’s NUbots, is that the people behind the wheel become complacent.
“My suspicion is that this driver has been doing this for a long time and everything […] was going well so far. Probably the first days that that person was behind the wheel he was paying attention to absolutely everything, ready to take control over from the artificial agent, but then after one day, two days, a week, a month of the car behaving correctly, you kind of get used to it and then you relax a little bit too much. And then that’s when something bad happens.”
The danger that comes with new technology is one we’ve grappled with for ages in popular media, mostly in the form of sci-fi such as I, Robot, The Time Traveller or, more recently, Black Mirror.
Black Mirror has been declining in quality recently (with the exception of a few good episodes) because it has shifted from playing with the idea that technology is ever-changing and our relationship to it is complex, to reciting ‘technology = bad’. The real world is more nuanced than that. While driverless cars have unfortunately caused a death, implementing them nation-wide (once they are road ready) will save countless lives.
But before we get there, there are ethical questions we need to answer that were once only hypothetical. Take the trolley problem: you are on an out-of-control trolley and there are five people tied to the track ahead. You can switch the trolley onto another track, but on that branch there is one person tied down. Do you let five people die, or actively choose to kill one person?
Dr Chris Falzon, ethics professor here at UON, says that “the usual justification [for driverless cars] is that they are going to cause fewer deaths and injuries, so we should have them, more or less on utilitarian grounds. The main problem so far seems to have been companies like Uber who are of course motivated by profit, trying to rush them into use before they are ready.
“The typical idea seems to be that in cases where an accident can’t be avoided, an ethical car will be programmed to do whatever causes the least harm to people. The driverless cars are going to be little utilitarians.”
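To make the “little utilitarians” idea concrete, here is a toy sketch of a least-harm decision rule applied to the trolley problem. This is purely illustrative: the function name, outcome labels and harm counts are invented for this example, and no real vehicle software works this simply.

```python
# Toy illustration only: a hypothetical "least harm" chooser, not any
# real manufacturer's software. All names and numbers are invented.

def least_harm(outcomes):
    """Pick the outcome that harms the fewest people (a utilitarian rule)."""
    return min(outcomes, key=lambda o: o["harmed"])

# The trolley problem, restated as data:
choice = least_harm([
    {"action": "stay on course", "harmed": 5},
    {"action": "switch tracks", "harmed": 1},
])
print(choice["action"])  # a strictly utilitarian car switches tracks
```

The rule is trivial to state in code, which is exactly the point of the ethicists’ worry: the hard part is not the arithmetic but deciding whose harm counts, and by how much.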
But, as Mendes points out, from a car manufacturer’s perspective, a car that might sacrifice its passengers for the greater good is not a car that people are going to buy.
Ultimately, the programming in these cars will be adapted to the local laws of wherever they are driving. And before politicians and legislators allow the full-scale use of driverless cars, they “will have to be orders of magnitude safer than humans.”
Any useful new technology tends to be integrated into our lives sooner than we think possible. Dr Mendes believes that we’ll have Tesla (and Tesla-styled) cars that can navigate the highway between Newcastle and Sydney with no driver intervention in only two to three years.
Feature image by Reid McManus