You would think from everything you read and hear that driverless cars are inevitable. Carmakers have already spent billions developing them. But lately their message about driverless cars is changing. It could be a decade or more before they can drive on the roads. Some believe a truly driverless car is a fantasy and may never exist.
We see two huge problems with programming driverless cars. First, they may be safer than human drivers in the distant future but, until then, they are a lot more dangerous. Second, engineers want to program them to make decisions just like humans – yet they don’t want them to drive just like humans!
Driverless cars are also called self-driving cars or autonomous vehicles (AVs), but language is important. For example, a self-driving car may still have a driver and an AV may be only partially autonomous. These days, most experts mean levels 3 and 4 autonomy:
- Level 3 (conditional) – Vehicle drives but can ask the driver to intervene, e.g. on the highway.
- Level 4 (high) – Vehicle drives even if driver fails to intervene, in a covered metro area.
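The practical difference between the two levels can be made concrete in code. This is only a minimal sketch – the enum and function names below are our own invention, not any industry API – but it captures the key distinction: whether a human must remain ready to take over.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative subset of the SAE automation levels described above."""
    CONDITIONAL = 3  # Level 3: the vehicle drives, but may ask the driver to take over
    HIGH = 4         # Level 4: the vehicle drives itself within a geo-fenced area

def needs_human_fallback(level: AutonomyLevel) -> bool:
    # At Level 3 a human must stay ready to intervene; at Level 4 the vehicle
    # is expected to handle failures itself inside its approved operating area.
    return level is AutonomyLevel.CONDITIONAL
```

The point of the distinction is liability and design: a Level 3 system can hand a hard problem back to a person, while a Level 4 system must solve it alone – which is why Level 4 deployments are confined to geo-fenced areas.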
Carmakers are backing away from earlier claims about how quickly they can produce fully driverless cars. The glamorous sci-fi images were a fantasy: now they have to make cars that really are safer than human drivers. Their biggest theme now is how much the industry underestimated – or overestimated – the task.
Nissan and Ford say the industry overestimated autonomy. Ford’s CEO said: “We overestimated the arrival of autonomous vehicles. [Their] applications will be narrow, what we call geo-fenced, because the problem is so complex”. Mercedes-Benz says the industry underestimated the size of the task. It has adopted “sober realism” and shifted towards producing more commercially viable driverless trucks. BMW claims it “never believed in the hype”.
All along, carmakers have underestimated the complexity of driving and what a good job most humans do. We have a remarkable ability to look at other humans and understand their behaviour; we effectively fill in gaps in information.
The problem with autonomous driving systems is they are brilliant at deductive reasoning, but not nearly so good at inductive reasoning.
Learning through inductive reasoning
Machines learn through deductive reasoning. For example, engineers show a machine a million photos of dogs so it can tell the difference between dog and not-dog. Inductive reasoning goes further: the machine must predict whether the dog it sees will stay on the path or dart into the playground. This is far more challenging to learn.
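A toy sketch makes the gap concrete. The “classifier” below simply memorises labelled examples – a deliberate caricature of training on photos, with every name invented for illustration (real models generalise far better) – but it shows the same limitation in kind: confronted with a situation outside its training data, it has no basis for a prediction.

```python
def train(labelled_examples):
    # Toy "learning": memorise the label attached to each feature tuple.
    return dict(labelled_examples)

def classify(model, features):
    # Recognising a stored case works; a never-seen situation does not.
    return model.get(features, "unknown")

model = train([
    (("four_legs", "fur", "on_path"), "dog"),
    (("two_wheels", "metal", "on_road"), "bicycle"),
])

classify(model, ("four_legs", "fur", "on_path"))        # matches training data
classify(model, ("four_legs", "fur", "in_playground"))  # novel: no prediction
```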
In the same way, AV vision systems can only respond to what they have been shown before. They cannot work out what to do in situations they have never encountered. They still cannot reliably tell whether objects are moving or stationary, nor can they handle uneven terrain, different types of road markings or bad weather.
Yet tech people say progress is inevitable and these problems will sort themselves out. We assume technology is the best and highest solution to a problem. One writer even calls this technochauvinism. We think technology will reduce the number of accidents, but we fail to consider that humans must program these cars! Those humans have to anticipate every one of millions of different driving scenarios – and behave ethically while doing so. Is that a fantasy?
One example of overestimation is the introduction of robot taxis.
Most of us will never be able to afford a driverless car, so we would have to share robot taxis. But there is a long way to go with this technology:
- They need humans as backup drivers to take over if things go wrong
- They cannot go anywhere – only along slower, easier routes or in carefully chosen areas
- Even if a robot car can “see”, it cannot interpret everything it sees.
Uber said last year that robot taxis would take “a long time” to arrive, and refused to name a date. Based on the Moore’s-Law-style assumption that driverless car performance doubles every 16 months, robot taxis may not hit the roads until at least 2035 – another 15 years.
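The arithmetic behind that estimate can be sketched directly. Assuming performance doubles every 16 months, the time to reach a required improvement factor is 16 × log₂(factor) months. The factor of 2¹¹ below is purely illustrative – it is chosen because eleven doublings take roughly the 15 years mentioned above, not because anyone knows the true gap.

```python
import math

DOUBLING_MONTHS = 16  # the assumed Moore's-Law-style doubling period

def years_to_improve(factor: float) -> float:
    # Number of doublings needed, times the doubling period, converted to years.
    return math.log2(factor) * DOUBLING_MONTHS / 12

years_to_improve(2 ** 11)  # 11 doublings -> about 14.7 years
```

The takeaway is how punishing the exponent is: even under a generous doubling assumption, each extra order of magnitude of required safety adds years to the timetable.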
Next week we will explore this assumption and decide whether or not driverless cars are any more than a fantasy.