The unintended consequences of driverless cars

Self-driving cars could be a life-changing and life-saving technology. But they will have to be 100% safe, very cautious and even dull. Like all new tech, self-driving cars will also have some unintended consequences.

When is a self-driving car a self-driving car?

Even the language we use to describe a car changes the way we approach it. If we think a car can drive itself, we expect it to drive itself.

Researchers with the AAA Foundation in the US found that the name given to the same partially automated driving system changed the way drivers perceived it. The system was presented as either AutonoDrive or DriveAssist, and each participant received training under one of the two names.

Training in AutonoDrive emphasised the system’s capabilities and conveniences, while DriveAssist training emphasised its limitations and the driver’s responsibility. After driving the vehicle, the AutonoDrive group was far more likely than the DriveAssist group to trust the vehicle to take action:

  • 42% of the AutonoDrive group (4% of DriveAssist) said the car could avoid a collision
  • 56% of the AutonoDrive group (27% of DriveAssist) said the car could reduce speed on a tight curve
  • 42% of the AutonoDrive group (11% of DriveAssist) said the name made the system sound more capable than it is.

Unfortunately, drivers in the AutonoDrive group were also more willing to do distracting or risky things, such as talking on the phone. The mere suggestion that a car can drive itself encourages people to become more passive.

Safe means boring

When people don’t have to do the driving, such as when they are passengers on a long trip, most experience “passive fatigue”. This simply means becoming fatigued by doing nothing. The occupant of a self-driving car changes from being a doer to being merely a monitor, and monitoring is not something humans do well.

Around 6-12% of passengers will even get motion sickness, particularly if they try to read or watch a movie!

In normal driving, you know which tasks are your responsibility and which ones the car takes care of. In short, drivers know where they stand. In a partially or fully self-driving car, they don’t know where to draw the line, and it may not be safe for the vehicle to suddenly ask them to take over.

The problem is these systems are simply not as smart as humans when confronted with “edge cases” (unusual circumstances).

Not as smart as humans

As one AI researcher put it, self-driving cars will not be viable in the short term because of the sheer number of edge cases. Humans are brilliant at driving in bad weather, in heavy traffic, in foreign cities, and on roads with poor markings or signage.

A machine has to learn about every single one of these circumstances and adapt to it. It must classify a pedestrian, lane marking or traffic light and then decide how to act. Currently, there is no widely accepted way of ensuring that the machine learning algorithms used in these cars are always safe.
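
To make the classify-then-decide idea concrete, here is a minimal sketch in Python. Everything in it (the object types, the distances, the rules) is an invented illustration, not any manufacturer’s actual logic. The point is that every behaviour must be anticipated in advance, and anything unfamiliar lands in an “unknown” bucket with no reliable rule attached.

    from dataclasses import dataclass
    from enum import Enum

    class ObjectType(Enum):
        PEDESTRIAN = "pedestrian"
        LANE_MARKING = "lane_marking"
        TRAFFIC_LIGHT = "traffic_light"
        UNKNOWN = "unknown"  # the "edge case" bucket

    @dataclass
    class Detection:
        object_type: ObjectType
        distance_m: float  # estimated distance ahead, in metres

    def decide(detections: list[Detection]) -> str:
        """Toy decision rules: classify first, then act.
        Anything classified as UNKNOWN has no safe rule attached,
        which is exactly the edge-case problem described above."""
        for d in detections:
            if d.object_type == ObjectType.PEDESTRIAN and d.distance_m < 30:
                return "brake"
            if d.object_type == ObjectType.UNKNOWN:
                return "slow down and alert the occupant"
        return "continue"

    # Example: a pedestrian 20 metres ahead forces a braking decision.
    print(decide([Detection(ObjectType.PEDESTRIAN, 20.0)]))  # -> brake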

One unintended consequence is the constant need for software updates. How can we be sure every update is at least as safe as the version it replaces? There must be a way to prove each update is safe, yet there are no standards or regulations for a fully autonomous system, because all current standards assume a human driver is present.
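
One common engineering approach to this question is a regression gate: an update ships only if it performs at least as well as the current version on a fixed suite of test scenarios. The sketch below is purely illustrative, with invented metric names and scores; it does not describe any regulator’s or manufacturer’s actual process.

    # Hypothetical safety regression gate for a software update.
    # The scenario names and scores are invented for illustration.

    def passes_regression_gate(old_scores: dict[str, float],
                               new_scores: dict[str, float]) -> bool:
        """Approve an update only if it performs no worse than the
        current version on every safety scenario in the test suite."""
        return all(new_scores[s] >= old_scores[s] for s in old_scores)

    current = {"pedestrian_avoidance": 0.998, "lane_keeping": 0.995}
    update = {"pedestrian_avoidance": 0.999, "lane_keeping": 0.993}

    # The update improved one score but regressed another, so it fails.
    print(passes_regression_gate(current, update))  # -> False

Even a gate like this only checks the scenarios someone thought to include, so the edge-case problem returns in another form.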

They cost too much even for fleets

Of course, a fully self-driving car with lasers and computers on board will be extremely expensive. Many people argue such cars are best run by fleet owners rather than by individuals. Even so, fleet owners would have to continually calibrate and maintain this high technology, which would be not only expensive but also a huge responsibility.

Running a fleet of self-driving cars is not the same as running commercial planes. Planes use autopilot software, which is relatively safe because it does not rely on machine learning algorithms. In fact, many engineers have asked how self-driving systems based on machine learning can ever be rigorously tested to show they are 100% safe.

Another unintended consequence is hacking. All areas of computing have suffered data hacks, and self-driving cars will be no exception. Fleet owners will have to deal with the possibilities and consequences of hacking.

Insurance will have to change

There is no doubt insurance will evolve to accommodate vehicles with no driver, and this could have a number of consequences. For example, if self-driving cars are genuinely safer than those driven by humans:

  • humans could be discouraged from driving at all, so there would be no need to insure them, or
  • it could become too expensive to insure human drivers.

If only fleets own self-driving cars, they may provide their own insurance. If cars are partially automated:

  • people who don’t fully understand where their responsibility lies could have more accidents
  • as people adjust to accident-avoidance technologies, there could be fewer accidents where a person is at fault but more where the car is at fault.

Currently, insurers don’t know the real risks of self-driving cars. Actuaries study the way humans drive, not the way self-driving cars do. The engineers who build the vehicles know more about the potential risks than insurers do, and each time an accident happens, those engineers will fix the bug to make sure it does not happen again.
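
At its simplest, actuarial pricing turns claim frequency and claim cost into a premium. The sketch below uses invented figures purely to show why the missing claims history matters: for self-driving cars, the first number in the calculation is a guess.

    # Simplified pure-premium calculation. All figures are invented
    # for illustration; they are not real claims statistics.

    accident_frequency = 0.05    # expected claims per vehicle per year (unknown for AVs)
    average_claim_cost = 12_000  # average cost per claim, in dollars
    loading_factor = 1.3         # expenses, reinsurance and profit margin

    pure_premium = accident_frequency * average_claim_cost  # 600.0
    annual_premium = pure_premium * loading_factor          # 780.0

    print(f"Pure premium:   ${pure_premium:,.2f}")
    print(f"Annual premium: ${annual_premium:,.2f}")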

In the end, who will want to take full responsibility for the risks of self-driving cars?

Giving up freedom

One unintended consequence of driverless cars is that people must give up their freedom to drive. This may be one of the biggest consequences, yet hardly anyone mentions it. As Benjamin Franklin said: “Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety”.

Corrina Baird

Writer and Researcher, greenslips.com.au

Corrina used to lend her car to her kids and discovered what Ls, Ps and demerits mean for greenslips. After 20 years in financial services and over 8 years with greenslips.com.au, she’s an expert in the NSW CTP scheme. Read more about Corrina
