AUTONOMOUS CARS: HOW SAFE IS SAFE ENOUGH?

It is just a matter of time until most, if not all, of the traffic on our roads consists of driverless vehicles. The question is: at what point will we consider autonomous technology safe enough to take the human out of the equation?

According to analysts, by 2030 up to 60 percent of auto sales could be self-driving vehicles. Taking human error out of the picture will make for safer roads, but given that we will be putting our lives in the hands of computers – and computers are vulnerable to their own sorts of crashes, as well as to hackers – how safe is safe enough?

The short answer is that, for us to accept them, autonomous cars will have to be much safer than manually driven vehicles. That bar may be lower than it sounds: human error is responsible for 90 percent of road accidents. With 1,235 people dying on Australian roads in the year ending April 2017 (a rate of 5.1 per 100,000 people), a 90 percent reduction in fatal accidents (0.9 × 1,235 ≈ 1,112) would have saved more than 1,100 lives in the past year alone.

A 2015 study into the effectiveness of low-speed autonomous emergency braking (AEB) found the technology reduced real-world rear-end crashes by 38 percent. The Australasian New Car Assessment Program (ANCAP) was so impressed that from next year it will refuse to grant new cars a five-star safety rating unless they have AEB.

Meanwhile, the Australasian College of Road Safety has criticised the government, arguing that hundreds of millions of dollars need to be put into infrastructure and testing for autonomous vehicles.

The revolution is upon us.

Are humans safe drivers?

Short answer: no. Humans are responsible for 90 percent of traffic accidents. On top of that, the USA’s AAA Foundation for Traffic Safety states that nearly 80 percent of drivers expressed significant anger, aggression or road rage behind the wheel at least once in the past year.

Alcohol-impaired fatalities made up 29 percent of total vehicle traffic fatalities in 2015, and of America’s roughly 35,000 annual traffic fatalities, around 10 percent (3,477 lives in 2015) were caused by distracted driving.

Remove human error from driving and you save a significant number of lives, and prevent countless injuries and enormous amounts of property damage.

How safe do the machines need to be?

Last year, a Tesla driver was killed while cruising on a Florida divided highway in Autopilot mode. Contrary to expectations, however, the investigation into the crash cleared the technology of fault.

The US National Highway Traffic Safety Administration (NHTSA) investigation reported that “a safety-related defect trend has not been identified at this time and further examination of this issue does not appear to be warranted”.

The report also pointed out that “the data show that the Tesla vehicle’s crash rate dropped by almost 40 percent after Autosteer installation”. In other words, Tesla’s own data suggest its cars crash considerably less often with the semi-autonomous system engaged than without it.

Regulators will have to come up with a definition of ‘safe’ – whether that means the machines must drive flawlessly or simply break fewer laws and get into fewer accidents than human drivers do. According to the RAND Corp think tank, a fleet of 100 cars would have to drive 275 million miles (440 million kilometres) without a fatality to demonstrate that it matches the fatality rate of today’s human-driven vehicles. At the time of the fatal May 2016 crash, Tesla owners had logged 130 million miles (208 million kilometres) in Autopilot mode.
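RAND’s figure follows from a standard zero-failure reliability bound. Here is a back-of-envelope sketch of that arithmetic in Python, assuming a US human fatality rate of about 1.09 deaths per 100 million miles and a 95 percent confidence target – figures chosen to match RAND’s published scenario, not taken from RAND’s own code:

import math

# Zero-failure reliability bound (assumed reconstruction of RAND's scenario).
# If a fleet drives N miles with zero fatalities, the one-sided 95% confidence
# upper bound on its fatality rate is -ln(0.05) / N. To show the fleet is no
# worse than human drivers, solve that expression for N.

human_fatality_rate = 1.09e-8   # ~1.09 deaths per 100 million miles (US, 2015)
confidence = 0.95

required_miles = -math.log(1 - confidence) / human_fatality_rate
print(f"Failure-free miles needed: {required_miles / 1e6:.0f} million")
# -> roughly 275 million miles, in line with the figure quoted above

Under these assumptions the bound lands almost exactly on RAND’s 275 million miles, which is why Tesla’s 130 million Autopilot miles at the time, impressive as they were, fell short of a statistical proof of safety.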

There are also different levels of automation, ranging from assisting drivers with braking, parking and lane-changing (‘level 1’), which we already have, to full autonomy (‘level 5’), which is years away.

No single test can determine the safety of self-driving cars. The German government is leading the way by sponsoring research into how best to ensure the safety of automated driving systems when confronted with the full range of traffic hazards.

Manufacturers representing 99 percent of the US new-car market have agreed to include AEB systems in all new cars, aiming to prevent 28,000 crashes and 12,000 injuries by 2025. AEB only addresses rear-end crashes, but it is just one of many semi-autonomous features currently in development, all of which promise to make our roads safer.

Training the machines

Tesla boss Elon Musk tweeted in January that he was pushing a software update featuring Shadow mode to all Teslas with HW2 Autopilot capabilities. The car’s AI would ‘shadow’ its human driver, comparing the decisions it would have made with the decisions the human actually made.
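Conceptually, shadow mode amounts to a comparison loop: the AI proposes an action, the human acts, and disagreements are logged for later analysis. Here is a minimal sketch of that idea in Python – all names and thresholds are hypothetical, since Tesla’s actual implementation is not public:

from dataclasses import dataclass

# Hypothetical sketch of a 'shadow mode' comparison; not Tesla's code.

@dataclass
class Decision:
    steering_angle: float  # degrees
    braking: float         # 0.0 (none) to 1.0 (full)

def shadow_compare(human: Decision, ai: Decision,
                   steer_tol: float = 2.0, brake_tol: float = 0.1) -> bool:
    """Return True if the AI's proposed decision diverges from the human's
    beyond the given tolerances - a 'disagreement' worth logging."""
    return (abs(human.steering_angle - ai.steering_angle) > steer_tol
            or abs(human.braking - ai.braking) > brake_tol)

# In shadow mode the AI never actuates the controls; divergences are simply
# recorded so the model can be evaluated against real human driving at scale.
if shadow_compare(Decision(5.0, 0.0), Decision(12.0, 0.3)):
    print("log disagreement for later analysis")

The appeal of this design is that every mile driven by a human becomes free evaluation data for the AI, without the AI ever taking a safety-critical action.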

Hacks and crashes

Last year Chinese researchers remotely hacked into the Tesla Model S after discovering some ‘security vulnerabilities’. It was the first time anyone had remotely hacked a Tesla, but it almost certainly won’t be the last, and it is an issue we need to take seriously.

Computer crashes will happen too, but far less often than human errors – ideally so infrequently as to be statistically insignificant.

Convincing the public

However safety is defined, and no matter what the statistics say, humans are naturally wary of leaving their fates in the hands of a machine. To overcome this scepticism, the application of autonomous vehicles must be transparent, according to Brian Lathrop, senior manager of the Electronics Research Lab at Volkswagen Group of America.

That means letting people on the road know when a vehicle is in self-driving or driver-assist mode. Autonomous vehicles will also have to let those in the cockpit know what they plan to do, and give the person in the driver’s seat a chance to regain control if necessary.

Human drivers sharing the road with various levels of autonomous vehicles will concern many. The technology will have to earn trust – like a teenager on P-plates.

But the numbers don’t lie. More than a thousand Australian lives per year – and millions more worldwide – stand to be saved.

It is arguably only a matter of time before human drivers are banned from roads.
