The US National Highway Traffic Safety Administration has begun a preliminary investigation into 25,000 Tesla Motors Model S cars after a fatal crash involving a vehicle driving in its ‘Autopilot’ mode.
The agency said the crash involved a 2015 Model S operating with its automated driving systems engaged, and that it “calls for an examination of the design and performance of any driving aids in use at the time of the crash”.
A preliminary investigation is the first step the agency takes before it can seek to order a recall, should it conclude the vehicles are unsafe.
NHTSA said in a statement that the driver of the 2015 Model S was killed in a crash on May 7 in Williston, Florida, while the car was operating in Autopilot mode. Preliminary reports indicate the crash occurred when a tractor-trailer made a left turn in front of the Tesla at an intersection.
Tesla said in a blog post that this is the first known fatality in just over 130 million miles driven with Autopilot activated.
Tesla said, “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied”.
The company said, “The high ride height of the trailer, combined with its positioning across the road and the extremely rare circumstances of the impact, caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S”.
“Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert. Nonetheless, when used in conjunction with driver oversight the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving,” Tesla said.
The ethical debate
The incident comes as lawmakers and car companies continue to wrestle with the ethical considerations surrounding the use of an Autopilot mode, including whether a car should be programmed to kill its occupant if doing so would save more lives “for the greater good”.
A recent study called ‘Autonomous vehicles need experimental ethics’ explored the matter.
“Some situations will require AVs to choose the lesser of two evils,” the study said. “For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage and not discouraging buyers.”
The study didn’t attempt to prescribe ‘right’ or ‘wrong’ answers, but stressed that how companies answer these questions will affect the adoption of autonomous vehicles.
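To see how the three objectives can pull in different directions, consider a deliberately simplified sketch (purely hypothetical, not drawn from the study or from any real vehicle): a rule that minimises expected harm will sometimes sacrifice the occupant, while a weighting that favours the occupant is precisely the setting that could provoke public outrage. The option names and the occupant_weight parameter below are illustrative assumptions only.

```python
# Toy "lesser of two evils" chooser -- illustrative only, NOT the study's
# algorithm or any manufacturer's. It picks the option with the lowest
# weighted expected harm.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    pedestrian_harm: int   # expected number of pedestrians harmed
    occupant_harm: int     # expected number of occupants harmed

def choose(options, occupant_weight=1.0):
    """Return the option with the lowest weighted expected harm.

    occupant_weight > 1 biases the car toward protecting its occupant --
    the hypothetical 'Save Me' setting speculated about later in the article.
    """
    return min(options,
               key=lambda o: o.pedestrian_harm + occupant_weight * o.occupant_harm)

if __name__ == "__main__":
    swerve = Option("drive into wall", pedestrian_harm=0, occupant_harm=1)
    stay = Option("continue into pedestrians", pedestrian_harm=3, occupant_harm=0)
    print(choose([swerve, stay]).name)                     # purely utilitarian: sacrifices the occupant
    print(choose([swerve, stay], occupant_weight=5).name)  # occupant-protective setting: hits pedestrians
```

The same two outcomes flip depending on a single weight, which is the crux of the consistency-versus-acceptability problem the study describes.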
Would you buy an autonomous vehicle if you knew that it was pre-programmed to drive into a wall (for example) rather than hit a group of pedestrians, potentially killing you for ‘the greater good’?
From a legal standpoint, there is also the question of who is responsible for a collision – the occupant of the vehicle or the manufacturer whose control algorithm decided what to hit.
Will autonomous vehicles come with ‘Save Me, Kill Others’ as an optional extra?
This discussion will be ongoing for years to come.