Passenger or pedestrian - who should the car save?
Both technology giants and traditional car companies are well on their way to producing road-ready autonomous cars.
Many say that these driverless vehicles will ease traffic congestion and cut road accidents - in some cases, it is claimed, by as much as 90%.
But questions over safety and decision making linger on - and new research has shed light on the moral dilemma of a fatal accident.
Academics in the US and France have carried out a survey and found people are in favour of driverless cars minimising casualties, even if it means killing the driver. Essentially, they want the car to choose the greater good.
[Image: Google's driverless car out on the roads]
Unsurprisingly, they also don't want to be the one inside the motor when it makes that choice.
“Although people tend to agree that everyone would be better off if AVs (autonomous vehicles) were utilitarian (in the sense of minimizing the number of casualties on the road), these same people have a personal incentive to ride in AVs that will protect them at all costs,” the researchers wrote in a paper published in the journal Science.
Six online surveys were conducted with a total of 1,928 participants; 76% of respondents said it would be more moral for the vehicle to kill the driver than 10 pedestrians.
[Image: A prototype of Google's driverless car]
However, the same participants were strongly opposed to government regulation that would ensure the cars were programmed with the greater good in mind.
"Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether,” explained the team.
They went on to pose other important questions, such as whether the ages of the passengers or pedestrians should be considered.
[Image: A driverless car during testing at the headquarters of motor industry research organisation MIRA at Nuneaton in the West Midlands]
Similarly, should manufacturers offer vehicles with different versions of a "moral algorithm" and let customers pick the one they want?
The team concluded that building ethical machines is one of the most difficult challenges in artificial intelligence, but noted that public opinion and social pressure could shift as the technology advances.