[Ed. – Because that’s definitely a situation in which all the variables can be imagined and programmed in advance, and it will never be the case that a human acting autonomously on the spot would be a far better decision-maker, on principle, than a preprogrammed computer. Note, however, that the study cited here didn’t focus on that. It assumed that most crucial point away — and yet that’s the ONLY point we should be talking about. Conclusion: we’re too morally backward for this problem. No self-driving cars. Full stop. Next question.]
Would you get into an automated self-driving vehicle, knowing that in the event of an accident, it might sacrifice your life if it meant saving the lives of 10 other people?
Autonomous vehicles (AVs), also known as self-driving vehicles, are already a reality. Initial guidelines from the National Highway Traffic Safety Administration regarding this technology are expected by this summer, and road tests are currently in progress across the country.
But one barrier to the widespread use of autonomous vehicles is deciding how to program these vehicles’ safety rules in the most socially acceptable and ethical way.
After a six-month survey, an international team of researchers published its findings Thursday in the journal Science: the public holds a conflicted view of the future of self-driving technology. …
While most respondents favored an outcome that saved the most lives, the survey results also indicated that participants would be less likely to buy a car programmed on that principle, preferring instead a car that would do more to protect themselves and their families.