Autonomous Cars Face Moral Dilemma: How Much Control Should You Give A Smart Car?

By Adie Pieraz, Jun 27, 2016 05:20 AM EDT

Artificial intelligence is still in its infancy, and there is plenty of ground to cover before it is perfected. Despite this, car manufacturers have already begun selling autonomous, or self-driving, vehicles. With this have come reports of accidents, luckily without any fatalities. But that raises the question of what an autonomous car should do in such a situation: should it save the passenger, as it is programmed to do, or should it be capable of choosing the lesser of two evils?

According to Today Online, new research has been conducted that included surveys of United States residents. Generally, those surveyed said that autonomous vehicles should be capable of making decisions for the greater good, not just for those inside the vehicle. In other words, if the car is about to crash into either a wall or a crowd of pedestrians, it should choose the wall.

In short, robotic morality. The question then becomes: should vehicle manufacturers take this into consideration?

"Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today," said a study by Jean-Francois Bonnefon of the Toulouse School of Economics. As reported by Japan Today, the study was co-authored by Azim Shariff of the University of Oregon and Iyad Rahwan of the Massachusetts Institute of Technology.

Bonnefon, Shariff and Rahwan admit that, at this point, there is no easy way to design an algorithm that properly merges moral values and personal self-interest. They also note that cultural and geographical differences will make a common moral ground difficult to reach.

At this point, their study offers one solution: that vehicle manufacturers provide users with options, or decision rules, for what to do in such situations, which the authors note are "low-probability events" anyway.
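To make the idea of user-selectable decision rules concrete, here is a minimal sketch of what such a setting might look like in software. This is purely illustrative: the study proposes no algorithm, and every name here (`EthicsSetting`, `choose_maneuver`, the harm counts) is a hypothetical assumption, not something from the research.

```python
# Hypothetical sketch of a user-selectable "decision rule" for an
# autonomous vehicle. Not from the study; all names are invented.

from enum import Enum


class EthicsSetting(Enum):
    PROTECT_PASSENGERS = "protect_passengers"  # always favor the occupants
    MINIMIZE_HARM = "minimize_harm"            # favor the greater good


def choose_maneuver(setting, passengers_at_risk, pedestrians_at_risk):
    """Return 'swerve' (hit the wall, risking the passengers) or
    'stay' (continue toward the pedestrians), per the user's rule."""
    if setting is EthicsSetting.PROTECT_PASSENGERS:
        # Self-interested rule: never sacrifice the occupants.
        return "stay"
    # Greater-good rule: swerve only if that reduces total casualties.
    return "swerve" if passengers_at_risk < pedestrians_at_risk else "stay"


print(choose_maneuver(EthicsSetting.MINIMIZE_HARM, 1, 10))       # swerve
print(choose_maneuver(EthicsSetting.PROTECT_PASSENGERS, 1, 10))  # stay
```

Even this toy example shows the tension the researchers describe: the "right" output depends entirely on which rule the owner selects, which is exactly why a single shared algorithm is so hard to agree on.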

Joshua Greene of Harvard's Center for Brain Science points out the root problem: right now, it is more philosophical than technical. "Before we can put our values into machines," he stated, "we have to figure out how to make our values clear and consistent."

© 2019 ITECHPOST, All rights reserved. Do not reproduce without permission.
