Are Self-Driving Cars Safe?

Smart tech is starting to take over our lives. With self-driving technology expected to spread to consumer cars in the near future, people are starting to wonder whether this new technology is actually safe, or whether our lives will become more dispensable.

After all, who can guarantee us that the high-tech self-driving cars of the future will not be programmed to dispose of us for a greater good? What if the greater good is actually a little evil? Maybe we are heading toward a technological dictatorship like those in Sci-Fi dystopias. Maybe Artificial Intelligence machines will make decisions about our individual worth to mankind.

These kinds of questions might be a little depressing for the ethicists who study Artificial Intelligence's future impact on our culture and civilization. But, whether we like it or not, such questions must now be asked by anyone trying to foresee the future of the human race in a world increasingly managed by intelligent machines.

Once advances in technology allow the step from assisted-driving cars to fully self-driving cars, it will only be a matter of time until intelligent cars become compulsory. The reasons may vary, but most likely self-driving cars will simply be considered safer than human-driven cars. But, as ethicists at the University of Alabama at Birmingham put the question: what happens when a self-driving car out of control risks hitting a group of children?

You will definitely not like the answer. It might even shock you. According to the ethicists, when the AI system installed in your self-driving car has to decide between saving you and saving the kids, you'll be the one sacrificed in the end.

In the hypothetical situation where the car loses control on a patch of ice and heads toward a group of schoolchildren on the sidewalk, it is possible, even probable, that the car's AI software will choose to divert the car into a wall in order to save the children, killing you in the process.

What possible criteria would a self-driving AI system use to come to such a decision? From a utilitarian point of view, we should always choose the action that produces the greatest happiness for the greatest number of individuals.

And since you are alone in the car, or, even with passengers, most likely outnumbered by the group of schoolchildren on the sidewalk, that's bad news for you.
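
To see how such a rule might look in practice, here is a minimal sketch in Python of a purely utilitarian decision procedure. Everything in it, from the option names to the casualty estimates, is a hypothetical illustration for this article, not the software of any real self-driving car.

```python
# A minimal, purely hypothetical utilitarian decision rule. The option
# names and casualty estimates are invented for this illustration and
# do not come from any real autonomous-vehicle system.

from dataclasses import dataclass


@dataclass
class Option:
    description: str
    expected_casualties: float  # estimated lives lost if this option is taken


def utilitarian_choice(options: list[Option]) -> Option:
    """Return the option that minimizes expected casualties,
    i.e. preserves the greatest number of lives."""
    return min(options, key=lambda o: o.expected_casualties)


if __name__ == "__main__":
    # The article's scenario: swerve into a wall, killing the lone
    # occupant, or continue toward five schoolchildren.
    options = [
        Option("divert into a wall, sacrificing the occupant", 1.0),
        Option("stay on course toward the schoolchildren", 5.0),
    ]
    print(utilitarian_choice(options).description)
    # -> divert into a wall, sacrificing the occupant
```

Under this rule the arithmetic is brutally simple: one casualty is less than five, so the car sacrifices its occupant.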

Certainly, there are many possible twists on the situations self-driving AI systems will encounter and the decisions they will have to make.

For instance, what would the AI system choose if the person in the self-driving car were a celebrity, someone society deems more important, like the Queen of England? Is it always one life against another? One life against many?

Or do celebrities merit additional consideration? And by what criteria do we decide that one celebrity matters more than another? Wealth? Social status? Contribution to the betterment of humankind? Is the life of a genius like Einstein more valuable than the Queen's?
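
To show why these questions are so uncomfortable, one could extend the sketch above so that each life carries a weight standing for some notion of societal value. The weights below are invented placeholders chosen only to expose the problem; they are not a claim about how any life should actually be ranked.

```python
# Purely hypothetical extension of the sketch above: every affected
# person carries a weight meant to stand for "societal value". The
# weights are invented placeholders that expose the ethical problem,
# not a claim about how any life should actually be ranked.

def weighted_utilitarian_choice(options: dict[str, list[float]]) -> str:
    """Return the option that minimizes the total weight of lives lost."""
    return min(options, key=lambda name: sum(options[name]))


if __name__ == "__main__":
    options = {
        "divert into a wall (occupant is a head of state)": [10.0],
        "stay on course (five schoolchildren)": [1.0] * 5,
    }
    print(weighted_utilitarian_choice(options))
    # -> stay on course (five schoolchildren)
    # Give the occupant a 10x weight and the rule now sacrifices the
    # five children instead; change the weights and the answer flips.
```

That a single invented number can flip the outcome is precisely what makes the question of criteria so troubling.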

These are all questions that deserve deep consideration.
