Can a self-driving car surpass our ethical standards?

8 February 2017 at 15:17

The future of The Jetsons is near. Autonomous, semi-autonomous and completely self-driving cars are leaving sci-fi and even the prototype floor, venturing out onto real roads to correct for human error. Yes, this means it will soon be safe for us to text, stream and even nap while driving. But as long as these cars are transporting humans, plenty of human unpredictability remains to make the job challenging.

Autonomous flying has been around for decades. One or two (hopefully alert) pilots still handle takeoffs and landings, but there are frankly far fewer planes in the air than cars on the road, and they follow straight, tightly guided paths to a limited set of destinations. Then think of how much more preparation goes into accounting for the humans boarding a plane than for those climbing into a land vehicle.

With roughly 90 percent of car accidents attributed to human error, there's little doubt that taking humans out of the loop will make roads safer.

Still, a self-driving car in the wild, on highways and small-town avenues alike, faces a far vaster set of conditions, with far more humans on the ground than in the sky. And with all those extra humans, the human-free driving experience raises a lot of ethical questions.

Here's just one question among countless: should a car favor its master? If it must choose between killing you and another passenger (and, really, itself) or five pedestrians, who should live and who should die?

A recent study in Science found that while people approve of cars programmed for utilitarian morality in the abstract, they would rather buy cars that are loyal to their own passengers.
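To make that tension concrete, here is a minimal, entirely hypothetical Python sketch. The policy names and scenario numbers are illustrative only and don't reflect how any real vehicle's software is written; they just show where the two attitudes from the study would diverge:

```python
# Hypothetical illustration only: two toy crash-response policies.
# Neither reflects any real autonomous-vehicle software.

def utilitarian_choice(passengers: int, pedestrians: int) -> str:
    """Minimize total deaths, regardless of who is inside the car."""
    if pedestrians > passengers:
        return "swerve (sacrifice passengers)"
    return "stay course"

def passenger_loyal_choice(passengers: int, pedestrians: int) -> str:
    """Always protect the car's own occupants."""
    return "stay course"

# The dilemma from above: two occupants vs. five pedestrians.
print(utilitarian_choice(2, 5))      # -> swerve (sacrifice passengers)
print(passenger_loyal_choice(2, 5))  # -> stay course
```

The study's finding, in these toy terms: people say they want other cars to run the first function, but want their own car to run the second.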

Plus, when two autonomous vehicles crash, who is at fault? The manufacturer, or even the individual programmer? The car itself? The owners, the other passengers, or no one at all? That last possibility seems least likely, yet until something bad happens, the question won't be court-tested. Or perhaps, even before that, lawmakers will set the priorities, assigning greater value to the passengers or to the pedestrians, which brings along its own socioeconomic debates.

Of course, we aren't so sure that artificial intelligence and machine learning are yet at the level of making such moral distinctions. After all, robots think differently than humans do… at least we think so. Then again, it's all in what they are taught, right? In practice, these auto-decisions are far more likely to be made on external factors that never cross the mind of the average human driver: speed, distance and weather conditions, blended with other sensor data and human input, like road construction, accidents and emergency-response proximity plugged into an app like Waze. Ethical gut feelings don't enter into it.
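As a rough sketch of that idea, a planner might score maneuvers on sensor-derived costs rather than anything resembling a gut feeling. The factor names and weights below are invented for illustration and don't come from any real system:

```python
# Illustrative toy cost function over external factors.
# Real planners are vastly more complex; these weights are arbitrary.

def maneuver_cost(speed_kmh: float, obstacle_distance_m: float,
                  rain: bool, construction_zone: bool) -> float:
    """Lower is safer: combine a few sensor-style inputs into one score."""
    cost = speed_kmh / max(obstacle_distance_m, 0.1)  # closer + faster = riskier
    if rain:
        cost *= 1.5   # reduced traction
    if construction_zone:
        cost *= 2.0   # e.g., flagged via crowd data from an app like Waze
    return cost

# Pick the lower-cost option between braking in lane and swerving.
options = {
    "brake": maneuver_cost(60, 25, rain=True, construction_zone=False),
    "swerve": maneuver_cost(60, 8, rain=True, construction_zone=True),
}
print(min(options, key=options.get))  # -> "brake" for these made-up numbers
```

Nothing in that score knows who the obstacle is; it only knows how fast, how far and how slippery, which is exactly the point.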

Want to participate in this debate? The Massachusetts Institute of Technology has a research project called the Moral Machine, which invites the public to judge different machine-learning scenarios and their "acceptable" outcomes. The website has already found its way into our kitchen-table debates.

Already decided? Tweet your thoughts to @TefDigital and @JKRiggins!

And if you want a giggle, don't miss The New Yorker's take, which carries these hypothetical ethical quandaries to the absurd.
