Self-Driving Cars: Kill the Pedestrian or Harm the Driver?

With autonomous, self-driving cars seemingly making daily strides and getting ever closer to reality, moral conundrums are arising along with them. What should your car do if a crowd of people suddenly ran out in front of it? Should the car plow through them, knowing it was their fault, to protect the driver? Or is it the responsibility of the car and its software to crash into a wall, telephone pole, or another car to avoid the unprotected crowd, at the risk of the owner’s safety?

Self-driving cars: what morality will they adhere to, or eschew?

Would you get in a car that is designed to protect strangers instead of you as it pilots you to your destination? Surely you should be protected; hell, it’s your car. But then there is the guilt of surviving something that cost the lives of others. As you can see, there is no easy answer, and if you think there is, you just may be a sociopath (we can smell our own).

The scenarios are endless. What if your car turned onto a street in the middle of a riot and the crowd was trying to pull you out of your car simply because of, let’s call it, skin color? Does the car plow through a group that might beat you to death? There are only seconds, if that, to decide, and the car will simply do as it’s programmed to. But who makes those decisions?

Many believe these are issues that should be decided by the public, but if that’s the case, we’re forever and a day away from commercial driverless vehicles. No consensus will ever be reached.

In a paper published this week in Science, a group of psychologists and computer science engineers examined what people in the United States feel the answer should be, drawing on six separate surveys conducted over the last year.

The findings suggest that people are mixed to a point but generally answered altruistically. Let’s face it, though: taking a survey is a bit different from actually being in harm’s way yourself.

One of those scenarios had 10 pedestrians “appearing out of nowhere,” and those surveyed were asked whether the car should swerve to save the group even though doing so would kill the occupant, or “driver.”

Of the 182 people asked about that scenario, 76% said the car should sacrifice the occupant, even though the occupant had done nothing wrong.

That’s cute, but they weren’t the ones in the car, and as the surveys were extended to nearly 2,000 people, a “social dilemma” that was not nearly as clear-cut began to emerge.

And now self-interest comes to the forefront

When asked which car they would purchase, the one that saved them or the one that served “the greater good,” participants, not surprisingly, chose the model that would save them. Or, if that’s overly harsh, they “preferred the self-protective model for themselves.”

“Just as we work through the technical challenges, we need to work through the psychological barriers,” said Iyad Rahwan, associate professor of media arts and sciences at the MIT Media Lab and one of the authors of the findings published this week.

These questions have been posed in some form for centuries, regardless of technology. You have one boat and the tide is coming in on two islands; you can save 100 senior citizens on one island or three kids on the other, but not both. Which is it?

“One missing component has been the empirical component: What do people actually want?” said Dr. Rahwan, who is a computational social scientist.

Well, someone is going to have to figure it out. Frankly, I just don’t want to know what my car has been programmed to do, and people shouldn’t be given the choice. The thing is, we need to make choices with this new technology.

“If you assume that the purpose of A.I. is to replace people, then you will need to teach the car ethics,” said Amitai Etzioni, a sociologist at George Washington University, in an interview with The New York Times. “It should rather be a partnership between the human and the tool, and the person should be the one who provides ethical guidance.”

There is no easy answer. Perhaps the cars should be programmed to flip a coin in situations like this; the computers needed to make these cars drive themselves could certainly be programmed to do so. Is that the best way forward?
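For what it’s worth, the coin flip itself is trivially simple to implement. Here is a minimal sketch in Python; the function name and maneuver labels are invented for illustration and are not any real vehicle’s API:

```python
import random

def choose_maneuver(option_a: str, option_b: str) -> str:
    """Hypothetical tie-breaker: when both outcomes are judged
    equally unavoidable harms, pick one at random (a coin flip)."""
    return option_a if random.random() < 0.5 else option_b

# Illustrative example: two grim options, neither clearly "correct."
decision = choose_maneuver("swerve_into_wall", "continue_toward_crowd")
print(f"Chosen maneuver: {decision}")
```

Whether randomness counts as an ethical policy at all, of course, is exactly the question the surveys leave open.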
