The Ethics of Making and Using Self-Driving Cars
As part of our continued exploration of the morality behind advances in technology, today we take a look at self-driving cars. Some features of these autonomous vehicles have been in use for some time: cars that park themselves, adaptive cruise control, and other driver-assistance features may already be in a car you own or have driven. Fully autonomous vehicles are already on the road, actively tested by major car and tech companies that envision a future where our cars drive us instead of the other way around.
While this all sounds quite cool and very exciting, a recent article points out that the issues at hand are more complicated than simply building cars with the technological capability of getting themselves from point A to point B. In the end, a self-driving car is an advanced robot: a piece of technology that has to make important decisions on the fly and react to rapidly changing situations. Once human passengers, pedestrians, and other drivers enter the mix, those decisions can become matters of life and death.
The idea of a self-driving car starts to get complicated when we are forced to consider what happens when things go wrong. What happens when an accident becomes unavoidable? How should the automated car act? Should it work to minimize loss of life? Protect the occupants of the self-driving car? The occupants of the other car? If an impact would likely kill the occupants of one vehicle but not the other, can the car be trusted to make that decision?
Another article, entitled “How to Help Self-Driving Cars Make Ethical Decisions,” helps shed light on some of these questions while exploring further dilemmas raised by entrusting our safety and our ethical choices to robotic cars.
One scenario explored by those researching and developing an ethical code for self-driving cars involves a possible collision with a group of pedestrians. In this scenario, a self-driving car finds itself on a collision course with ten people on the sidewalk, all of whom would likely be killed by the impact. The alternative is for the car to swerve into a wall, which would save the ten pedestrians but sacrifice the passenger of the driverless car. Researchers have posed the very question we encourage you to answer now: should the car minimize loss of life (killing its one passenger instead of ten people), or should it protect its own passenger?
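To see why researchers talk about “programming” ethics at all, here is a deliberately simplified sketch of how the two competing rules in this scenario might be written down as code. The maneuver names and casualty estimates are illustrative assumptions made up for this post; this is not how any real self-driving system is actually built.

```python
# A hypothetical, toy sketch of the dilemma above. The maneuver names and
# casualty estimates are illustrative assumptions, not real vehicle logic.

from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    name: str
    expected_pedestrian_deaths: float
    expected_occupant_deaths: float

    @property
    def total_expected_deaths(self) -> float:
        return self.expected_pedestrian_deaths + self.expected_occupant_deaths

def minimize_loss_of_life(options: List[Maneuver]) -> Maneuver:
    # Utilitarian rule: pick the maneuver with the fewest expected deaths overall.
    return min(options, key=lambda m: m.total_expected_deaths)

def protect_occupants_first(options: List[Maneuver]) -> Maneuver:
    # Self-protective rule: pick whatever is safest for the car's own passengers,
    # breaking ties by total expected deaths.
    return min(options, key=lambda m: (m.expected_occupant_deaths,
                                       m.total_expected_deaths))

# The scenario from the article: stay the course (ten pedestrians likely die)
# or swerve into a wall (the one passenger likely dies).
options = [
    Maneuver("stay the course", expected_pedestrian_deaths=10, expected_occupant_deaths=0),
    Maneuver("swerve into wall", expected_pedestrian_deaths=0, expected_occupant_deaths=1),
]

print(minimize_loss_of_life(options).name)    # -> "swerve into wall"
print(protect_occupants_first(options).name)  # -> "stay the course"
```

The point of the sketch is that once a rule is written down, the choice is no longer anyone’s in the moment: the same inputs always produce the same answer, which is exactly what makes deciding on the rule so fraught.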
Statistically, cars operated by humans have a higher rate of accidents than autonomous vehicles, largely because of human error. At the same time, if self-driving cars are designed to be so purely altruistic that they would sacrifice their passengers to save more lives, would anyone buy them? Are we caught in a bind in which the only way to avoid more accidents is through self-driving cars, but we won’t be able to sell self-driving cars because they are programmed to potentially sacrifice their passengers?
The list of problematic decisions goes on and on, including whether self-driving cars should act differently when children are on board, whether they should take into account the relative safety of different vehicles and which occupants are more likely to survive a crash, and any number of other very complicated questions. In the end, are ideas like self-sacrifice and general safety better left to human operators, or can they be reduced to mathematical equations and data points that allow cars to be programmed to make these incredibly important decisions?
The main takeaway from the article discussed today is that the research and development of self-driving technology should include public opinion. The discussions we have now will shape the kinds of decisions self-driving cars ultimately make. Start your discussion today with your own family, and look toward the future we will all share.
If you enjoyed your discussion of this article, we recommend another tech ethics piece, “The Challenge of Developing Robot Morality.”