Trolley Problems and Self-Driving Cars Essay


The Limits of Deontology and Utilitarianism in the Trolley Problem

Introduction

The trolley problem is an old moral quandary with, in essence, no right or wrong answer. It is a kind of worst-case scenario in which one must choose the lesser of two evils. For example, a runaway trolley is set to crash into and kill five people, but by throwing a lever one might spare those five while taking the life of one innocent man crossing a connecting set of tracks. Is there a morally right or wrong answer to the question? And how does it apply in the case of self-driving cars? How should an engineer program an autonomous vehicle to respond to such a worst-case scenario? Should the machine be programmed to swerve and take the life of an innocent man on the sidewalk so as to avoid killing five people stopped dead ahead? Or is such a scenario even worth thinking about? The reality is that the trolley problem is most useful as a philosophical tool for identifying the differences between ethical perspectives such as utilitarianism and deontology (Carter). Outside of that exercise, it has little merit. At the end of the day, the engineer of the self-driving car must decide which ethical perspective is guiding him and program the machine accordingly. As Nyholm and Smids point out, beyond the legal ramifications of how an engineer programs a self-driving car, the moral calculus of the trolley problem is too elusive to be settled in advance: it is obviously important to take ethical problems seriously, but "reasoning about probabilities, uncertainties and risk-management vs. intuitive judgments about what are stipulated to be known and fully certain facts" is not something that can be effectively left to a machine guided by pre-programmed data (1). People make the mistake of thinking that logic and reason can simply be encoded through machine learning, forgetting that long before the deontological and utilitarian frameworks there existed the classical theory of virtue ethics, that is, character ethics. It is the argument of this paper that character ethics is the best approach to solving moral dilemmas for human beings, and that leaving morality up to machines is a bad way for anyone to have to live.
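To see why Nyholm and Smids' objection bites, it helps to make the engineer's task concrete. Below is a minimal, purely hypothetical Python sketch of what a pre-programmed utilitarian "accident algorithm" might look like; every name and number in it is invented for illustration and describes no actual vehicle's software.

    # A deliberately naive sketch of a pre-programmed "accident algorithm."
    # Nothing here reflects any real vehicle's code; all names and numbers
    # are hypothetical, invented purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        expected_casualties: float  # the rule only works if this is knowable

    def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
        """Pick the maneuver that minimizes expected casualties.

        This reduces the trolley problem to an arithmetic comparison,
        which is possible only because the casualty counts are stipulated
        to be known and certain in advance.
        """
        return min(options, key=lambda m: m.expected_casualties)

    # The textbook scenario: stay the course (five die) or swerve (one dies).
    options = [
        Maneuver("continue straight", expected_casualties=5.0),
        Maneuver("swerve onto sidewalk", expected_casualties=1.0),
    ]
    print(utilitarian_choice(options).name)  # prints "swerve onto sidewalk"

The sketch exposes the very weakness the paper identifies: the decision is trivial arithmetic once the casualty counts are stipulated as known and certain, but real traffic supplies only probabilities and uncertainties, never settled numbers. That is precisely the gap between the philosopher's thought experiment and the engineer's problem.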

The Self-Driving Car is Worse than a Trolley Problem

The trolley problem is an ethical puzzle. The self-driving car is a dangerous reality already on the road in today's world. As Himmelreich points out, the trolley problem pales in comparison to the ethics of autonomous driving. Essentially, the self-driving car is a deadly object hurtling forward through space and time, and its safety depends upon the strength of the programmer's skills and the efficacy of the technology. Accidents happen all the time. Teslas are notorious for crashing in Autopilot mode. They have been sold as fully self-driving, yet a German court recently found that they are not and that marketing them as such is misleading advertising (Ewing). Does the world need more false assurances and a false sense of safety? Should people really put so much trust in machines? The usual rejoinder is that "planes basically fly themselves these days." But the argument is disingenuous: planes may be flown largely on autopilot, but there are always real pilots in the cockpit who are trained to take the controls should they actually need to. It is when the autopilot function cannot be overridden by the real-life human pilots that bad things happen, as Boeing's share price and reputation after the 737 MAX crashes attest.

People look at the self-driving car and say that it makes life easier: they can sleep on the way to work or read a novel. The reality is that self-driving is a novel technological development that has not yet been perfected (and likely never will be, which is why pilots are still required for air travel). Trusting one's life and the lives of others to a machine programmed by someone on the other side of the world, someone who will never be held accountable should an accident occur, is the height of absurdity. Human beings are capable of reason and possess free will, yet they often act irrationally and seem at times desperate to surrender that free will and make themselves slaves, whether of their passions, of other men, or of machines. From a virtue ethics standpoint, people should be reluctant to hand over their autonomy to a robot, yet with self-driving machines they are asked to do just that. Thus, the self-driving car is a greater moral problem than the theoretical trolley problem. But even the trolley problem is problematic.


…best (Pojman and Fieser). Putting all one's effort into trying to create machines that think, react, and respond like humans, or like the most ideal and perfect human, is simply naïve (Nyholm). Victor Frankenstein tried to create the perfect man and instead created a monster, not because the creature was inherently corrupt but because the creator himself was imperfect and could not give his creation what he himself did not possess. Thus, every programmed machine is only as good as its programmer, and no programmer is beyond reproach because no programmer is perfect.

Conclusion

This paper has explained why self-driving technology is a negative contribution to humanity at large: it takes away from human beings the ability to manage their own destiny and puts their lives in the hands of a machine. Once this is admitted, the trolley problem essentially fades away. It is no longer a problem that engineers must solve, because the autonomous car itself is an unrealistic answer to the dangers and risks inherent in driving. Death on the road is a risk humans accept each time they drive. In life, as on the road, the unexpected can happen. Attempting to program a machine to respond morally to the unexpected is like asking Victor Frankenstein to assemble a beautiful human being from the body parts of old cadavers. It is not realistic. Human beings make moral or immoral decisions, and in worst-case scenarios they must live with the fact that it is sometimes impossible to decide, or even to know, what to do. This is why virtue ethics is the best approach to living. Instead of asking how an action affects the greatest common good, or whether it falls within the realm of one's duty, one need only ask how the action shapes one's own character. Does a particular action make one a better human being, in line with the virtues that define ideal goodness? If so, then it is an action worth taking. Men who would rather hand over their sovereignty to machines and hope that a programmer programmed them correctly are men lacking in character and noble qualities. Too many Tesla wrecks have occurred while drivers assumed they were safely driving on auto…



Works Cited

Carter, Stacy M. "Overdiagnosis, ethics, and trolley problems: why factors other than outcomes matter—an essay by Stacy Carter." BMJ 358 (2017): j3872.

Ewing, Jack. "German Court Says Tesla Self-Driving Claims Are Misleading." The New York Times, 14 July 2020. https://www.nytimes.com/2020/07/14/business/tesla-autopilot-germany.html

Himmelreich, Johannes. "Never mind the trolley: The ethics of autonomous vehicles in mundane situations." Ethical Theory and Moral Practice 21.3 (2018): 669-684.

Marshall, Aarian. "What Can the Trolley Problem Teach Self-Driving Car Engineers?" Wired, 2018. https://www.wired.com/story/trolley-problem-teach-self-driving-car-engineers/

Nyholm, Sven. "The ethics of crashes with self?driving cars: A roadmap, I." Philosophy Compass 13.7 (2018): e12507.

Nyholm, Sven, and Jilles Smids. "The ethics of accident-algorithms for self-driving cars: An applied trolley problem?" Ethical Theory and Moral Practice 19.5 (2016): 1275-1289.

Pojman, Louis P., and James Fieser. Ethics: Discovering Right and Wrong. Cengage, 2012.

Snow, Nancy E. "Neo-Aristotelian Virtue Ethics." The Oxford Handbook of Virtue. Oxford University Press, 2018. 321.
