References:
Jessica Shea Choksey and Christian Wardlaw, May 05, 2021, Levels of Autonomous Driving, Explained, https://www.jdpower.com/cars/shopping-guides/levels-of-autonomous-driving-explained
Johnson, Robert and Adam Cureton, “Kant’s Moral Philosophy”, The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.), https://plato.stanford.edu/archives/fall2022/entries/kant-moral/.
Arfini, S., Spinelli, D. & Chiffi, D. Ethics of Self-driving Cars: A Naturalistic Approach. Minds & Machines 32, 717–734 (2022). https://doi.org/10.1007/s11023-022-09604-y
Hansson, S.O., Belin, MÅ. & Lundgren, B. Self-Driving Vehicles—an Ethical Overview. Philos. Technol. 34, 1383–1408 (2021). https://doi.org/10.1007/s13347-021-00464-5
Emerging Technology, October 22, 2015, Why Self-Driving Cars Must Be Programmed to Kill, https://www.technologyreview.com/2015/10/22/165469/why-self-driving-cars-must-be-programmed-to-kill/
Wm. David Solomon, “Double Effect,” The Encyclopedia of Ethics, Lawrence C. Becker, editor, https://sites.saintmarys.edu/~incandel/doubleeffect.html
Uzair M. Who Is Liable When a Driverless Car Crashes? World Electric Vehicle Journal. 2021; 12(2):62. https://doi.org/10.3390/wevj12020062
Title: The Ethical Dilemma of Self-Driving Cars: Is Algorithmic Killing Acceptable?
Course: GNST 3900 Ethics in Business
Table of Contents
Review of the Literature
Introduction and Overview
The rapid development of technology has paved the way for the emergence of self-driving cars, which offer convenience and the potential to reduce traffic accidents. However, with these new advancements, ethical dilemmas have come to the forefront of discussion. One such dilemma revolves around the acceptability of algorithms causing harm or even death to individuals in order to prevent accidents involving other parties. This research paper focuses on the ethical implications of level 5 autonomous driving, where human control is no longer required. Although this technology is not yet widely used, it is important to address these ethical concerns before its widespread adoption.
Relevance of the Topic Today:
While level 5 full self-driving technology remains a future application, discussing the ethical dilemmas associated with it is crucial in shaping the rules, code, and legal regulations surrounding this technology before it becomes commonplace.
Universal Significance of Full Driving Automation:
Self-driving cars eliminate common human errors such as drowsiness, distraction, negligence, and impaired driving. As autonomous driving becomes more prevalent, vehicles can communicate with one another and with road infrastructure, forming a network that further reduces the likelihood of accidents.
Importance as a Contemporary Issue:
While fully autonomous vehicles offer numerous benefits, we must carefully consider the potential consequences and the social and ethical implications of algorithmic decision-making in extreme cases. Any principles put forward must align with universal human moral intuition: prioritizing overall utility should not produce outcomes that devalue human life or distort moral and legal systems.
Discussions and Participants:
The issue is being discussed by legal regulators, car developers, and future car owners who recognize the transformative nature of full driving automation. These stakeholders aim to strike a balance that serves as the ethical foundation for the widespread adoption of new technologies.
Opponents: Algorithmic Killing is Unacceptable:
Opponents argue that the development of self-driving cars should be halted due to the ethically unacceptable consequences they present. They contend that when a self-driving car causes harm, it is not an accident but a deliberate act of algorithmic killing. The classic Trolley Problem is often cited to illustrate this viewpoint. Opponents believe that sacrificing innocent lives for the greater good distorts the moral and legal systems of society and should not be pursued.
Proponents: Algorithmic Killing is Acceptable:
Proponents of self-driving technology invoke the Doctrine of Double Effect as an ethically acceptable justification for self-driving crashes. They argue that if an action produces two effects, one intended and the other an unintended side effect, it can be morally permissible under certain conditions. Proponents emphasize that the algorithms used in autonomous driving technology aim to reduce traffic fatalities overall, and any resulting harm to individuals is an unintended consequence. They propose building a new framework for assigning responsibility in cases involving self-driving cars, moving beyond traditional blame-oriented approaches.
The potential development of autonomous vehicles, especially level 5 autonomous vehicles, has brought ethical dilemmas to the forefront. This paper seeks to explore the ethical and practical challenges associated with algorithmic killing in self-driving cars. The focus will be on the programming techniques employed to address scenarios akin to the trolley problem and the responsibility for decision-making in these situations. Opponents argue that algorithmic killing violates fundamental moral intuition and presents a liability gap. Proponents, on the other hand, propose the Doctrine of Double Effect as a means to justify the moral complexities of algorithmic killing and advocate for the establishment of a new framework for attributing responsibility.
As autonomous driving technology continues to advance, it holds the promise of significantly reducing the number of deaths and injuries caused by car accidents, ultimately benefiting humanity as a whole. However, it is crucial to approach the development of this technology with caution, ensuring that ethical considerations are thoroughly addressed and a robust regulatory system is established before its widespread adoption.
The trolley problem serves as a thought experiment that highlights the ethical challenges faced by self-driving cars. In scenarios where a collision is imminent, the programming of autonomous vehicles must determine the least harmful course of action. This decision-making process raises questions about how algorithms should prioritize the lives of different individuals, whether to prioritize the safety of the vehicle occupants, pedestrians, or other drivers.
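The "least harmful course of action" described above can be sketched as a toy cost comparison. The maneuver names and harm scores below are entirely hypothetical inventions for illustration; the sketch only shows that any such ranking encodes an ethical judgment, made in advance by a programmer, about whose safety is prioritized.

```python
# Illustrative sketch only: a toy "least-harm" selector. The maneuver
# names and expected-harm scores are hypothetical; real autonomous
# systems do not score lives this way. The point is that whoever assigns
# these numbers has already made the ethical choice.

def least_harm_maneuver(options):
    """Return the maneuver with the lowest expected-harm score."""
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"name": "brake_straight", "expected_harm": 3.0},  # endangers pedestrians ahead
    {"name": "swerve_left",    "expected_harm": 1.0},  # endangers the occupant
    {"name": "swerve_right",   "expected_harm": 2.0},  # endangers another driver
]

choice = least_harm_maneuver(options)
print(choice["name"])  # the algorithm "decides" who bears the risk
```

Notice that the algorithm itself is trivial; the contested question is how the `expected_harm` values are assigned, which is precisely where concerns about prioritizing occupants, pedestrians, or other drivers enter.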
Opponents argue that algorithmic killing is fundamentally unacceptable. They contend that when a self-driving car causes harm, it is not an accident but an intentional act perpetrated by the algorithm. They raise concerns about the inherent biases that may be present in the design and programming of self-driving cars, highlighting the potential for unjust outcomes based on factors such as age, race, or gender. Additionally, opponents argue that the liability for accidents involving self-driving cars becomes ambiguous, as there is no clear party to hold accountable—the car owner, the manufacturer, or the algorithm itself. This misalignment with traditional moral intuitions and the existing legal framework presents significant challenges.
Proponents, however, propose that algorithmic killing can be ethically justified using the Doctrine of Double Effect. This moral theory posits that an action that produces both intended and unintended effects may be justified if certain conditions are met. They argue that autonomous vehicles are not intentionally designed to harm individuals but rather to optimize overall safety. While harm to individuals may occur as an unintended consequence, it is a predictable outcome in certain scenarios. Proponents emphasize that the intentions behind autonomous vehicles are rooted in reducing traffic fatalities and improving the well-being of society as a whole.
In terms of responsibility, proponents suggest moving beyond traditional notions of blame and punishment. They propose alternative approaches, such as a no-fault insurance system or a victims' compensation fund, to which all owners of self-driving cars contribute regularly and from which victims are compensated in the event of an accident. This shift in liability attribution aims to address the unique challenges posed by autonomous vehicles and ensure a fair and equitable system for all stakeholders.
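The pooled-fund idea can be illustrated with simple arithmetic. The `fund_balance` helper and all figures below are invented for the example; actual contribution levels and payouts would be set by regulators and insurers.

```python
# Hypothetical illustration of the no-fault compensation fund described
# above: every owner pays a flat periodic contribution, and victims are
# compensated from the pooled fund regardless of blame. All numbers are
# invented for the example.

def fund_balance(num_owners, contribution_per_period, periods, payouts):
    """Pooled contributions minus total compensation paid out."""
    collected = num_owners * contribution_per_period * periods
    return collected - sum(payouts)

# 10,000 owners paying $50 per quarter for 4 quarters, two claims paid
balance = fund_balance(10_000, 50, 4, [250_000, 400_000])
print(balance)  # 2,000,000 collected - 650,000 paid = 1,350,000
```

The design point is that compensation is decoupled from fault-finding: the fund pays victims whether or not anyone (owner, manufacturer, or algorithm) is ever assigned blame.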
In conclusion, the development of autonomous vehicles and the ethical dilemmas surrounding algorithmic killing necessitate careful examination and thoughtful consideration. Balancing the potential benefits of improved road safety and efficiency with the moral implications of algorithmic decision-making is crucial. While opponents argue that algorithmic killing violates basic moral intuition and presents challenges in assigning responsibility, proponents propose the Doctrine of Double Effect as a means of navigating these dilemmas. Moving forward, it is essential to establish comprehensive regulations and frameworks that prioritize human well-being and address the ethical complexities of autonomous driving technology.