Safely participating in traffic is far more complex than is generally appreciated. It requires a broad set of skills and actions, some of which are easier to automate than others. Cruise-control systems, which maintain a constant velocity even when the route contains varying inclines, have been available for decades. As technology has advanced, so have the automated systems in vehicles; adaptive cruise control, for instance, maintains a safe following distance in addition to a constant velocity. Lane-keeping systems, which use cameras, sensors and steering control to keep a vehicle centered in its lane, are a fact of modern driving \citep{Shladover2016}. With technology still advancing, it is merely a matter of time before fully autonomous vehicles are widely available to the public.
Most experts agree that the introduction of fully autonomous vehicles will lower the number of traffic accidents. Based on current evidence, the number of traffic-related deaths will go down significantly as more and more self-driving cars enter the market \citep{Gogoll2016}. Some suggest that autonomous vehicles will decrease traffic accidents by $90$\% \citep{Gao2014}. Considering that in the European Union in 2016, $26500$ people died in traffic accidents, while for every death an estimated $4$ people were permanently disabled, $8$ suffered serious injuries and another $50$ suffered minor injuries \citep{EuropeanAccidents}, the potential impact on traffic safety is enormous. Moreover, regarding the environment, the introduction of autonomous vehicles could reduce fuel usage and pollution through energy-efficient driving techniques, such as autonomous vehicles positioning themselves closely behind other vehicles \citep{Gogoll2016}.
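A rough extrapolation of these figures, assuming the cited ratios apply uniformly to the 2016 fatality count, illustrates the annual scale involved: roughly $26500 \times 4 = 106000$ people permanently disabled, $26500 \times 8 = 212000$ people seriously injured, and $26500 \times 50 = 1325000$ people with minor injuries, on top of the fatalities themselves. Even a partial realization of the projected $90$\% reduction would therefore affect hundreds of thousands of people every year.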
It would seem that the introduction of autonomous vehicles is inevitable \citep{Shladover2016}; however, this poses a difficult ethical issue. Even if the number of accidents decreases by $90$\%, as estimated by, among others, \citet{Gao2014}, accidents will still occur, which means that autonomous vehicles must be programmed to respond to situations of necessity in which causing damage, or failing to avoid it, is inevitable. Consequently, autonomous vehicles must have a “moral exception” capability, which translates to pre-set algorithms regulating the modes of operation in emergency situations. Assuming that the agent responsible for the algorithms containing the “moral exception” is the manufacturer, or the user if the user is able to adapt the programming in some way, then damage caused by the vehicle can no longer be assessed and settled by stating that it was purely an instinctive and uncontrollable reaction \citep{Coca2017}. Moreover, human reaction times are slow, and it takes relatively long for humans to switch their focus from one task to another, so handing over control to the human passengers will often not be a good option \citep{Nyholm2016}.
Hence, the autonomous vehicle will need to be programmed to handle situations in which a collision is unavoidable. This automatically gives rise to the following question:
\begin{description}
    \centering
    \item \textit{“If an autonomous vehicle does cause an accident, then who is morally responsible?”}
    % Who should determine the ethical settings in autonomous vehicles & who is morally responsible when an autonomous vehicle behaves as programmed, but harm is done?
\end{description}
Over the past years, a multitude of possibly responsible entities have been listed, including the vehicle manufacturer, the manufacturer of a component used in the autonomous system, the software engineer who developed the code for the system, and the road designer in the case of an intelligent road system that helps control the vehicle \citep{Marchant2012}. In addition to these, for varying reasons, the user of an autonomous vehicle is also mentioned as a responsible entity \citep{Hevelke2014,Gogoll2016}. Finally, another party often mentioned in the literature is the government, since governments can impose legal regulations regarding the programming and testing of autonomous vehicles \citep{Ilkova2017,Hevelke2014}. For this paper, the entities to be evaluated are divided into three groups.
In \autoref{sec:users}, the moral responsibility of the users of autonomous vehicles is considered. Second, the potentially responsible entities indicated by \citet{Marchant2012} are grouped together in \autoref{sec:manufacturers}, which regards the manufacturers of autonomous vehicles. Finally, \autoref{sec:government} focuses on the legislators and their responsibilities.
%-------------------------------------------------%
\section{Users of Autonomous Vehicles}
\label{sec:users}
%-------------------------------------------------%
In conventional vehicle accidents, the cause can generally be traced back to one or a combination of human error, vehicle malfunction and unavoidable natural conditions (such as weather, road conditions, or an animal suddenly crossing the vehicle's path). Responsibility is usually allocated to either the driver or the vehicle manufacturer \citep{Marchant2012}. In autonomous vehicles, however, the human is no longer necessarily the entity responding to potentially hazardous situations, which immediately poses a problem. Aristotle argued that in order to be held responsible, two conditions should be met.
First, the acting entity needs to be in control of what he or she is doing; if one lacks control, one cannot be held responsible. Second, the acting entity needs to have knowledge of what he or she is doing \citep{Coeckelbergh2017}.
Translating this to the case of autonomous vehicles, it is only fair to question whether the person in the vehicle is actually in control. Similarly, knowledge encompasses more than, for instance, knowing how to operate a car; it also includes knowledge of the entire situation in which one finds oneself, which is difficult to maintain if the passenger has been focused on other activities, such as work. A way to keep the ‘driver’ in a position of both control and knowledge of the situation would be to have the user constantly watch the road and intervene when necessary to avoid accidents. The liability of the driver in the case of an accident would then be based on his or her failure to pay attention and intervene \citep{Hevelke2014}. This, however, raises other issues, especially regarding the potential benefits, other than safety, that autonomous vehicles might bring. For instance, it would be harder to achieve a ride-sharing model in which multiple users share one vehicle.
If a human must be physically present in the car, holding both control over the vehicle and knowledge of the entire situation, it would not be possible to let the autonomous vehicle drive itself ‘empty’, without a human on board, to pick up another member of the ride-sharing pool after the first user has arrived at his or her destination, which would otherwise increase utility and efficiency. Moreover, giving users a duty to intervene means that certain groups in society could not benefit from the introduction of self-driving cars; for example, it would not be possible to give the elderly and children more mobility without them remaining dependent on other members of society. Besides negating these potential benefits, another issue with the aforementioned strategy can be derived from the aviation industry, where high levels of automation are already common.
There, automation has reduced the pilot's task load to a point where vigilance is negatively affected and boredom sets in \citep{Cummings2013}. The same issue can occur in other automated vehicles, such as cars. This leads back to the starting point, where a ‘driver’ could argue that he or she lacks knowledge of the entire situation simply because of boredom. One suggested way to overcome this problem is to transform the vehicle into a “moral proxy”, in which case the ‘driver’ remains the moral agent \citep{Gogoll2016}.
This can be achieved by implementing a ‘Personal Ethics Setting’ (PES) in the vehicle, so that the users can decide according to which ethical theory the vehicle should act when it finds itself in a situation in which a collision is unavoidable. One user might set the vehicle to value his or her life over all others, another might want to minimize his or her own legal liability and costs, and yet another might set the vehicle to value all lives equally and minimize overall harm. A multitude of different ethical settings is conceivable; however, this immediately raises the question of how far preference settings may go in deciding whom to harm in situations where harm is unavoidable \citep{Lin2014}.
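To make the idea of a PES more concrete, the sketch below models it as a simple configuration choice. The setting names, the harm and liability scores, and the decision function are purely hypothetical illustrations introduced here; they are not taken from any proposal in the cited literature.
\begin{verbatim}
from dataclasses import dataclass
from enum import Enum
from typing import List

class EthicsSetting(Enum):
    """Hypothetical Personal Ethics Settings a user could select."""
    SELF_PROTECTIVE = "value the occupants over all others"
    MINIMIZE_LIABILITY = "minimize the user's legal liability and costs"
    UTILITARIAN = "value all lives equally, minimize overall harm"

@dataclass
class Manoeuvre:
    """One possible action in an unavoidable-collision scenario."""
    occupant_harm: float   # expected harm to the vehicle's occupants
    external_harm: float   # expected harm to other road users
    liability: float       # expected legal liability for the user

def choose(options: List[Manoeuvre], setting: EthicsSetting) -> Manoeuvre:
    """Pick the manoeuvre that the chosen PES ranks best (illustrative)."""
    if setting is EthicsSetting.SELF_PROTECTIVE:
        return min(options, key=lambda m: m.occupant_harm)
    if setting is EthicsSetting.MINIMIZE_LIABILITY:
        return min(options, key=lambda m: m.liability)
    # UTILITARIAN: minimize total harm, regardless of who is harmed
    return min(options, key=lambda m: m.occupant_harm + m.external_harm)

# Example: braking that risks the occupants versus a swerve that spares
# them but endangers a pedestrian.
options = [Manoeuvre(occupant_harm=2, external_harm=5, liability=3),
           Manoeuvre(occupant_harm=6, external_harm=0, liability=1)]
print(choose(options, EthicsSetting.SELF_PROTECTIVE))  # -> first option
print(choose(options, EthicsSetting.UTILITARIAN))      # -> second option
\end{verbatim}
The point of the sketch is only that each setting induces a different ranking over the same set of manoeuvres, which is precisely what makes the choice of setting morally significant.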
Moreover, self-interest has been identified as a powerful motive for human action \citep{Cudd2017,Kish2013}, and this could be reflected in users' PES. That persons are primarily self-interested, and that this could show in their PES, is also affirmed by six Amazon Mechanical Turk studies on autonomous vehicle ethics published by \citet{Bonnefon2015}. The questions in the first two studies were fairly general: should the vehicle protect the driver at all costs, or should it minimize the casualties on the road, thereby sacrificing its own passengers?
In both studies, up to $76$\% of the participants thought it morally correct for an autonomous vehicle to sacrifice one passenger rather than kill ten pedestrians, i.e., a utilitarian approach.
In the next four studies, however, a more personal perspective was taken: the passengers were now the participant and his or her family members. The main conclusion of these four studies was that, even though participants still agreed that utilitarian autonomous vehicles were the most moral, they preferred a self-protective ethical model for themselves.
Consequently, the studies find that regulation of autonomous vehicles may be necessary, yet at the same time counterproductive; according to \citet{Bonnefon2015}, regulation may nevertheless provide a solution to this problem.
%-------------------------------------------------%
\section{Manufacturers}
\label{sec:manufacturers}
%-------------------------------------------------%
In a real-world accident today, a human driver usually has neither the time nor the information needed to make the most ethical, or least harmful, split-second decision. The programmer, however, makes these life-and-death decisions without any truly urgent time constraint and therefore incurs the responsibility of making better decisions than human drivers reacting reflexively in surprise situations \citep{Lin2015}. As stated in \autoref{sec:users}, vehicle manufacturers are often already assigned responsibility when harm is done \citep{Marchant2012}.
The moral algorithms that manufacturers program into autonomous vehicles, however, could create a social dilemma. Although people tend to agree that everyone would be better off if manufacturers programmed the vehicles in a utilitarian way (minimizing the total number of casualties on the road), these same people have a personal motivation to use an autonomous vehicle that will protect them at all costs \citep{Bonnefon2015}. As a possible consequence, allowing autonomous vehicles onto the market in which a PES can be chosen on a binary scale, utilitarian or self-interested, will presumably lead to a prisoner's dilemma \citep{Gogoll2016}. In the classical prisoner's dilemma, as shown in \autoref{fig:prisoners_dilemma}, two players separately choose to either cooperate or defect.
Both players know they can maximize the social good by cooperating; however, each player individually can obtain a higher payoff by defecting while the other cooperates. Anticipating this line of thought, each player will defect, leading to the most unfavorable outcome of $(1, 1)$ in the lower-right quadrant. It should be noted that the prisoner's dilemma depicted in \autoref{fig:prisoners_dilemma} is a great simplification of the real-world automated-vehicle dilemma, yet the principle remains the same. Suppose vehicle manufacturers produce vehicles with, in this case, two ethical settings, one at each end of the spectrum between utilitarian and fully self-interested. If every user chooses the utilitarian setting, safety is maximized; but given that everyone else chooses the utilitarian setting, a single individual is better off choosing the self-interested setting. Following this train of thought, every individual ends up setting the vehicle to the self-interested setting, leading to the worst of the possible outcomes \citep{Gogoll2016}.
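To make the dominance argument explicit, assume, purely for illustration, a standard prisoner's-dilemma ordering of payoffs rather than values taken from \citet{Gogoll2016}: mutual cooperation (both utilitarian, UT) yields $R = 3$ each, defecting against a cooperator (self-interested, SI) yields the temptation payoff $T = 4$, cooperating against a defector yields the sucker payoff $S = 0$, and mutual defection yields $P = 1$ each. Writing $u(s_1, s_2)$ for the payoff of a user choosing setting $s_1$ against a user choosing $s_2$, the self-interested setting pays more whatever the other user does:
\[
u(\mathrm{SI}, \mathrm{UT}) = T = 4 \;>\; R = 3 = u(\mathrm{UT}, \mathrm{UT}),
\qquad
u(\mathrm{SI}, \mathrm{SI}) = P = 1 \;>\; S = 0 = u(\mathrm{UT}, \mathrm{SI}),
\]
so the self-interested setting strictly dominates, and mutual defection with payoff $(1, 1)$ is the unique equilibrium, even though mutual cooperation with $(3, 3)$ would leave both users better off.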
\begin{figure}[h!]
    \centering
    \includegraphics[width=0.7\columnwidth]{prisoners_dilemma.png}
    \caption{Prisoner's Dilemma.}
    \label{fig:prisoners_dilemma}
\end{figure}
The aforementioned leads to an ethical dilemma for car manufacturers. Since these manufacturers face fierce competition, even though they program the autonomous vehicle, they effectively have no choice in the ethical setting: if vehicle users want a vehicle with the self-interested setting available, manufacturers will have no option but to produce vehicles with that setting \citep{Gogoll2016}. If one then still insists that the manufacturer ultimately determined the ethical settings, and is thus morally responsible when an autonomous vehicle behaves as programmed but harm is done, then from a liability perspective this will deter manufacturers from developing autonomous vehicles in the first place \citep{Marchant2012}. Moreover, if manufacturers wanted to transfer some of these issues back to the users by means of a “strict liability” system, meaning that the user took the risk of using the vehicle and will therefore be held personally responsible for any accidents it causes \citep{Hevelke2014}, this would essentially bring us back to the discussion in \autoref{sec:users}.
% The studies show that, in the end, few people would buy and ride in utilitarian autonomous vehicles, though they would at the same time like others to do so \citep{Bonnefon2015}.
%-------------------------------------------------%
\section{Government}
\label{sec:government}
%-------------------------------------------------%
% 1. First paragraph: why this chapter (or group).
Why, then, should we look beyond the users and manufacturers of autonomous vehicles when assigning responsibility for the ethical settings? The previous two sections argued that perhaps neither of these parties should be responsible for determining the ethical settings in these vehicles. Given the principles of justice stated by \citet{Rawls1999}, the government does not concern itself with philosophical and religious doctrine, but regulates individuals' pursuit of their moral and spiritual interests in accordance with principles to which they themselves would agree in an initial situation of equality. By exercising its powers in this way, the government acts as the citizens' agent and satisfies the demands of their public conception of justice; its duty is limited to underwriting the conditions of equal moral and religious liberty \citep{Rawls1999}. Moreover, the issue of determining an ethical setting calls for an independent party as orchestrator, one that is largely free of conflicts such as economic competition and self-interest, to achieve a moral equilibrium.
% 2. Arguments pro
Contractarianism, a moral theory derived from the Hobbesian line of social contract thought, holds that persons are primarily self-interested, and that a rational assessment of the best strategy for maximizing their self-interest will lead them to act morally (where the moral norms are determined by the maximization of joint interest) and to consent to governmental authority \citep{Cudd2017}. This implies that even though persons are principally self-interested, which was also established in \autoref{sec:users} by the studies of \citet{Bonnefon2015}, they still have reason to consent to the regulations set by the government.
% 3. Arguments con
A liberal, however, might not be impressed by the advantages of government-regulated ethical settings. He might hold that the government is nevertheless not justified in restricting the choices of reasonable people \citep{Gogoll2016}. More generally, libertarianism is a political philosophy that affirms the rights of individuals to liberty, to acquire, keep, and exchange their holdings, and that considers the protection of individual rights the primary role of government \citep{Vallentyne2014}. With the government determining the ethical settings of an autonomous vehicle, the liberal right to personally decide on that setting is removed.
% This is also what one does when voting.
% 4. Should we look to this group when things go right but someone is killed?
% Shared/collective responsibility
The notion of collective responsibility, like that of personal responsibility and shared responsibility, refers in most contexts both to the causal responsibility of moral agents for harm in the world and to the blameworthiness we ascribe to them for having caused such harm. Hence, like its two more purely individualistic counterparts, it is almost always a notion of moral, rather than purely causal, responsibility. Unlike those counterparts, however, it does not associate causal responsibility or blameworthiness with discrete individuals, nor does it locate the source of moral responsibility in the free will of individual moral agents. Instead, it associates both causal responsibility and blameworthiness with groups, and locates the source of moral responsibility in the collective actions taken by these groups understood as collectives \citep{Smiley2017}.
% 5. Mini conclusion