
Improving the public’s perception of autonomous vehicles by communicating the consistency of autonomous vehicle algorithms

Author(s): Walker, Walker, Turpin, Muda, Trick, Fugelsang, Białek

Slidedeck Presentation: 4A Walker

Abstract:

Background:

Autonomous vehicles (AVs) are projected to reduce the frequency and severity of motor vehicle accidents; indeed, many pilot projects have shown that AVs are involved in significantly fewer collisions per million miles than human-piloted vehicles. However, people remain averse to the presence of AVs on roads, with early work suggesting that AVs must be five times safer than human-piloted vehicles to be considered acceptable.

Aims:

The present study assesses people's aversion to AVs in the context of moral dilemmas. Specifically, we examine the role that perceptions of predictability play in people's judgments of AVs versus human drivers, and whether explaining the consistency of AV algorithms reduces AV aversion.

Methods:

Participants were presented with moral dilemmas in which a vehicle (piloted by an AV or a human) was driving down a hill when its brakes failed. If no action was taken, five pedestrians in a crosswalk would be hit; alternatively, the vehicle could turn and hit a stranger and their parked car. Participants were told the pilot's choice and then judged the pilot on several dimensions (e.g., predictability, morality, blame). In Study 1 (N = 1,139), some participants additionally received an explanation highlighting the consistency of AV algorithms. In Study 2 (N = 1,533), a positive outcome was added to the set of possible outcomes. In Study 3 (N = 473), control of the vehicle was transferred from the AV to the human pilot (or vice versa) before an action was taken.

Results:

Studies 1–3 provide evidence of AV aversion: despite performing the same actions, AVs were judged as less moral and less acceptable, and were associated with more blame and harm, than human drivers. Importantly, interactions in Studies 1 and 2 revealed that this aversion was reduced when the consistency of AV algorithms was highlighted. Furthermore, in Study 3, we observed an effect of takeover such that outcomes involving a change in control were viewed as less predictable and judged more negatively, regardless of who was in control after the switch.

Discussion:

Explaining the consistency of AV algorithms increased the perceived predictability of AVs and reduced AV aversion, such that AVs were judged similarly to human drivers. Thus, interventions that enhance the perceived predictability of AVs represent a promising avenue for mitigating AV aversion. Additionally, the wholesale negative reaction to takeovers in Study 3 reveals another source of AV aversion, perhaps because it is unclear whom to blame or what would have happened had control not shifted. Subsequent research aims to investigate whether reducing other sources of ambiguity could further reduce AV aversion, for example by outlining the statistical chances of the different outcomes when an AV versus a human driver is in control.

Conclusions:

Although past work shows that AVs are involved in fewer collisions than human-piloted vehicles, people remain averse to their presence on roads. Using moral dilemmas, we demonstrate that interventions highlighting the predictable nature of AV algorithms reduce this aversion, suggesting a promising strategy for increasing the public's trust in these technologies and reducing barriers to their success and eventual mass adoption.