Slow response times undermine trust in algorithmic (but not human) predictions

Emir Efendic*, Philippe P. F. M. Van de Calseyde, Anthony M. Evans

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one factor that affects people's trust in algorithmic predictions: response time. In seven studies (total N = 1,928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult, and observing effort is therefore positively correlated with the perceived quality of predictions. For algorithms, prediction tasks are seen as easy, and effort is therefore uncorrelated with the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people's trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.

Original language: English
Pages (from-to): 103-114
Number of pages: 12
Journal: Organizational Behavior and Human Decision Processes
Volume: 157
DOIs
Publication status: Published - Mar 2020

Keywords

  • Response time
  • Judgment and decision making
  • Prediction
  • Algorithm aversion
  • Human-computer interaction
  • Decision time
  • People
  • Judgment
