TY - JOUR
T1 - Design Implications for Explanations: A Case Study on Supporting Reflective Assessment of Potentially Misleading Videos
AU - Inel, Oana
AU - Duricic, Tomislav
AU - Kaur, Harmanpreet
AU - Lex, Elisabeth
AU - Tintarev, Nava
N1 - Funding Information:
Delft University of Technology funded the user study that we conducted for this manuscript. Delft University of Technology is also a Frontiers Institutional member. This work is partially supported by the Delft Design for Values Institute, the H2020 project TRUSTS (GA: 871481) and the “DDAI” COMET Module within the COMET–Competence Centers for Excellent Technologies Programme, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry for Digital and Economic Affairs (bmdw), the Austrian Research Promotion Agency (FFG), the province of Styria (SFG) and partners from
Publisher Copyright:
© 2021 Inel, Duricic, Kaur, Lex and Tintarev.
PY - 2021
Y1 - 2021
N2 - Online videos have become a prevalent means for people to acquire information. Videos, however, are often polarized, misleading, or cover topics on which people hold different, contradictory views. In this work, we introduce natural language explanations to stimulate more deliberate reasoning about videos and raise users' awareness of potentially deceiving or biased information. With these explanations, we aim to support users in actively deciding on and reflecting on the usefulness of the videos. We generate the explanations through an end-to-end pipeline that extracts reflection triggers, so that users receive additional information about the video based on its source, covered topics, communicated emotions, and sentiment. In a between-subjects user study, we examine the effect of showing the explanations for videos on three controversial topics. In addition, we assess the users' alignment with the video's message and the strength of their belief about the topic. Our results indicate that respondents' alignment with the video's message is critical for evaluating the video's usefulness. Overall, the explanations were found to be useful and of high quality. While the explanations did not influence the perceived usefulness of the videos compared to only seeing the video, people with an extreme negative alignment with a video's message perceived it as less useful (with or without explanations) and felt more confident in their assessment. We relate our findings to cognitive dissonance, since users seem to be less receptive to explanations when the video's message strongly challenges their beliefs. Given these findings, we provide a set of design implications for explanations, grounded in theories on reducing cognitive dissonance, in the context of raising awareness of online deception.
AB - Online videos have become a prevalent means for people to acquire information. Videos, however, are often polarized, misleading, or cover topics on which people hold different, contradictory views. In this work, we introduce natural language explanations to stimulate more deliberate reasoning about videos and raise users' awareness of potentially deceiving or biased information. With these explanations, we aim to support users in actively deciding on and reflecting on the usefulness of the videos. We generate the explanations through an end-to-end pipeline that extracts reflection triggers, so that users receive additional information about the video based on its source, covered topics, communicated emotions, and sentiment. In a between-subjects user study, we examine the effect of showing the explanations for videos on three controversial topics. In addition, we assess the users' alignment with the video's message and the strength of their belief about the topic. Our results indicate that respondents' alignment with the video's message is critical for evaluating the video's usefulness. Overall, the explanations were found to be useful and of high quality. While the explanations did not influence the perceived usefulness of the videos compared to only seeing the video, people with an extreme negative alignment with a video's message perceived it as less useful (with or without explanations) and felt more confident in their assessment. We relate our findings to cognitive dissonance, since users seem to be less receptive to explanations when the video's message strongly challenges their beliefs. Given these findings, we provide a set of design implications for explanations, grounded in theories on reducing cognitive dissonance, in the context of raising awareness of online deception.
KW - reflective assessment
KW - explanations and justifications
KW - reflection triggers
KW - online videos
KW - controversial topics
KW - online video deception
KW - information
KW - judgment
U2 - 10.3389/frai.2021.712072
DO - 10.3389/frai.2021.712072
M3 - Article
C2 - 34651121
VL - 4
JO - Frontiers in Artificial Intelligence
JF - Frontiers in Artificial Intelligence
M1 - 712072
ER -