Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media

Thales Bertaglia*, Stefan Huber, Catalina Goanta, Gerasimos Spanakis, Adriana Iamnitchi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic


Abstract

Regulatory bodies worldwide are intensifying their efforts to ensure transparency in influencer marketing on social media through instruments like the Unfair Commercial Practices Directive (UCPD) in the European Union or Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations has proven highly problematic due to the sheer scale of the influencer market. The task of automatically detecting sponsored content aims to enable the monitoring and enforcement of such regulations at scale. Current research primarily frames this problem as a machine learning task, focusing on developing models that achieve high classification performance in detecting ads. These models rely on human data annotation to provide ground-truth labels. However, agreement between annotators is often low, leading to inconsistent labels that hinder the reliability of models. To improve annotation accuracy and, thus, the detection of sponsored content, we propose using ChatGPT to augment the annotation process with phrases identified as relevant features and brief explanations. Our experiments show that this approach consistently improves inter-annotator agreement and annotation accuracy. Additionally, our survey of user experience in the annotation task indicates that the explanations improve the annotators’ confidence and streamline the process. Our proposed methods can ultimately lead to more transparency and alignment with regulatory requirements in sponsored content detection.
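The abstract does not include code, but a minimal sketch may help illustrate the kind of pipeline it describes: prompting an LLM to extract sponsorship-related phrases and a one-sentence explanation that is shown to annotators alongside each post. The prompt wording, the gpt-3.5-turbo model choice, the explain_post helper, and the example caption below are all assumptions for illustration, not details taken from the paper; the sketch assumes the OpenAI Python client (v1 style) with an API key in the environment.

```python
# Illustrative sketch only (not the authors' released code): generate short,
# annotator-facing explanations for candidate sponsored posts with an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are helping human annotators decide whether an Instagram post is sponsored.\n"
    "Post caption:\n{caption}\n\n"
    "1. List the phrases in the caption that suggest (or argue against) sponsorship.\n"
    "2. Give a one-sentence explanation an annotator can read in a few seconds."
)

def explain_post(caption: str, model: str = "gpt-3.5-turbo") -> str:
    """Return key phrases plus a brief explanation for one caption (hypothetical helper)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(caption=caption)}],
        temperature=0,  # keep outputs stable so all annotators see similar explanations
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical example caption; real inputs would come from the annotation pool.
    print(explain_post("Loving my new skincare routine thanks to @brandname #ad #gifted"))
```

Inter-annotator agreement with and without such explanations could then be compared using a standard chance-corrected statistic such as Cohen's kappa (e.g. sklearn.metrics.cohen_kappa_score).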

Original language: English
Title of host publication: Explainable Artificial Intelligence - 1st World Conference, xAI 2023, Proceedings
Editors: Luca Longo
Publisher: Springer, Cham
Pages: 198-213
Number of pages: 16
ISBN (Electronic): 978-3-031-44067-0
ISBN (Print): 978-3-031-44066-3
Publication status: Published - 2023
Event: World Conference on Explainable Artificial Intelligence - Lisbon, Portugal
Duration: 26 Jul 2023 - 28 Jul 2023
https://xaiworldconference.com/2023/

Publication series

Series: Communications in Computer and Information Science
Volume: 1902
ISSN: 1865-0929

Conference

Conference: World Conference on Explainable Artificial Intelligence
Abbreviated title: xAI 2023
Country/Territory: Portugal
City: Lisbon
Period: 26/07/23 - 28/07/23
