Responsible Guidelines for Authorship Attribution Tasks in NLP

Vageesh Saxena*, Aurelia Tamò-Larrieux, Gijs van Dijck, Gerasimos Spanakis

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed


Abstract

Authorship Attribution (AA) approaches in Natural Language Processing (NLP) are important in various domains, including forensic analysis and cybercrime investigation. However, they pose Ethical, Legal, and Societal Implications/Aspects (ELSI/ELSA) challenges that remain underexplored. Inspired by foundational AI ethics guidelines and frameworks, this research introduces a comprehensive framework of responsible guidelines for AA tasks in NLP, tailored to different stakeholders and development phases. These guidelines are structured around four core principles: privacy and data protection, fairness and non-discrimination, transparency and explainability, and societal impact. To illustrate their practical application, we apply the guidelines to a recent AA study aimed at identifying and linking potential human trafficking vendors. We believe the proposed guidelines can help researchers and practitioners justify their decisions, support ethics committees in promoting responsible practices, and surface ethical concerns in NLP-based AA approaches. Our study aims to contribute to the responsible development and deployment of AA tools.
Original language: English
Article number: 16
Pages (from-to): 1-28
Number of pages: 28
Journal: Ethics and Information Technology
Volume: 27
Issue number: 2
DOIs
Publication status: Published - Jun 2025

Keywords

  • responsible AI
  • authorship attribution (AA)
  • natural language processing (NLP)
  • privacy & data protection
  • fairness & non-discrimination
  • transparency & explainability
  • societal impact
