Abstract
In recent years, a major challenge for news outlets has been warding off toxic content from the online spaces where they allow user contributions. The governance of these comments has primarily focused on identifying and banning unwanted contributions. This chapter highlights a more recent development, the promotion of constructive comments, and concludes that the task of keeping toxicity out is mainly assigned to AI-based tools. Such models are specifically trained to find and filter out unwanted contributions, but they are not suited to identifying and promoting constructive comments. That responsibility falls to human moderators, who must manually curate large numbers of user comments. The resulting collection of hand-picked contributions aligns with editorial guidelines, establishing a connection between editorial and user-generated content.
| Original language | English |
|---|---|
| Title of host publication | Governing the Digital Society: Platforms, Artificial Intelligence and Public Values |
| Publisher | Taylor and Francis |
| Pages | 63-81 |
| Number of pages | 19 |
| ISBN (Electronic) | 9781040780220 |
| ISBN (Print) | 9789048562718 |
| DOIs | |
| Publication status | Published - 2025 |
Keywords
- AI-Based Moderation
- Constructiveness
- Editorial Curation
- Online News
- Toxicity
- User Comments
Chapter title: Governing the “Third Half of the Internet”: The Dynamics of Human and AI-Assisted Content Moderation