Chatbot-fictionalism and empathetic AI: Should we worry about AI when AI worries about us?

  • Stacie Friend*
  • Kris Goffin
  • *Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed

Abstract

We focus on recent proposals to interpret interactions with empathetic AI systems as games of make-believe, analogous to our engagement with fictions. Chatbots are increasingly designed to detect and emulate emotional nuance, so that interacting with them resembles real human interaction. These applications have raised ethical concerns. Critics focus on the potentially harmful effects of people's tendency to anthropomorphize chatbots - to respond to them as if they were real persons - which may leave users vulnerable to emotional dependency, exploitation, loss of autonomy and delusion. How worried should we be about these developments? According to some philosophers, not very. Chatbot-fictionalists maintain that interacting with these chatbots is analogous to engaging with interactive fictions. We argue that although chatbot-fictionalism offers insights into human-AI interaction, it does not resolve the ethical concerns.
Original language: English
Number of pages: 24
Journal: Philosophical Psychology
DOIs
Publication status: E-pub ahead of print - 28 Jun 2025

Keywords

  • Human-AI interaction
  • fiction
  • mental fictionalism
  • make-believe
  • philosophy of AI
  • ethics of AI
