Abstract
We focus on recent proposals to interpret interactions with empathetic AI systems as games of make-believe, analogous to our engagement with fictions. Chatbots are increasingly designed to detect and emulate emotional nuance, making interactions with them resemble real human exchanges. These applications have raised ethical concerns. Critics focus on the potentially harmful effects of people's tendency to anthropomorphize chatbots - to respond to them as if they were real persons - which may leave users vulnerable to emotional dependency, exploitation, loss of autonomy, and delusion. How worried should we be about these developments? According to some philosophers, not very. Chatbot-fictionalists maintain that our interactions with these chatbots are analogous to our engagement with interactive fictions. We argue that although chatbot-fictionalism offers insights into human-AI interaction, it does not resolve the ethical concerns.
| Original language | English |
|---|---|
| Number of pages | 24 |
| Journal | Philosophical Psychology |
| DOIs | |
| Publication status | E-pub ahead of print - 28 Jun 2025 |
Keywords
- Human-AI interaction
- fiction
- mental fictionalism
- make-believe
- philosophy of AI
- ethics of AI