Abstract
As artificial intelligence (AI) becomes increasingly integrated into teams, understanding the factors that drive trust formation between human and AI teammates becomes crucial. Yet the emerging literature has overlooked the influence of third parties on human-AI teaming. Drawing on social cognitive theory and research on human-AI teams, we suggest that the extent to which a human teammate perceives an AI teammate as trustworthy, and engages in trust behaviors toward the AI, determines a focal employee's trust perceptions of, and behavior toward, this AI teammate. Additionally, we propose that these effects hinge on the employee's perceptions of trustworthiness and trust in the human teammate. We test these predictions across two studies: (1) an online experiment with individuals with work experience that examines perceptions of a disembodied AI's trustworthiness, and (2) an incentivized observational study that investigates trust behaviors toward an embodied AI. Both studies reveal that a human teammate's perceived trustworthiness of, and trust in, the AI teammate strongly predict the employee's trustworthiness perceptions of, and behavioral trust in, the AI teammate. Furthermore, this relationship vanishes when employees perceive their human teammates as less trustworthy. These results advance our understanding of third-party effects in human-AI trust formation, providing organizations with insights for managing social influences in human-AI teams.
Original language | English
---|---
Number of pages | 26
Journal | Journal of Organizational Behavior
Publication status | E-pub ahead of print - 1 Jan 2025
Keywords
- artificial intelligence
- human-AI teams
- social cognitive theory
- trust
- trustworthiness