Quality of Information Provided by Artificial Intelligence Chatbots Surrounding the Management of Vestibular Schwannomas: A Comparative Analysis Between ChatGPT-4 and Claude 2

Daniele Borsetto, Egidio Sia*, Patrick Axon, Neil Donnelly, James R. Tysome, Lukas Anschuetz, Daniele Bernardeschi, Vincenzo Capriotti, Per Caye-Thomasen, Niels Cramer West, Isaac D. Erbele, Sebastiano Franchella, Annalisa Gatto, Jeanette Hess-Erga, Henricus P. M. Kunst, John P. Marinelli, Richard Mannion, Benedict Panizza, Franco Trabalzini, Rupert Obholzer, Luigi Angelo Vaira, Jerry Polesel, Fabiola Giudici, Matthew L. Carlson, Giancarlo Tirelli, Paolo Boscolo-Rizzo

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Objective: To examine the quality of information provided by the artificial intelligence platforms ChatGPT-4 and Claude 2 surrounding the management of vestibular schwannomas.

Study design: Cross-sectional.

Setting: Skull base surgeons from different centers and countries were involved.

Intervention: Thirty-six questions regarding vestibular schwannoma management were tested. Artificial intelligence responses were subsequently evaluated by 19 lateral skull base surgeons using the Quality Assessment of Medical Artificial Intelligence (QAMAI) questionnaire, assessing "Accuracy," "Clarity," "Relevance," "Completeness," "Sources," and "Usefulness."

Main outcome measure: The scores of the answers from both chatbots were collected and analyzed using the Student t test. Analysis of responses grouped by stakeholders was performed with the McNemar test. The Stuart-Maxwell test was used to compare reading level between chatbots. The intraclass correlation coefficient was calculated.

Results: ChatGPT-4 demonstrated significantly improved quality over Claude 2 in 14 of 36 (38.9%) questions, whereas higher-quality scores for Claude 2 were observed in only 2 (5.6%) answers. The chatbots exhibited variation across the dimensions of "Accuracy," "Clarity," "Completeness," "Relevance," and "Usefulness," with ChatGPT-4 demonstrating statistically significant superior performance. However, no statistically significant difference was found in the assessment of "Sources." Additionally, ChatGPT-4 provided information at a significantly lower reading grade level.

Conclusions: Artificial intelligence platforms failed to consistently provide accurate information surrounding the management of vestibular schwannoma, although ChatGPT-4 achieved significantly higher scores in most analyzed parameters. These findings demonstrate the potential for significant misinformation for patients seeking information through these platforms.
Original language: English
Pages (from-to): 432-436
Number of pages: 5
Journal: Otology & Neurotology
Volume: 46
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2025

Keywords

  • AI
  • Acoustic neuroma
  • Artificial intelligence
  • ChatGPT
  • Chatbots
  • Claude
  • GPT
  • OUTCOMES
  • QAMAI
  • VS
  • Vestibular schwannomas
