Leveraging Narrative Feedback in Programmatic Assessment: The Potential of Automated Text Analysis to Support Coaching and Decision-Making in Programmatic Assessment

Balakrishnan R. Nair*, Joyce M. W. Moonen-van Loon*, Marion van Lierop, Marjan Govaerts

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Introduction: Current assessment approaches increasingly use narratives to support learning, coaching, and high-stakes decision-making. Interpretation of narratives, however, can be challenging and time-consuming, potentially resulting in suboptimal or inadequate use of assessment data. Supporting learners, coaches, and decision-makers in the use and interpretation of these narratives therefore seems essential.

Methods: We explored the utility of automated text analysis techniques to support interpretation of narrative assessment data, collected across 926 clinical assessments of 80 trainees in an International Medical Graduates' licensing program in Australia. We employed topic modelling and sentiment analysis techniques to automatically identify predominant feedback themes as well as the sentiment polarity of feedback messages. We furthermore examined the associations between feedback polarity, numerical performance scores, and overall judgments about task performance.

Results: Topic modelling yielded three distinct feedback themes: Medical Skills, Knowledge, and Communication & Professionalism. The volume of feedback varied across topics and clinical settings, but assessors used more words when providing feedback to trainees who did not meet competence standards. Although sentiment polarity and performance scores did not appear to correlate at the level of single assessments, findings showed a strong positive correlation between average performance scores and the average algorithmically assigned sentiment polarity.

Discussion: This study shows that automated text analysis techniques can pave the way for a more efficient, structured, and meaningful learning, coaching, and assessment experience for learners, coaches, and decision-makers alike. When used appropriately, these techniques may facilitate more meaningful and in-depth conversations about assessment data by supporting stakeholders in the interpretation of large amounts of feedback. Future research is vital to fully unlock the potential of automated text analysis and to support its meaningful integration into educational practice.
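
The abstract names topic modelling and sentiment analysis only in general terms. The sketch below is a minimal, illustrative outline of how such an analysis could look in Python, assuming scikit-learn's Latent Dirichlet Allocation for topic modelling and the VADER lexicon (vaderSentiment package) for polarity scoring; neither the tooling nor the sample comments are taken from the paper.

```python
# Minimal sketch, assuming scikit-learn (LDA) and VADER for sentiment;
# the paper's abstract does not specify its actual tooling.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical assessor feedback comments (placeholders, not study data).
comments = [
    "Took a thorough history and examined the patient systematically.",
    "Knowledge of drug interactions was limited; needs to revise pharmacology.",
    "Communicated clearly and respectfully with the patient and family.",
]

# --- Topic modelling: surface a small number of recurring feedback themes ---
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")

# --- Sentiment analysis: assign a polarity score to each feedback message ---
analyzer = SentimentIntensityAnalyzer()
for text in comments:
    polarity = analyzer.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{polarity:+.2f}  {text}")
```

Averaging per-comment polarity scores per trainee and correlating them with average numerical performance scores would mirror the aggregate-level association described in the Results, though the study's exact analytic pipeline is not detailed in the abstract.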
Original language: English
Pages (from-to): 671-683
Number of pages: 13
Journal: Advances in Medical Education and Practice
Volume: 15
DOIs
Publication status: Published - 2024

Keywords

  • programmatic assessment
  • narrative feedback
  • learning analytics
  • text mining
  • international medical graduates
  • performance
