Computerized Quality of Life Assessment: a Randomized Experiment to Determine the Impact of Individualized Feedback on Assessment Experience

Daan Geerards, Andrea Pusic, Maarten Hoogbergen, Rene van der Hulst, Chris Sidey-Gibbons*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Background: Quality of life (QoL) assessments, or patient-reported outcome measures (PROMs), are becoming increasingly important in health care and have been associated with improved decision making, higher satisfaction, and better outcomes of care. Some physicians and patients may find questionnaires too burdensome; however, this issue could be addressed by making use of computerized adaptive testing (CAT). In addition, making the questionnaire more interesting, for example by providing graphical and contextualized feedback, may further improve the experience of the users. However, little is known about how shorter assessments and feedback impact user experience.
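For readers unfamiliar with how CAT shortens assessments, the sketch below illustrates the core idea: after each response, the algorithm administers the unanswered item that is most informative at the respondent's current score estimate, so fewer items are needed to reach a precise score. This is a minimal illustration assuming a two-parameter logistic item response model; the item bank, parameters, and function names are hypothetical and do not reproduce the WHOQOL-CAT algorithm described in the article.

import numpy as np

def prob_endorse(theta, a, b):
    # Probability of endorsing an item at latent score theta (2PL model)
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information the item contributes at theta
    p = prob_endorse(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    # Choose the unadministered item that is most informative at the
    # current score estimate -- the mechanism that lets CAT stay short
    best_idx, best_info = None, -np.inf
    for idx, (a, b) in enumerate(item_bank):
        if idx in administered:
            continue
        info = item_information(theta_hat, a, b)
        if info > best_info:
            best_idx, best_info = idx, info
    return best_idx

# Example: a small hypothetical item bank of (discrimination, difficulty) pairs
item_bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2)]
next_item = select_next_item(theta_hat=0.3, item_bank=item_bank, administered={0})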

Objective: We conducted a controlled experiment to assess the impact of tailored multimodal feedback and CAT on user experience in QoL assessment using validated PROMs.

Methods: We recruited a representative sample from the general population in the United Kingdom using the Oxford Prolific academic Web panel. Participants completed either a CAT version of the World Health Organization Quality of Life assessment (WHOQOL-CAT) or the fixed-length WHOQOL-BREF, an abbreviated version of the WHOQOL-100. We randomly assigned participants to conditions in which they would receive no feedback, graphical feedback only, or graphical and adaptive text-based feedback. Participants rated the assessment in terms of perceived acceptability, engagement, clarity, and accuracy.

Results: We included 1386 participants in our analysis. Assessment experience was improved when graphical and tailored text-based feedback was provided along with PROMs (Δ=0.22, P

Conclusions: Using tailored text-based feedback to contextualize numeric scores maximized the acceptability of electronic QoL assessment. Improving user experience may increase response rates and reduce attrition in research and clinical use of PROMs. In this study, CAT administration was associated with a modest decrease in assessment length but did not improve user experience. Patient-perceived accuracy of feedback was equivalent when comparing CAT with fixed-length assessment. Fixed-length forms are already generally acceptable to respondents; however, CAT might have an advantage over longer questionnaires that would be considered burdensome. Further research is warranted to explore the relationship between assessment length, feedback, and response burden in diverse populations.

Original language: English
Article number: 12212
Number of pages: 10
Journal: Journal of Medical Internet Research
Volume: 21
Issue number: 7
DOIs
Publication status: Published - 11 Jul 2019

Keywords

  • PATIENT-REPORTED OUTCOMES
  • RELIABILITY
  • SYSTEM
  • WHOQOL
  • WHOQOL-BREF
  • WORLD-HEALTH-ORGANIZATION
  • computer-adaptive testing
  • feedback
  • outcome assessment
  • patient-reported outcome measures
  • psychometrics
  • quality of life
