Toward a better judgment of item relevance in progress testing

Xandra M. C. Janssen-Brandt*, Arno M. M. Muijtjens, Dominique M. A. Sluijsmans

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed



Background: Items must be relevant to ensure item quality and test validity. Because "item relevance" has not yet been operationalized, we developed a rubric to define it. This study explores the influence of this rubric on the assessment of item relevance and on inter-rater agreement.

Methods: Members of the item review committee (RC) and students, teachers, and alumni (STA) reassessed the relevance of 50 previously used progress test (PT) items and decided about their inclusion using a 5-criteria rubric. Data were analyzed at item level using paired samples t-tests, Intraclass Correlation Coefficients (ICC), and linear regression analysis, and at rater level in a generalizability analysis per group.

Results: The proportion of items that the RC judged relevant enough to be included decreased substantially, from 1.00 to 0.72 (p < 0.001). Agreement between the RC and STA was high, with an ICC > 0.7 across items. The relation between inclusion and relevance was strong (correlation = 0.89, p < 0.001) and did not differ between the RC and STA. To achieve acceptable inter-rater reliability for relevance and inclusion, six members must serve on the RC.

Conclusions: Use of the rubric results in a stricter evaluation of items' appropriateness for inclusion in the PT and facilitates agreement between the RC and other stakeholders. Hence, it may help increase the acceptability and validity of the PT.

Original language: English
Article number: 151
Number of pages: 8
Journal: BMC Medical Education
Publication status: Published - 5 Sept 2017


  • Item relevance
  • Midwifery education
  • Progress testing
  • Rubric
