The balancing act of assessment validity in interprofessional healthcare education

Hester Smeets*, Laurie Delnoij, Dominique M A Sluijsmans, Albine Moser, Jeroen van Merriënboer

*Corresponding author for this work

Research output: Contribution to conference › Abstract › Academic

Abstract

Introduction
The need for interprofessional (IP) education and IP collaboration in healthcare practice to
improve the quality of care is widely recognized. To determine students' level of IP competencies within education, there is a need for IP assessments with a well-considered design. The literature offers few starting points for this and does not yet make
clear how IP assessment can lead to valid statements about students’ level of IP
competence. Previous studies mainly look at this issue from an instrument perspective, for example, the formulation of questionnaires. The current study takes a process approach,
where IP assessment is developed through a continuous and iterative process, in line with
modern theories of assessment validity (Kane, 2013). The current study focuses on two
aspects of IP assessment in particular, namely authenticity and scoring. We investigated
to what extent a prototype of an IP assessment is a solid precursor to internship practice
(i.e., authenticity) and to what extent the assessment provides information to determine
the level of IP competence among second-year students (i.e., scoring). The following
research question was formulated:
- What is the evidence for, and what are the threats to, the validity of a prototype of an interprofessional assessment for bachelor healthcare students?

Methods
We conducted a qualitative design-based study within the context of a university of
applied sciences. Two previous studies resulted in building blocks for a prototype of an IP
assessment, which consists of an IP team meeting in which students from different
professions discuss patient cases together. They also write care plans together on which
they are assessed as a group, and a reflection report on which they are assessed
individually. The prototype was evaluated in three group interviews. Students, teachers
and IP assessment experts took part in these interviews in which they evaluated the
prototype on the extent to which it is in line with IP practice and the extent to which it
provides relevant information for determining the level of IP competence. Data were analyzed using a combination of deductive and inductive content analysis.
Results
Although both evidence for and threats to validity were mentioned, the threats refuting the
assessment’s validity prevailed. Evidence for the authenticity aspect was that the
assessment task, conducting a team meeting, is common in practice. However, its validity was questioned because the task could be better structured and performed more “ideally”. In addition, it turned out to be more difficult for some professions to connect
with the patient cases. Participants indicated that the assessment criteria were clear and
applicable to the current assessment design. However, they also indicated that it was not
yet clear how the current assessment design and criteria lead to a decision on IP
collaboration between students, given the individual nature of the assessment and focus
on the end product (care plan) rather than the process (team meeting).
Discussion And Conclusion
This study showed that validity evaluation consists of several balancing acts. The first
balancing act is between authenticity and complexity. Complex tasks, such as IP tasks, require a gradual build-up towards high-complexity IP practice; as a result, it may be best to introduce IP assessment in a structured way. The second balancing act is between team scoring and
individual scoring. In the IP context, collaboration is crucial, which implies that the group
process predominates. In the current context, however, students seem to use more
individual strategies to solve the assessment task due to the individual focus in the
assessment. In higher education in general, this appears to be an important issue and the
literature shows that individual scores are still (too) often relied upon (Boud & Bearman,
2022). The third balancing act is between authenticity and scoring, in which optimal authenticity might threaten scoring and vice versa. Achieving optimal authenticity and optimal scoring simultaneously seems impossible, so validity must be evaluated continuously to ensure authentic yet fair IP assessments for all participating professions.
References
Boud, D. & Bearman, M. (2022). The assessment challenge of social and collaborative
learning in higher education, Educational Philosophy and Theory.
https://doi.org/10.1080/00131857.2022.2114346
Kane, M. T. (2013). Validating the Interpretations and Uses of Test Scores. Journal of
Educational Measurement, 50(1), 1-73. https://doi.org/https://doi.org/10.1111/jedm.12000
Original language: English
Pages: 783-785
Number of pages: 3
Publication status: Published - 2023
Event: AMEE 2023 conference: Inclusive learning environments to transform the future - Glasgow, United Kingdom
Duration: 26 Aug 2023 - 30 Aug 2023
https://www.aofoundation.org/who-we-are/about-ao/news/2023/amee-glasgow-2023

Conference

Conference: AMEE 2023 conference
Country/Territory: United Kingdom
City: Glasgow
Period: 26/08/23 - 30/08/23
