Mock OSCE Quality Assurance

Quality assurance in our first ever Mock OSCE

Written by Dr Thomas Kropmans, CEO of Qpercom

Quality assurance, in terms of psychometric analysis, is crucial for remote training and assessment. On Friday, the 5th of May 2022, we launched our first Mock OSCE exam as part of our website: a brand-new venture that uses Qpercom’s advanced assessment solution to test the communication skills of medical students. By doing so, we have access to Qpercom’s psychometric analysis, similar to what Qpercom’s academic clients use for their quality assurance. In the first Mock OSCE, we set up seven stations using Iversen et al.’s codebook for rating clinical communication skills based on the Calgary-Cambridge Guide (BMC Med Educ 20, 140 (2020)).

Seven ‘acting’ students went through the official launch exam to test procedures, scenarios and actors. Cronbach’s Alpha, although considered outdated, is still widely used in medical education, and we used it to assess internal consistency. Six out of seven stations had an Alpha > 0.80, which is more than satisfactory considering the low number of participants. The actor/examiner in station 7 did not complete all of the assessment forms during this launch.
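For readers who want to reproduce this kind of check on their own marking data, Cronbach's Alpha can be computed directly from a participants-by-items score matrix. The sketch below is illustrative only; the sample matrix is made up and does not reflect the actual Mock OSCE scores.

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's Alpha for a participants x items score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (checklist points)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of participants' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 participants rated on 4 checklist items.
example = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(example), 2))
```

An Alpha above 0.80, as reported for six of the seven stations, is conventionally read as good internal consistency, though it is sensitive to the number of items and participants.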

Taking into account the six remaining stations and the overall standard deviation (SD) of the participants’ performance being 18.4%, the Standard Error of Measurement (SEM) would be 8.02%. Although the average result across all stations was quite high, the lower bound of the 68% Confidence Interval (CI) would be 77.7% – 8.02% = 69.68% and the upper bound 85.72%; for a 95% CI the bounds would be 61.9% and 93.5%, respectively. These 1-SEM and 2-SEM intervals will be used for individual student results to see whether they passed or failed the Mock OSCE standard setting of 70% for final-year healthcare students. The lower bound of the CI would apply to lower-level students signing up for the Mock OSCE in the future.
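The SEM calculation above can be reproduced with the classical formula SEM = SD × √(1 − reliability). The reliability value of 0.81 used below is an assumption on my part, chosen to be consistent with the reported "Alpha > 0.80" and the stated SEM of 8.02%; the article does not give the exact coefficient.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard Error of Measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Reported overall SD of 18.4% and an assumed reliability of 0.81
# (consistent with Alpha > 0.80) give an SEM of about 8.02%.
sem_value = sem(18.4, 0.81)

mean = 77.7                      # average result across stations
lower_68 = mean - sem_value      # 1 SEM below the mean, about 69.68%
upper_68 = mean + sem_value      # 1 SEM above the mean, about 85.72%
lower_95 = mean - 2 * sem_value  # 2 SEM below the mean
upper_95 = mean + 2 * sem_value  # 2 SEM above the mean
```

Because the 1-SEM lower bound (about 69.7%) sits just below the 70% pass standard, individual results near the cut score should be interpreted with the measurement error in mind.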

A critic might disagree with the above-mentioned analysis because the number of stations is low, the scores are high, and the number of participants is low too. All true; nevertheless, according to classical psychometric theory the performance of this test is promising, and the purpose is to be open and transparent to the participants: healthcare students in the first place, but also simulated patients/actors and examiners. Many academic institutions still rely on these classical psychometric analyses.

The examiners in the seven stations were quite consistent in their high marks, which varied between 73.2% and 89.3%, with the exception of the examiner in station 3, who consistently marked the ‘acting students’ around the average of 58.9%.

In summary, basic psychometric analyses are available for the evaluation of marking sheets, examiner performance and the OSCE setup. We expect a high volume of participants in the near future because of this unique and valuable training and assessment tool that allows remote participation of students, simulated patients and actors. Marks are too high, but the SEM and psychometrics are acceptable, and marks will come down in due course as a high volume of students participate. With a high volume of students, a higher number of stations will become commercially viable as well.
