Abstract
The number of short 'life support' and emergency care courses available is increasing. Variability in examiner assessments has been reported previously for more traditional types of examination, but there are few data on the reliability of the assessments used on these newer courses. This study evaluated the reliability and consistency of instructor marking for the Resuscitation Council UK Advanced Life Support Course. Twenty-five instructors from 15 centres throughout the UK were shown four staged, video-recorded defibrillation tests (one repeated) and three cardiac arrest simulation tests in order to assess inter-observer and intra-observer variability. These tests form part of the final assessment of competence on an Advanced Life Support course. Significant variability was demonstrated between instructors, with poor levels of agreement of 52-80% for defibrillation tests and 52-100% for cardiac arrest simulation tests. There was evidence of differences both in instructors' observation and recognition of errors and in their rating tendencies. Four instructors made a different pass/fail decision when shown defibrillation test 2 for a second time, giving only a moderate level of intra-observer agreement (kappa = 0.43). In conclusion, there is significant variability between instructors in the assessment of advanced life support skills, which may undermine the present assessment mechanisms for the Advanced Life Support course. Validation of the assessment tools for the rapidly growing number of life support courses is needed, with urgent steps to improve reliability where necessary.
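As a note on interpretation (the abstract does not specify the form of the statistic, but Cohen's kappa is the standard measure for agreement between two sets of categorical ratings, such as the repeated pass/fail decisions here): kappa corrects the observed proportion of agreement for the agreement expected by chance,

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance alone. On the commonly used Landis and Koch benchmarks, values in the range 0.41-0.60 are conventionally described as 'moderate' agreement, which is the sense in which kappa = 0.43 is reported above.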
| Original language | English |
| --- | --- |
| Pages (from-to) | 281-6 |
| Number of pages | 6 |
| Journal | Resuscitation |
| Volume | 50 |
| Issue number | 3 |
| Publication status | Published - 1 Sept 2001 |