Training Tips for Ships #16: Using Student Exam Results to Measure OUR Performance: Part 2
In last month’s Training Tips for Ships, we began the discussion of how we can look at our existing training data to measure our success as trainers and training organizations.
The data in question are the results of the multiple-choice exams we deliver to our trainees. We run these exams all the time, and over the years we gather a tremendous amount of data (answers). Yet we typically use that data only to determine whether our trainees have absorbed the knowledge we are trying to impart. There is so much more we can do. It is time to look at this data from a different perspective: evaluating OUR performance.
In practical terms, these analyses will be difficult if you deliver paper-based exams, but easy if you deliver exams online and can download the responses into a spreadsheet. It is even easier if your LMS automatically generates these metrics. So if you deliver paper-based exams, it may be time to consider adopting an LMS or another simple online exam delivery tool that allows you to download the responses.
In the previous article we looked at two simple but remarkably useful ways to slice and dice the data to reveal hidden indicators of learning, performance and even safety. The first was to report on the performance of questions grouped by the competency they cover. The second was to use the data to identify common misconceptions and misunderstandings held by our employees. In this edition of Training Tips for Ships we will look at additional ways of examining our data to help determine the health of our training program.
One of the simplest approaches to analysing multiple-choice exam responses is to determine, on average, how well each question is answered. This is easy to do, and the value is immediately apparent: it allows us to identify the questions that are answered poorly across the organization. Once we have this data, the next step is to determine whether the issue lies with the question itself or with the training delivery it relates to.
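As a rough sketch of this first metric, the following Python snippet computes a per-question success rate from a list of response records such as might be downloaded from an LMS. The record layout and the 50% flagging threshold are illustrative assumptions, not a specific LMS format.

```python
# Sketch: per-question success rates from exam responses.
# Assumes a simple list of (trainee_id, question_id, correct) records,
# e.g. downloaded from an LMS as a spreadsheet export; names and the
# 0.5 threshold are illustrative assumptions.
from collections import defaultdict

responses = [
    ("t1", "Q1", True), ("t1", "Q2", False),
    ("t2", "Q1", True), ("t2", "Q2", False),
    ("t3", "Q1", False), ("t3", "Q2", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for _, qid, ok in responses:
    totals[qid] += 1
    if ok:
        correct[qid] += 1

success_rate = {q: correct[q] / totals[q] for q in totals}

# Flag questions answered correctly by fewer than half of trainees
# for review by a trainer.
flagged = sorted(q for q, rate in success_rate.items() if rate < 0.5)
```

In the sample data, Q2 is answered correctly by no one and is flagged for review; whether the fault lies with the question or the training is then a human judgement.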
Another metric to look at is how long trainees spend, on average, answering each question. This can be highly revealing: a long dwell time may indicate that the question is difficult to understand (which requires attention) or simply that it demands deep consideration. Combining dwell time with the question's success rate can help distinguish between the two and guide any needed updates to the question or the learning materials.
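The combination described above can be sketched as follows. This is a minimal illustration, assuming per-response timing data is available from your exam tool; the field names are hypothetical.

```python
# Sketch: average dwell time per question, paired with success rate.
# Long dwell + low score suggests a confusing question or weak
# training; long dwell + high score suggests legitimate difficulty.
# Record shape (question_id, seconds_spent, answered_correctly) is an
# illustrative assumption.
from statistics import mean

records = [
    ("Q1", 25, True), ("Q1", 30, True), ("Q1", 35, False),
    ("Q2", 95, False), ("Q2", 110, False), ("Q2", 80, True),
]

by_q = {}
for qid, secs, ok in records:
    by_q.setdefault(qid, []).append((secs, ok))

report = {}
for qid, rows in by_q.items():
    avg_time = mean(secs for secs, _ in rows)
    rate = sum(1 for _, ok in rows if ok) / len(rows)
    report[qid] = (avg_time, rate)
```

In the sample data, Q2 draws a long average dwell time and a low success rate, which is the pattern most worth a trainer's attention.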
Finally, a related metric is whether the trainee's final answer was also their first 'click', or whether they selected one or more other choices before settling on it. Some LMSs and exam delivery systems record and report this information, which reveals how certain the trainee felt about their answer: the less certain they were, the more likely they were to equivocate. As with the previous metric, questions that show more equivocation may be difficult to understand, or the associated learning materials may be ineffective. Either way, it says something about our training and assessment and should be evaluated by a trainer.
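If your system exposes click logs, an equivocation rate per question can be derived along these lines. The click-sequence data shape is an illustrative assumption; check what your LMS actually records.

```python
# Sketch: equivocation rate per question from answer-change logs.
# Assumes each record lists the options a trainee clicked, in order,
# before submitting; this data shape is a hypothetical stand-in for
# whatever your LMS exports.
clicks = {
    ("t1", "Q1"): ["B"],            # first click was the final answer
    ("t2", "Q1"): ["A", "C", "B"],  # changed answer twice
    ("t1", "Q2"): ["D"],
    ("t2", "Q2"): ["D"],
}

total = {}
changed = {}
for (_, qid), sequence in clicks.items():
    total[qid] = total.get(qid, 0) + 1
    if len(sequence) > 1:  # more than one click means the trainee equivocated
        changed[qid] = changed.get(qid, 0) + 1

equivocation_rate = {q: changed.get(q, 0) / total[q] for q in total}
```

Questions with an unusually high equivocation rate can then be cross-checked against their success rate and dwell time before deciding whether to revise the question or the training.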
In practice, an excellent approach is to gather these metrics, along with those outlined in the previous edition of Training Tips for Ships, as part of a continuous improvement program for training. Metrics can be gathered and evaluated once or twice a year, and corrections put in place to address the issues found. The next time they are gathered, we can see how the metrics have improved (or degraded) and make another round of targeted improvements. This regular, ongoing approach helps keep the training team focused on identifying training issues, early indicators of safety problems, and optimal training outcomes.
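The round-over-round comparison at the heart of such a program can be as simple as diffing the metrics from consecutive gathering cycles. The two dictionaries below are invented sample figures standing in for, say, per-question success rates from two half-yearly rounds.

```python
# Sketch: period-over-period comparison of per-question success rates
# for a continuous improvement cycle. The figures are invented
# examples, not real exam data.
round_1 = {"Q1": 0.55, "Q2": 0.40, "Q3": 0.90}
round_2 = {"Q1": 0.70, "Q2": 0.35, "Q3": 0.92}

# Change per question, for questions present in both rounds.
deltas = {q: round(round_2[q] - round_1[q], 2)
          for q in round_1 if q in round_2}

improved = sorted(q for q, d in deltas.items() if d > 0)
degraded = sorted(q for q, d in deltas.items() if d < 0)
```

Questions in the `degraded` list would be the first candidates for the next round of corrective action.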
Thanks for reading, keep healthy, and sail safely!