Thursday, October 30, 2008

Understanding Kirkpatrick Model: Training (Learning) Effectiveness Measurement

How important is it to evaluate and improve on your work? As instructional designers, we surely evaluate and review a lot of work done by SMEs, graphic designers, content developers, and courseware engineers to add value to the trainings that we develop; however, how often do we evaluate the trainings themselves? When was the last time you evaluated a training that you developed? Lost, are you?

Evaluating training/learning is the last phase (E - evaluate) of the instructional design model called ADDIE. Often considered a costly and time-consuming process, evaluation is frequently neglected. Let’s look at some statistics to get real. Only 3% of the trainings delivered in U.S. companies are evaluated. This small figure speaks even louder considering that U.S. companies spend more than 300 billion dollars on training every year. It is amazing that these trainings are justified on the claim that they will generate or save business revenue, and yet that very claim is quietly let loose by scrapping the evaluation phase of the instructional design model.

Trainings always appear under the cost column in an organization’s budget spreadsheet. Unless and until evaluation is done, this cost cannot be justified. Increasingly, senior management is asking for training costs to be justified, and such justifications cannot be produced on the spur of the moment.

To justify the business cost of a training (which includes the employees’ idle time while in training as well as the time and effort spent on training development, delivery, and maintenance), it is imperative to evaluate how the training impacts business results. In addition to supporting the business need for trainings, evaluation helps identify which trainings are effective from the organization’s perspective and which features of a training can be improved. It helps separate relevant trainings from redundant ones too.

How to Evaluate a Training?

The Kirkpatrick model, devised by Donald Kirkpatrick in 1959, provides a framework for evaluating trainings. The model outlines four levels at which training evaluation can be carried out. Each level builds on the previous one --- a given level loses its meaning if carried out without its predecessor. The time, cost, and effort needed to carry out the evaluation increase with each successive level. Let’s quickly look at these levels.

Level 1: Reaction: Reaction of the Learners: At this level, you assess the reaction of the learners soon after the training gets over. Get to know how the learners feel about the training. Learners’ reactions can be captured in “Happy Sheets” or “Smile Sheets,” which might contain questions such as “How do you feel about the training?”, “Are you satisfied with the training?”, “Was the delivery mode effective?”, “Are you satisfied with the way the trainer delivered the training?”, “Were the facilities suitable for this training?”, “Do you think it was an effective training?”, and so on. You can give learners more room for reflecting on the training by asking broader questions about what they think could have been better and whether there were any gaps in the module they attended. This is the most basic level at which any training can be evaluated. Although, in my opinion, this level does not help in gauging the effectiveness of a training, it surely does tell whether the training was so ill-designed and/or ill-delivered that it should be taken off the training map.
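
If the responses are collected on a numeric scale, even a few lines of code can turn a stack of Happy Sheets into a per-question summary. Below is a minimal Python sketch, assuming hypothetical 1-5 Likert ratings; the question keys and scores are made up for illustration:

```python
# Minimal sketch: summarizing Level 1 "Happy Sheet" responses.
# Assumes each learner rates aspects of the training on a 1-5 Likert
# scale; the question keys and ratings below are hypothetical.

happy_sheets = [
    {"content": 4, "delivery": 5, "facilities": 3},
    {"content": 5, "delivery": 4, "facilities": 4},
    {"content": 2, "delivery": 3, "facilities": 4},
]

def summarize_reactions(sheets):
    """Return the average rating per question across all learners."""
    ratings = {}
    for sheet in sheets:
        for question, rating in sheet.items():
            ratings.setdefault(question, []).append(rating)
    return {q: round(sum(r) / len(r), 2) for q, r in ratings.items()}

print(summarize_reactions(happy_sheets))
# {'content': 3.67, 'delivery': 4.0, 'facilities': 3.67}
```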

Level 2: Learning: Change in Learning: This level aims at evaluating whether there has been an increase in the skills, attitudes, and knowledge of the learner after the training. You can make such an evaluation by conducting pre- and post-training tests. This will, however, not be a measure of how the learner will perform on the job (OTJ). It only gives an indication of whether the learner has learned something (a skill, an attitude, and/or knowledge) to remember after the training. Because this level builds on Level 1, it requires more effort and cost than a Level 1 evaluation.
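
For concreteness, here is a minimal Python sketch of a pre-/post-test comparison. It uses the normalized gain, (post - pre) / (max - pre), one common way to express improvement relative to how much room the learner had to improve; all scores below are made up:

```python
# Minimal sketch: quantifying Level 2 learning with pre-/post-test scores.
# Normalized gain = (post - pre) / (max - pre): the fraction of the
# available headroom the learner actually gained. Scores are made up.

MAX_SCORE = 100

def normalized_gain(pre, post, max_score=MAX_SCORE):
    if pre >= max_score:           # already at ceiling; gain is undefined
        return None
    return (post - pre) / (max_score - pre)

learners = {"A": (40, 70), "B": (60, 90), "C": (80, 85)}

for name, (pre, post) in learners.items():
    gain = normalized_gain(pre, post)
    print(f"Learner {name}: pre={pre}, post={post}, gain={gain:.2f}")

# Learner A: pre=40, post=70, gain=0.50
# Learner B: pre=60, post=90, gain=0.75
# Learner C: pre=80, post=85, gain=0.25
```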

Almost all training programs, whether technical or soft skills, can be easily evaluated at Levels 1 and 2, vis-à-vis Levels 3 and 4, which are better suited for technical skills. Also, when compared to Levels 3 and 4, training effectiveness measurements at Levels 1 and 2 require less effort, cost, and time, on the part of both the learner and the organization.

Level 3: Behavior: Behavior that the Learner Exhibits OTJ: At this level, you assess whether the learner has taken the learning a step beyond remembering (Level 2), i.e., whether the learner has taken the learning to the job. Such an evaluation can be made, 3 to 6 months after delivering the training, by using Behavioral Sheets in which inputs are taken from the customers, peers, and supervisors of the learner. Behavioral Sheets can contain questions such as “Has the learner been able to demonstrate XYZ skill in his or her work/interactions? Is the learning reflected consistently in the learner’s behavior? Is there any particular behavior change that the learner has exhibited in the past 3-6 months? What is it? Is the learner able to help others with the skill set that he or she has recently acquired through training? Is the learner aware of the behavioral change he or she is demonstrating?” and so on. In case you are unable to get an adequate response through the Behavioral Sheets or other such templates, try gathering this data from interviews and/or 360-degree feedback.
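
If such multi-rater feedback is scored numerically, the roll-up can stay simple. Here is a minimal Python sketch, assuming a hypothetical 1-5 “observed on the job” scale; averaging within each rater group first keeps a large group (say, many peers) from drowning out a small one:

```python
# Minimal sketch: rolling up Level 3 behavioral feedback from multiple
# rater groups, as in a 360-degree review. The 1-5 "observed on the
# job" scale and the ratings below are hypothetical.

from statistics import mean

feedback = {
    "peers":       [4, 3, 4],
    "supervisors": [3, 3],
    "customers":   [5, 4],
}

# Average within each rater group first, then across groups, so that a
# large group does not outweigh a small one.
group_means = {group: round(mean(scores), 2)
               for group, scores in feedback.items()}
overall = mean(group_means.values())

print(group_means)   # {'peers': 3.67, 'supervisors': 3.0, 'customers': 4.5}
print(f"Overall behavior score: {overall:.2f}")   # Overall behavior score: 3.72
```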

One problem that I see with Level 3 of Kirkpatrick’s model is that it lets you measure the effectiveness of a technical training more successfully than that of a soft skills training; however, it is equally important to understand the effect of behavioral change after a technical training as after a soft skills training. So, how do we evaluate the effectiveness of a soft skills training w.r.t. behavioral change --- the only change that we look out for after a soft skills training? Thoughts invited.

Level 4: Results: Business Results: The fourth level of evaluation assesses how far the training has been effective in contributing to an increase in the business revenue of the organization. It involves assessing how the training has impacted organizational performance measures --- turnover, retention, wastage percentages, and so on. It might require the intervention of business intelligence systems, wherein learner performance measurement systems are integrated with business (management) data, and the data is computed for the impact that the learner’s improved performance has on the business bottom line. It is not easy to make evaluations at this level because business results depend heavily on external factors, so the impact of the training on business results might get clouded. However tough evaluating results at this level may be, this is the only level that actually matters to higher management and that is capable of justifying the training costs.
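
One concrete way to express a Level 4 result is a training ROI figure (often treated as an extension of Kirkpatrick’s four levels, popularized by Jack Phillips). Here is a minimal Python sketch; all cost and benefit figures are made up, and isolating the benefit attributable to the training is the hard part, as discussed above:

```python
# Minimal sketch: a simple training ROI calculation, commonly used as an
# extension of Level 4. All figures below are made up for illustration.

def training_roi(monetary_benefits, total_costs):
    """ROI (%) = (net benefits / costs) * 100."""
    return (monetary_benefits - total_costs) / total_costs * 100

# Total cost: development + delivery + learners' idle time in training.
costs = 20_000 + 5_000 + 15_000        # = 40,000

# Benefit attributed to the training after filtering out external
# factors, e.g. estimated revenue saved through reduced rework.
benefits = 70_000

print(f"ROI: {training_roi(benefits, costs):.0f}%")   # ROI: 75%
```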

Please note that Levels 3 and 4 require inputs from the line managers --- the learner’s inputs alone might not be adequate. These are the levels that actually determine the effectiveness of the training and whether or not it benefits the business.

My Take

If I were to come up with a quick action plan to determine training/learning effectiveness, I would consider the following for sure:

  • See how learners fare: How learners fare in the in-line exercises (of big modules) and assessments (of all modules) is a fair indication of how effective the training has been for the learner.

  • Ask for their experience: The learners can be asked to share their learning experience in detail. The learner becomes the customer here and can judge our product (the learning) better than we can. Many a time, looking at the learning through a second pair of eyes helps identify the training gaps. Once you are aware of the gaps (this part evaluates the effectiveness of the training), you can fill them to deliver a more well-rounded training (this part is the impact of evaluating the training).

  • Monitor their work at regular intervals: Ask the learners as well as their peers, customers, and managers for feedback on the skill imparted by the training at regular intervals, say 0.5, 1, 2, 3, and 6 months. Ask them what is lacking in the skill learned in the training, how comfortable the learners are w.r.t. demonstrating that skill under different work conditions, and so on. This will help you evaluate how effective the training has been from the learner’s perspective.

I am still a bit confused as to how to evaluate a training from the business results perspective. To me, it appears that filtering out the external factors is too big a challenge and requires careful administration of the impact of the learner’s performance improvement on the business bottom line. Can an LMS do that? I am very doubtful. Suggestions and thoughts invited!

I personally feel that, from the organization’s point of view, any training that does not contribute positively to business results is futile; therefore, the organization should evaluate all trainings, at one level or another, to determine whether they are worth delivering to the learners.