Teach a Man to Fish – Evaluating Training Effectiveness


By Daragh O Brien
February 19, 2013
21 min read

Introduction

Both in my past life working in a large corporate and in my on-going advisory and coaching engagements with clients I’m met with a few common themes:

  • Training is seen as an expense, a luxury (it’s often the first thing cut).
  • Training needs are based on the HR or Training Department’s view of what the need is.
  • Training is often done in response to a crisis, not a planned change or transition.
  • Organisations send their key staff out for a few days of off-site training, during which the same staff miss half the training as they answer calls, respond to emails, and generally don’t get a chance to switch off from the day job to focus on knowledge transfer.
  • People go back to the office and try to implement their new skills only to find that “that’s not the way we do things here”, leading to atrophy of knowledge and loss of skill.

This sad state of affairs leads to ineffective training. All too often I’ve experienced (both as an educator and as a learner) the sense of lightbulbs not going on in the class. People go back thinking they have learned little or nothing, or with a fragmented understanding of what they have learned.

In short: the training has not been effective. Instead of being taught to fish, people have been left holding a rod that can only catch old boots and tin cans.

And thus we find the self-fulfilling prophecy of ineffective training. Without a focus on doing it effectively, the training has limited effectiveness, is not applied correctly (and therefore can’t have the desired/required effect), and is branded as an expensive luxury. So staff are left relying on “best efforts”, learning from their peers and their own understanding of processes and principles.

What can be done?

Effective training involves the imparting of knowledge. And Knowledge is similar to Data and Information in that it is an intangible asset. It can appear difficult to measure the value of a set of concepts or the formal development of a proficiency in skills. In this context we find practices of “winging it”, “instruction by observation”, or “learning by trial and error” are often seen as valid approaches to training and the development of skill. And in some circumstances they may well be. However, as W. Edwards Deming famously pointed out, one of the cornerstones of a System of Profound Knowledge is a Theory of Knowledge – structure and principles around which experiential learning can be hung.

Which brings us to the question of how we can measure the effectiveness of training and learning. Luckily there has already been extensive work done on this topic. And the effectiveness of learning is (relatively) easy to measure.

A Five Level Model for Learning Evaluation

Donald Kirkpatrick first proposed a four-level model of learning evaluation in the late 1950s. It was later extended by an additional level to reflect the need to tie learning outcomes to a clear bottom-line impact.

  • Level 1 of this model is the basic “customer satisfaction survey” type of evaluation that should be conducted for all training programmes. The logic is that effective learning will not have taken place if the learners were not satisfied with the trainer/training or environment. It is a quick and easy metric to measure and the target should be 100% satisfaction… anything else would indicate that there is a problem with the training, the training methods, or the trainers.
  • Level 2 of the model is the standard “proficiency test” that should be part of any training programme. It measures the effectiveness of how well the desired knowledge has been transferred. The assumption being that the training cannot be effective if the desired learning outcomes have not been met.
  • Level 3 looks at the behavioural changes that have come about as a result of the training. Bluntly: this level of evaluation looks at how learners are putting the principles into practice and applying the information that was imparted in the training. If no behaviour changes are taking place, you might assume that the training was ineffective. If some changes are taking place but not as many as expected/required, the training was somewhat effective. That is why you can’t skip the first two levels! Of course, there can be many root causes for a failure to apply the learning (management systems that work against what was trained, peer pressure, boss pressure, lack of motivation to apply the learning).
  • Level 4 looks at the results of the training, the KPIs that should be affected by changes in behaviour. This is why it is important to have a baseline of KPI performance before engaging in training. What is it that people should be able to do, and what do they need to know to be able to do that?
  • Level 5 ties all of this to the bottom line. What is the Return on Investment of the training? Has the improved performance delivered gains in productivity or an improved ability to identify and mitigate risks? What is the cost saving? Was the net bottom-line impact positive or negative?
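The five levels above lend themselves to a simple evaluation record per course. As a minimal sketch, the class and all the names and figures below are hypothetical illustrations (not from any real training programme); Level 4 is the KPI change against the pre-training baseline, and Level 5 is the standard ROI calculation of monetised benefit against cost:

```python
from dataclasses import dataclass

@dataclass
class TrainingEvaluation:
    satisfaction_pct: float        # Level 1: post-course satisfaction survey
    test_pass_pct: float           # Level 2: proficiency test result
    behaviour_adoption_pct: float  # Level 3: observed application on the job
    kpi_baseline: float            # Level 4: KPI measured before training
    kpi_after: float               # Level 4: KPI measured after training
    cost: float                    # Level 5 input: cost of the training
    benefit: float                 # Level 5 input: monetised benefit

    def kpi_improvement_pct(self) -> float:
        """Level 4: relative KPI change versus the pre-training baseline."""
        return 100.0 * (self.kpi_after - self.kpi_baseline) / self.kpi_baseline

    def roi_pct(self) -> float:
        """Level 5: Return on Investment of the training."""
        return 100.0 * (self.benefit - self.cost) / self.cost

# Hypothetical course: KPI moved from 72 to 81, costing 10,000 for a
# monetised benefit of 14,500.
course = TrainingEvaluation(
    satisfaction_pct=96.0, test_pass_pct=88.0, behaviour_adoption_pct=60.0,
    kpi_baseline=72.0, kpi_after=81.0, cost=10_000.0, benefit=14_500.0,
)
print(f"KPI improvement: {course.kpi_improvement_pct():.1f}%")  # 12.5%
print(f"ROI: {course.roi_pct():.1f}%")  # 45.0%
```

Recording the lower levels alongside the ROI figure matters: a positive ROI with low Level 3 adoption suggests the gains came from elsewhere, while a negative ROI with high satisfaction and test scores points the root-cause analysis away from the trainer.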

Of course, you don’t measure EVERY training course that is being run at every level. Sensible prioritisation needs to be applied. But the very fact that you have started measuring will change how you think about your training and knowledge transfer in your organisation. By capturing data about the effectiveness of training at different levels you can identify opportunities for improvement (Plan-Do-Study-Act) and begin to build evidence for the benefits and impacts of good quality training delivering good quality information.

However, it is important to note that most organisations never assess effectiveness above Level 2. So most organisations only know that their staff remembered the training materials, or did slightly better than random chance on multiple-choice questions once they had eliminated the two obviously wrong answers (and these tests are invariably run within a few minutes or hours of the training being delivered). Very few organisations look at Level 3 or higher, or they try to measure at Level 4 without being in a position to answer questions at Level 3.

  • If KPI targets are not being met after training, what is the root cause? (a Level 4 question)
  • If learners are not putting the learning to work in the day job, what is the root cause? (a Level 3 question)

All too often the blame is placed on the trainer, the training, or the materials, without any objective metrics to support the root cause analysis.

The bottom line

The bottom line here is that it is relatively straightforward to measure the effectiveness of training at a number of levels. Training is effective when it influences behavioural change. But, as every student of physics knows, the act of measuring can influence the thing being measured. So by bringing in objective and structured measures of the effectiveness of training, it becomes possible to apply Quality Principles to the management and continuous improvement of How and What is taught, which in turn will improve the Who.

By tying training effectiveness outcomes to KPI changes and to a bottom-line benefit, it also becomes clear what the “Why” of the training is, and it allows you to clearly illustrate the benefits of a structured training plan built around a “theory of knowledge”, as opposed to cutting short-term investment and letting people learn by trial and error.

Copyright Credit:

The image of the soft drink can in the river used above is © Copyright Kenneth Allen and licensed for reuse under this Creative Commons Licence
