Reviving Training's DOA ROI

ROI is a funny thing. Why is it that no one directs the IT department to "get us the return on investment on our e-mail system," yet corporations think nothing of forcing the training department to project ROI on every learning program and technology deployed? And a down economy only intensifies the scrutiny.

It's to be expected that the CEO of a large company that's spending millions on training and developing employees might wonder what he or she is getting in return, especially when the pressure is on to cut expenses to meet analysts' quarterly estimates. However, as with most "soft" issues, it can be difficult to show tangible results from such intangible processes. Although employee training has been an integral part of the corporate environment a heck of a lot longer than e-mail, it hasn't reached "we don't need to measure it; we know it's invaluable" status in most organizations.

The perception among many senior leaders that training isn't "mission-critical" is often the reason why the training budget is the first on the chopping block when companies look to reduce expenses. But as previous ASTD/i4cp studies have shown, cutting the training budget is a practice most associated with low-performing organizations; high performers generally keep it intact. If that's the case, why is training ROI all too often dead-on-arrival in companies today?

While most training professionals recognize the importance of evaluation, few have mastered the art, according to a new study by ASTD and i4cp. The study looked at whether learning evaluation actually pays off for organizations and, if so, which tactics are most effective. Only about one-quarter of survey respondents agreed that their organization got a solid "bang for the buck" from its training evaluation efforts. Responses were only slightly more positive when respondents were asked whether their learning evaluation techniques helped meet learning and business goals.

But, presumably pressed by senior management, training departments aren't giving up. The study finds that most companies continue to spend time and money on some form of evaluation, although it's clear that many are uncertain both how to go about it and what to do with the results.

How are organizations evaluating learning?

The famous (some might say infamous) five-level Kirkpatrick/Phillips model of learning evaluation is the most common method used. Ninety-two percent of respondents reported that they use at least Level 1, which simply measures a learner's satisfaction with a program. (The five levels run from reaction at Level 1 through learning, on-the-job application, and business impact, up to ROI at Level 5.) As the levels of evaluation get more strategic and bottom-line focused, however, use of the model drops off dramatically at each subsequent level. Put simply, trainers often struggle to correlate training initiatives with business results.
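For those unfamiliar with the top of the model, the Level 5 calculation itself is simple arithmetic: ROI, expressed as a percentage, is net program benefits divided by fully loaded program costs. Here's a minimal sketch, using hypothetical dollar figures; in practice, the hard part is isolating and monetizing the benefits, not the formula.

```python
# A minimal sketch of the Phillips Level 5 ROI calculation:
# ROI (%) = (program benefits - program costs) / program costs * 100.
# The dollar figures below are hypothetical, for illustration only.

def training_roi(benefits: float, costs: float) -> float:
    """Return training ROI as a percentage: net benefits over costs."""
    return (benefits - costs) / costs * 100

# A program with fully loaded costs of $100,000 that produces
# $250,000 in measured, monetized benefits yields a 150% ROI.
print(training_roi(benefits=250_000, costs=100_000))  # 150.0
```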

Many industry experts would recommend that organizations evaluate at all five levels while trimming the number of programs that are evaluated as they move up the levels. Instead, it appears that most organizations are evaluating at the first couple of levels exclusively, then dropping off completely before getting to the most value-affirming metrics.

The survey results also identified the difficulty of isolating training's effect on bottom-line results as an impediment to effective learning measurement. But the problems are often more basic than that. Many respondents said that metrics such as business results and ROI are simply too complex and time-consuming to calculate. Let's face it: statistics probably wasn't the favorite class of most people in the training profession.

For perspective, I asked my longtime friend Jack Phillips, the industry's unquestioned guru on training ROI, to share his thoughts on why this happens in so many organizations.

"Financial analysis and evaluation of effectiveness are not core competencies of most in the training profession," Phillips told me. He should know; there's little question he's talked to more trainers about ROI than anyone on earth.

"However, more training professionals these days have come around to the reality that tying learning to business metrics should be a core component of a program from the early design stages," Phillips said. "I think the work being done today in this field is better than ever before, but clearly there's still a long way to go for effective evaluation to be commonplace."

Certainly the money has been there to support this. Depending on their size, companies spend anywhere from a few thousand dollars to tens of millions of dollars annually on training their workforces, with about 5.5% of that going to evaluation (for a company with a $10 million training budget, that works out to roughly $550,000 a year). That's an astounding amount of money to spend on something most would agree isn't very effective. Keep in mind that much of it goes to the very basic, Level 1 "How was the program?" kind of question and analysis.

But by focusing that investment on business impact, companies can have greater success. Our study found that in most cases, using more of the Kirkpatrick/Phillips levels is associated with more success in the area of learning metrics. The exception to this rule is at Level 1 (i.e., where most of the money and focus is now).

More importantly, study results also show that firms using more of the Kirkpatrick/Phillips levels have better market performance. In short, if you are actively measuring how learning affects the bottom line, chances are your culture is one that encourages measuring the impact and effectiveness of initiatives across the board, and the effect that quality training can have on an organization is often quite significant.

The Value of Evaluation: Making Training Evaluations More Effective is available to i4cp members through our website. The report is also available to non-i4cp members through the ASTD website.

i4cp's 4-Part Recommendation:

  1. Trainers should solicit feedback from line managers to gain a full understanding of what will make various business units more successful, and design programs that close the knowledge gaps in each department.
  2. When designing training programs, build in strategy-relevant learning measurements from the beginning. Measures bolted on after the fact do a poor job of tracking learning's impact and limit the levels of evaluation that can follow training.
  3. Stop spending money on Kirkpatrick/Phillips Level 1 evaluations. Focus evaluation efforts on Levels 3 and above.
  4. Use the more strategic evaluation measurements to promote training effectiveness. Senior management will never understand the impact training is having on the business unless they hear about it often and see it presented in the same financial metrics the company uses to measure its overall success.
Kevin Oakes
Kevin is the CEO and co-founder of i4cp. He is a world-recognized thought leader on the topics of corporate culture, the future of work, and learning, and is the author of the bestselling book Culture Renovation: 18 Leadership Actions to Build an Unshakeable Company.