An organization's capacity to learn matters more than ever. Startups are popping up like mushrooms and the technological possibilities keep expanding. To keep your head above water as an organization and to keep innovating, the development of your employees is crucial. Standing still means falling behind. For this reason, organizations invest a lot of time and money in developing and implementing learning interventions: e-learning modules, serious games, platforms, support in the workplace, workshops, or a blend of these.
But what effect does such an intervention have on an employee's development? Do we see any of it in the workplace? And does it ultimately contribute to the organization's objectives? Measuring the effect of learning is anything but self-evident. That is odd, given how important learning is and how much energy goes into it. As far as I am concerned, that has to change!
In this blog I will walk you through what measuring the return on learning interventions involves, and I will share 5 practical tips.
What is learning return? The term says it all: the yield of learning. Ultimately, it is about the effect or result of one or more learning interventions.
There are different levels at which you can evaluate and measure the results of learning interventions. The Kirkpatrick evaluation model, with its 4 levels, is probably the best-known method. Phillips added a 5th level, ‘Return on Investment’ (ROI), which brings in the financial picture: what are the benefits of the intervention compared to the money and time invested in it?
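The ROI level itself comes down to simple arithmetic: net benefits relative to costs, expressed as a percentage. A minimal sketch, with invented figures purely for illustration:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """ROI as a percentage: net benefits divided by total costs."""
    return (benefits - costs) / costs * 100

# Hypothetical example: an intervention costs 40,000 and yields 70,000 in benefits.
print(roi_percent(70_000, 40_000))  # prints 75.0
```

A positive percentage means the intervention earned back more than it cost; the hard part, of course, is putting a credible number on the benefits in the first place.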
These levels are shown in the pyramid below:
5 practical tips
There are all kinds of arguments for why it is difficult to determine the return, and there is certainly some truth in them. Determining the return is complicated, mainly because it is hard to attribute effects to a specific intervention. Still, there are a number of things you can do relatively easily when implementing an intervention to get a better picture of its effects.
Tip 1: Start with a clear (performance) goal
Determining the return starts with establishing a goal. Ask yourself: what needs to change after employees have gone through the intervention? By formulating this objective SMART (specific, measurable, achievable, relevant, time-bound), you make it possible to measure later whether you have achieved it. It also lets you estimate in advance whether you have a positive business case: is the result worth the money and time you put in?
Tip 2: Measure as soon as possible
If the return is measured at all, it is often only at the end of the process. That is a shame, because by then you can no longer adjust your solution. So measure as early as possible, for example by putting a first prototype in front of the target group. Determine in advance what effect this prototype should have; it is the first step towards the larger goal. Then observe, ask or measure whether that effect occurs, for example by gauging the target group's reactions: do they become enthusiastic, do they understand what to do, and is your idea practically feasible?
Tip 3: Collect relevant data
With online learning interventions, usage data also gives a picture of the ultimate effect. Interesting data includes, for example: how active are employees on a social learning platform? What choices do employees make in an e-learning module or game? How often is a resource consulted? On the other hand, do not collect data for the sake of collecting it. Here too, determine in advance what you want to learn from the data and what you will do with the outcome.
Tip 4: Do a before and after measurement
Determine the initial situation as accurately as possible. In some cases this is easy because the figures are already being measured: customer satisfaction, sales figures or the number of security incidents, for example. When it comes to awareness or attitude, it is more difficult, but measurement is still possible: ask people, for instance, to rate statements on a scale from 1 to 10.
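In the end, a before and after measurement boils down to comparing the same metric at two moments. A minimal sketch, with invented ratings on a 1-to-10 statement scale:

```python
from statistics import mean

# Hypothetical ratings of a statement such as "I know what to do with a
# suspicious e-mail", collected before and after the intervention (scale 1-10).
before = [4, 5, 3, 6, 4, 5]
after = [7, 8, 6, 7, 8, 7]

shift = mean(after) - mean(before)
print(f"Average shift: {shift:+.1f} points")  # prints: Average shift: +2.7 points
```

Even a simple comparison like this makes the conversation about effect concrete, provided the same people answer the same questions both times.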
Tip 5: Work with a pilot group
By first rolling out an intervention to a pilot group, you give yourself comparison material. You see what the solution does for this specific group, and you may be able to link data to it that applies only to this group. For example: does the pilot group in department X make fewer mistakes? Or ask employees and/or managers about it in an interview: did they notice a difference after the intervention was rolled out?
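Comparing the pilot group with the rest is the same kind of arithmetic: put the two groups' figures side by side, corrected for group size. A minimal sketch with invented error counts:

```python
# Hypothetical monthly error counts in department X, where only the
# pilot group received the intervention.
pilot = {"employees": 12, "errors": 9}
control = {"employees": 15, "errors": 24}

pilot_rate = pilot["errors"] / pilot["employees"]
control_rate = control["errors"] / control["employees"]
print(f"Errors per employee: pilot {pilot_rate:.2f} vs. control {control_rate:.2f}")
# prints: Errors per employee: pilot 0.75 vs. control 1.60
```

A lower rate in the pilot group is a signal, not proof; combine it with the interviews mentioned above to check whether the difference is plausibly due to the intervention.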
What experience do you have with measuring learning effect? Are there any tips you would like to add? I am curious about your experience and ideas!