
A Guide to Evaluating Course Performance with the Kirkpatrick Model

Was it worth it? 

Finally, after months of effort, your course is complete and ready to be published. Once the high-fives have finished, you may wonder what comes next. Evaluating the effectiveness of a course is an essential part of its life cycle, and there are many questions a new evaluator may ask. How do you evaluate the effectiveness of the course? Does it matter how the course is received by instructors and learners? Does the course provide the experience desired by both instructors and learners? Does the return provided by the course justify the cost and effort of creating and delivering it?

This post discusses a model used to evaluate the effectiveness of course content, along with quantitative and qualitative metrics for measuring the various aspects of a course. The evaluation can then inform content creators about ways to iterate on the course and make appropriate modifications to keep improving the course experience. It is essential to break a course down into its core components and evaluate each component individually. While there are many evaluation models, Kirkpatrick's Model is widely used and considered one of the best approaches to course evaluation.

The Kirkpatrick Model

There are several learning evaluation models one can consider when evaluating course content. Popular options include Kirkpatrick's Model, Kaufman's Model, Brinkerhoff's Model, Anderson's Model, and the Phillips Model. The model we will discuss here is Kirkpatrick's Model.

Figure: Kirkpatrick's model of evaluation

The Kirkpatrick Model is built upon five components: Reaction, Learning, Behavior (sometimes called "Behavior Change"), Results (sometimes called "Business Impact"), and Return on Investment. Each component, or level, of the model evaluates the course through a different lens, and each can provide valuable feedback to content creators. We will start at the base of the model: Reaction.

Reaction

The first component in Kirkpatrick's Model is Reaction: how learners and instructors responded to the training or course. An example is surveying both learners and instructors to discover which parts of the course went well and which went poorly. Are the learners and instructors satisfied? Is there specific feedback that learners or instructors can provide? Would the learners and instructors recommend this course to someone else? This last metric is often called a Net Promoter Score. Knowing how the course was received tells content creators whether they met the expectations of the audience that interacts most closely with the course.
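The Net Promoter Score mentioned above has a standard calculation: respondents rate "would you recommend this?" from 0 to 10, and the score is the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'would you recommend?' survey ratings.

    Promoters score 9-10, detractors 0-6, passives 7-8. The result is
    the percentage of promoters minus the percentage of detractors,
    so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # → 30.0
```

The ratings here are made up for illustration; in practice they would come from the post-course survey of learners and instructors.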

Once Reaction is evaluated we can move to our next component, Learning.

Learning

The next component of the model is Learning. The Learning component of Kirkpatrick's Model asks whether the individual portions of the course are producing the learning outcomes expected of it. Do the assessments perform as expected? Do the labs? The outcomes of the assessments and labs are often used to measure the effectiveness of different facets of the course. This component also lets the evaluator look closely at individual content, assessments, items, and options to ensure that they are functioning correctly and providing the information the learner needs. Individual portions are often flagged at this stage so content creators can revisit content and assessments that learners are struggling to master. These modifications can then be evaluated over time to confirm that updates to the course are having a positive effect.
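The item-level flagging described above can be as simple as computing the proportion of correct responses per assessment item and surfacing items that fall below a threshold. A sketch, with a hypothetical threshold of 60% correct:

```python
def flag_weak_items(item_results, threshold=0.6):
    """Flag assessment items whose proportion-correct falls below threshold.

    item_results maps an item id to a list of booleans (True = correct
    response). Returns {item_id: proportion_correct} for items that
    content creators should revisit.
    """
    flagged = {}
    for item_id, outcomes in item_results.items():
        p_correct = sum(outcomes) / len(outcomes)
        if p_correct < threshold:
            flagged[item_id] = round(p_correct, 2)
    return flagged

# Hypothetical response data for two items
results = {
    "q1": [True, True, False, True],    # 0.75 correct — acceptable
    "q2": [False, False, True, False],  # 0.25 correct — flag for review
}
print(flag_weak_items(results))  # → {'q2': 0.25}
```

A real item analysis would likely also look at discrimination (do strong learners get the item right more often than weak learners?), but proportion-correct is a common starting point.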

This brings us to the next component of the model, which determines if the course changed the learner’s behavior.

Behavior

Behavior is the next key component in Kirkpatrick's Model. Did the learning demonstrated by the previous metrics result in a behavior change for the learners? This may be measured with follow-up surveys of both the learner and the learner's subsequent instructors or managers. To what extent has this course changed the way the learner interacts with subsequent courses or materials? If behavior has not changed as expected, which parts of the content are failing expectations?
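One simple way to quantify the follow-up surveys is to compare ratings of the same learners before and after the course and report the average change. A sketch, assuming a 1-5 Likert-style rating from a manager or instructor:

```python
def average_behavior_change(pre, post):
    """Average change in follow-up survey ratings (e.g. 1-5 Likert).

    pre and post are matched lists: pre[i] and post[i] rate the same
    learner before and after the course. A positive result suggests
    the course shifted behavior in the desired direction.
    """
    if not pre or len(pre) != len(post):
        raise ValueError("need matched, non-empty before/after ratings")
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical manager ratings of four learners, before and after
print(average_behavior_change([2, 3, 3, 4], [4, 4, 3, 5]))  # → 1.0
```

The ratings are illustrative; real behavior measurement usually waits weeks or months after the course so that any change has had time to appear.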

Once we are able to identify the changes in behavior, we can consider the results of that behavior.

Results

With the newly changed behaviors the course produced, what are the measurable outcomes for the learning content? Can we establish not just correlation between the course and those outcomes but causation? Have the newly learned content and changed behaviors produced the desired outcomes? This component can be difficult to measure: courses are often taken by learners without a clear idea of how their use of the course can be quantified in the future. Having this component as a target lets content creators define up front what a successful course looks like. Even positive behavior changes and results, however, may not justify the cost of creating, delivering, and maintaining the course.

After considering the results of the course, we can transition to the final portion of the model, Return on Investment.

Return on Investment

Return on Investment (ROI) is a more recently added component of Kirkpatrick's Model. When we sum up all the expenses required to create, deliver, take, and maintain the course, is that sum more or less than the value the course provides to the organization? In other words, this component evaluates metrics to understand whether the course contributed positively or negatively to the financial bottom line as well as to the goals of the organization.
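The underlying arithmetic is the classic ROI formula: net benefit divided by cost, expressed as a percentage. A minimal sketch with made-up figures:

```python
def course_roi(total_benefit, total_cost):
    """Classic ROI: net benefit as a percentage of total cost.

    total_cost should cover creating, delivering, taking, and
    maintaining the course; total_benefit is the value the course
    provides to the organization over the same period.
    """
    if total_cost <= 0:
        raise ValueError("cost must be positive")
    return 100 * (total_benefit - total_cost) / total_cost

# Hypothetical figures: $120k of measured benefit against $80k of cost
print(course_roi(120_000, 80_000))  # → 50.0 (percent)
```

A result above zero means the course returned more value than it cost; the hard part in practice is attributing a dollar value to the benefit side, which is why ROI sits at the top of the model.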

Metrics influencing the Kirkpatrick model

I briefly touched on several metrics that may be used to measure the different components. What other metrics can feed Kirkpatrick's Model? Oftentimes an institution will already have many metrics available to plug into different components of the model. Some possible examples:

Quantitative

  1. Learner overall assessment scores.
  2. Learner scores on specific competency-based topics/skills.
  3. Number of help desk tickets an individual course produces.
  4. Learner performance on additional labs/activities.
  5. Growth in the number of learners attending the course.
  6. Whether the learner has become a more productive member of the organization.

Qualitative

  1. Survey from the instructor teaching the course. What are the course’s strengths and weaknesses as perceived by the instructor?
  2. Survey of learners taking the course.
    1. What is the attitude of the learner towards the course?
    2. Was the learner satisfied?
    3. Was the course relevant to the learner?
  3. Does the learner’s instructor, manager, coach, or evaluator notice a change in behavior?
  4. Would learners and instructors recommend this course to others?

These metrics are by no means a definitive list, but they are a starting point. Institutions often begin with the data they already have and expand the metrics they measure over time. Which metrics are most important? Since each organization is different, each metric should be weighed in the context of the organization using it. Some organizations may put more emphasis on instructor survey results, while others may emphasize student results, depending on the type of course. The complexity arises because most institutions and courses have a unique set of goals, and those goals determine which metrics make the most sense for a given course. Each application of Kirkpatrick's Model can therefore differ slightly depending on the course and the organization delivering it.

Now that you have the evaluation scores, what can you do with them? The prime benefit of evaluating a course is identifying where there is room for improvement. Evaluating the same course over time lets you determine whether changes to the course have improved its outcomes. The value of evaluating a course's performance is not to confirm your own bias about it but to provide a baseline against which to compare feedback and updates, giving content creators a way to measure the effect of different updates to the course. It may be tempting to compare the score of one course to another and declare one course "better" or "worse." That usage should be discouraged: unless the dynamics of the courses are very similar and the same population takes both, it is an apples-to-oranges comparison.

In conclusion, evaluating the effectiveness of a course is an essential process to ensure course iterations are improving the overall experience for both students and instructors. Using Kirkpatrick’s Model allows us to evaluate different facets of a course and provide appropriate recommendations to the content developers to ensure that the course continues to improve over its lifecycle.

Rob Nield

Senior Software and Data Engineer
Robert Nield is a Data Architect at Unicon, Inc., a leading provider of education technology consulting and digital services. Rob joined Unicon in 2009 and has over 20 years of experience with professional software design and systems management, much of which has been spent working on a wide variety of projects for Fortune 100 companies. His experience focuses on developing, maintaining, and evaluating assessment and content authoring systems. He has a passion for educational data analysis and making sense of complex educational systems.