How to Measure Learning ROI



Hello and welcome to today’s Training Industry webinar, How to Measure Learning ROI, sponsored by Axonify. Now, to get us started, I’d like to introduce you to today’s dynamic speaker, Carol Leaman, president and CEO of Axonify, a disruptor in the corporate learning space and the innovator behind the world’s first employee knowledge platform, proven to increase the employee knowledge and performance necessary for achieving targeted business results.

Prior to Axonify Carol was the CEO of PostRank Inc., a social engagement analytics company that she sold to Google in June 2011. She also held CEO positions at several other technology firms including RSS Solutions and Fakespace Systems. Carol is a frequent speaker, a regular contributor to Fortune Magazine and a well-respected thought leader whose articles appear in various learning, business, and technology publications. She also sits on the boards of many organizations, both charitable and for-profit, and advises a variety of high tech firms in Canada’s technology triangle. I know she’s got tons of insightful information, so without any further ado, Carol, the stage is yours.


Carol Leaman: 

Thank you very much Amanda for that very kind introduction and welcome everybody. I was excited to see that over 80% of you have attended very few webinars or no webinars, so hopefully what we’ll convey today you will find of use and you’ll be able to take away a few gold nuggets that you can employ back in your own organizations.

In terms of agenda, I’m going to briefly cover, over the next 50 minutes or so before we leave some time for questions, the state of the nation today: where we are in terms of learning measurement and what most organizations are doing. Then we’re going to talk about three key influencers in the world today that are driving better measurement and really making it available to everybody who wants to start down that path.

Then I’ll give you three very specific examples of organizations that are using various techniques to do better learning measurement … business outcomes, and talk very briefly about what’s next. With that we’ll start.

I think everybody is very familiar with … And I’m just having trouble advancing the slide for some reason.



Try again.


Carol Leaman: 

All right. No. There we go. Thanks Amanda.

I think everybody as I was saying is very familiar with the Kirkpatrick model. It’s been that staple of learning measurement for so many years and really has been the foundation of the way learning and development leaders have thought about the learning that they’re doing and creating and delivering within their organizations and the efficacy of that learning.

I’m not going to go into these four levels in any detail. I know you’re all very familiar with them. But I will say that the holy grail of learning for many, many years has been level four, how do you go beyond reaction and evaluation, basic measures of learning to really knowing what learning is getting you from a business outcome perspective, and really it comes down to how do you measure the impact of learning on behavior change that you’re trying to drive which are the things that in fact get your organization business results.

We’ve been talking about that level four as the holy grail for many years, and unfortunately the ability of organizations to reach level four has really been hampered by lack of technology. We’re going to talk about that a little bit more in a few moments.



Click right on that slide Carol.


Carol Leaman:

I am right on it.



There we go.


Carol Leaman: 

There we go. Okay. Thanks Amanda.

The next thing I wanted to tie to the Kirkpatrick model was something that was published by Bersin back in May of this year. So earlier this year Bersin did a fairly extensive survey of organizations and where they were in terms of learning measurement and came up with what they called the HILO model. HILO stands for High Impact Learning Organizations and it was a study that they did to really figure out where are organizations generally in this journey of learning measurement. What they discovered, and you see the different percentages there attached to each of those levels, what they discovered was that most organizations are in fact still not very far along in that measurement journey.

Level one is really what they called episodic or programmatic organizations. These are the most traditional organizations where they use courses and programs really as the solution to learning. They are not that far along in terms of thinking outside of their internal organization, so they’re really still very much internally focused. As you see a full 35% of organizations that they surveyed are in this state as far as measurement goes, so very much aligned with that Kirkpatrick level one going into level two.

The second level of the HILO model, covering almost 40% of organizations, is what they coined responsive and contextualized. These organizations have moved up the curve somewhat and have really put learners at the center of learning, that learner-centric model that many of us have been hearing about for the last couple of years. They do use analytics at a basic level to provide more personalized, somewhat more customized experiences for the learners, and that’s about as far along as those organizations are in today’s world.

Level three is something that they called continuous and empowering. About 20% of the organizations surveyed have reached this level, and really what they are trying to do is affect behavior through empowerment and affect learning through empowerment. These companies at level three are really just starting to dip their toes in data and particularly performance data, and really what the major evolution away from level two has been is that rather than focusing just on content, they’re creating an environment where employees can start to take charge of their own development. Even further along in terms of focusing on the learner.

Finally, level four, which covers very few organizations, only the 6% you see here. These organizations are what they called anticipatory organizations. They are using data in ways that allow them to predict higher-order business outcomes, and they’re really supporting employees to grow and adapt within their organizations. Unlike the level one organizations, level four organizations are looking for all manner of data outside themselves, so they are very externally focused, not just internally focused, and they use as much data as they can possibly get their hands on to inform learning and to empower the associates, the employees, the people whose behavior you are really trying to change through learning. Again, only 6% of organizations are really there, and that leaves a huge opportunity for most companies to continue to move up this curve.

As you can see from those percentages clearly we can do a whole lot better as learning and development professionals. There is a big opportunity to move the ball up the field and really start to measure things that matter.

In essence, when you summarize why this ball has been able to move up the field, what’s really changed over the last 5, maximum 10, years that’s changed the game for learning and development inside the corporate enterprise? First of all, increasing demands from the business to deliver results have come back to learning and development. It’s become much more prevalent for business leaders and line-of-business owners to look for very specific and direct support from learning and development to help them achieve their business outcomes. That demand, just that reality, is increasingly prevalent.

The second thing is that we’ve had some pretty key advancements in brain science over the last decade that have been used to inform learning and how learners in fact remember most effectively the information that you’re trying to convey.

Then the third thing is advancements in technology. I think all of us know that the use of mobile devices 15 years ago was not very prevalent, specifically at work; it was really nowhere. Advancements in technology generally have allowed a lot of things to happen. I’m going to delve into each of those a little bit more next.

The first one, increasing demands from the business: this chorus from line-of-business owners of “I’m under so much pressure to drive revenue or reduce expenses,” then turning to L&D, has just been growing and growing with time. We did a survey last year where we discovered that there is in fact a significant disconnect between what the business wants and what L&D is delivering. In surveying hundreds of line-of-business leaders and learning and development professionals, we discovered that what the business wants and needs today is very, very specific help in driving their business objectives. What they understand is that employees who are not engaged in delivering results, changing behavior, and supporting those business objectives don’t do the right things to enable the achievement of the objectives.

They’re also looking for more direct connection with their frontline workforce where you have disparate, dispersed employees that are not necessarily tied to a desk, which is a very, very large proportion of workers globally. Most in fact don’t work at a desk. It’s very hard to convey information, understand what people know when they are remote and deskless.

What the employee wants, on the other hand, as it came out of the survey, is learning that is very fun and engaging. You see 85% of learners cited fun and engaging as a key desire. They also want it to be personalized and relevant. Nobody wants to learn from one-size-fits-all content that doesn’t really help them be better employees. Just like we have Google today, we need learning to be accessible anywhere, anytime, in terms of our ability to learn what we need to know.

Unfortunately what we’ve done and as you see most organizations are still doing is traditional, it’s checking the box, it is that static one and done approach with no reinforcement ongoing and we just expect people to remember, and it really is not an experience built for the modern employee.

What happens? Employees don’t do the right things. They don’t know why and they don’t know what to fix. So as learning and development we just create more content thinking that that’s the way that we’re going to help solve business problems. And it isn’t.

The second thing in terms of that collision is advancements in brain science, which have allowed microlearning to bubble up as a very, very effective way to drive knowledge and change behavior in the workplace. There are three key concepts I’m going to touch on really briefly here that go to the delivery of information and what we can measure as learning and development professionals that then impacts the business. The first brain science concept is something called spaced repetition. Then there’s a concept called confidence-based assessment. And then retrieval practice.

Spaced repetition, many of you will know, is Hermann Ebbinghaus’ discovery from over 100 years ago, which says in a nutshell that learning requires information to be repeated or reinforced, and that the most effective way to create long-term memory is to space the repetitions appropriately. I won’t get into exactly how it works, and I would encourage you to do some research on it, but Hermann Ebbinghaus’ discovery was that with three to five repetitions over a period of about 30 days, and with the appropriate spacing, you can drive long-term retention of information to greater than 90%. This flies in the face of the one-and-done concept, which unfortunately results in memory degradation virtually instantly after a training event. As you can see from that dotted line on the slide, the average human being will remember 5-7% of what they learned 30 days earlier when it is not appropriately reinforced. That concept is now being employed in learning, allowing learning leaders to measure what’s being delivered to the learner and when, and what their level of retention of that information is.
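The numbers Carol cites can be sketched in a few lines of Python. This is purely an illustrative model: the exponential forgetting curve, the stability values, and the expanding review schedule below are assumptions for demonstration, not Axonify's actual algorithm.

```python
import math

def retention(days_since_review, stability):
    """Ebbinghaus-style forgetting curve: recall probability decays
    exponentially with time; higher stability means slower forgetting."""
    return math.exp(-days_since_review / stability)

def schedule_reviews(num_repetitions=4, total_days=30):
    """Expanding review intervals across ~30 days (days 1, 3, 10, 30)
    rather than massing all the study into a single event."""
    return [round(total_days ** (i / (num_repetitions - 1)))
            for i in range(num_repetitions)]

# One exposure, no reinforcement: roughly 5% retention after 30 days.
print(round(retention(30, stability=10), 2))   # 0.05

# Assume each successful, well-spaced review multiplies stability, so
# 30-day retention ends up above 90%, matching the 3-5 repetition claim.
stability = 10
for day in schedule_reviews():
    stability *= 2.5
print(round(retention(30, stability), 2))      # 0.93
```

The point of the sketch is only that spacing, not total study time, is what moves 30-day retention from single digits to above 90%.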

The second concept, retrieval practice, is a form of study. What we now know, with many, many studies around this (scientific studies in labs with people, but also now in real life in the corporate environment), is a simple concept: the best way to get a human being to remember anything is to have that individual retrieve the answer from their own brain and regurgitate that answer outwardly.

Rather than simply telling an employee something and hoping they remember it at the point of need, when they need to use it, you get the employee to tell you the correct response to a procedure, to a particular policy, to something conceptual, skills, soft skills. By allowing the employee repeated retrieval of that information, you can drive memory and retention much more effectively than simply telling them something.

The third and final concept is confidence-based assessment. Again, there are many studies around this; what it gets to is that an individual human being’s level of knowledge is in fact tied to their level of confidence, and vice versa. What we’re all trying to do is get employees to act smartly in the workplace and be masters of the information. So where you can tie the individual’s actual knowledge state to their level of confidence in that knowledge, and measure it, you can drive all of your learners, or the vast majority, into something called mastery or smart action. Again, measuring what they know and what their confidence in that knowledge is can in fact change behavior.

The third and final thing is advancements in technology. As I said earlier, it goes without saying that we now all have access anywhere, anytime. There really is no reason why information can’t be personalized to each of us. Technology has allowed real-time capture of information, analysis of it, and instant feedback, and in the last couple of years the concepts of machine learning and artificial intelligence have really started to be applied to learning, to pull all kinds of data together, measure many things, and then start down that path of impacting business results.

In the new world, it really is possible to start to gather information at the individual learner level. It’s no longer just having people in a classroom, conveying information via a fire hose, and not really being able to measure anything beyond reaction and a couple of other basic things. What we can now do is measure interaction online and create a very rich and powerful data set. In doing that, we can link individual knowledge to what they’re actually doing on the job. We now have the ability to tie those two things together very specifically. And when you can feed that data into an advanced statistical model and apply machine learning, you can get some very, very interesting outcomes from it.

That’s the goal: to tie all of those things together and, in applying machine learning and regression, understand how knowledge, confidence, behavior, and performance actually tie to your business outcomes. That holy grail that I mentioned in the beginning, that Kirkpatrick level four, and in the Bersin model level four, those few organizations that are down this path: it’s in fact now possible, through data gathering and applying machine learning and statistical models, to start to look at how what an individual knows ties to the business.

It really is a continuous cycle of optimization. It isn’t just a one-time event. With technology today you can drive individual knowledge and individual interactions with knowledge, observe behaviors and capture those as tied to the individual or a team, apply the machine learning, and then tie those items to the business outcome, and when I get to the examples, you’re going to see how this works in practice. The best part of this is not just the business outcomes: what learning and development professionals can also do is then focus on the content, and understand which content is in fact driving individual performance. So there are some other really interesting outcomes that result from the ability to gather this data and get right down to the most granular level.
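The regression step described here can be illustrated with ordinary least squares on a tiny, entirely hypothetical data set. The knowledge scores, the outcome metric, and the six data points are invented for the sketch; a real deployment would feed far more variables into a proper statistical package.

```python
# Hypothetical per-employee data: average knowledge score (0-1) from
# question results, and a business outcome for the same person (say,
# correctly repriced packages per week). Six points keep it readable.
knowledge = [0.52, 0.61, 0.70, 0.78, 0.85, 0.93]
outcome   = [11,   14,   16,   19,   21,   24]

n = len(knowledge)
mean_x = sum(knowledge) / n
mean_y = sum(outcome) / n

# Ordinary least squares: outcome ~ intercept + slope * knowledge.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(knowledge, outcome))
         / sum((x - mean_x) ** 2 for x in knowledge))
intercept = mean_y - slope * mean_x

# A positive slope quantifies how much the outcome moves per point of
# knowledge gained; forecast the outcome for a learner at a 90% score.
predicted = intercept + slope * 0.90
print(round(slope, 1), round(predicted, 1))    # 31.2 22.7
```

The slope is the "level four" link in miniature: a measurable estimate of how much the business outcome moves as knowledge grows.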

When I talk about individual measurement, here is an example of an actual learner’s learning path. What you’re seeing here is what I mentioned a few minutes ago, confidence tied to knowledge. You’re seeing in those blue dots the different topics that the learner has been delivered and has completed, and you can see on the left-hand side, the confidence axis of that graph, what the level of confidence in each of those topic areas is, and on the lower axis of the graph their success, their actual knowledge, in those topic areas.

Because of the way we can gather data today, you can see, down to the level of the individual and those topics, what they know and don’t know and what that level of confidence is, and then adjust your learning programs, and in fact your individual coaching, to move them up into that top right quadrant, which is mastery. It’s this dispersion of data and this ability to get right down to the granular level that is really allowing learning and development leaders to inform the business and aid the business in their business objectives.

The next part of that is building these individual knowledge profiles. As you’re gathering the data around what people know and don’t know, you have a sense of what their knowledge state is. And if you do it via questioning, you can get right down to the level of the key knowledge point, not just a general topic area: if you use key learning points and create questions, you can understand, right down to that granular level, what the individual knows and doesn’t know.
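A knowledge profile of this kind can be sketched as a simple roll-up of question results per key learning point. The learning points and responses below are hypothetical, and a real platform would also weight recency and confidence; this shows only the core aggregation.

```python
from collections import defaultdict

def build_profile(responses):
    """Roll question-level results up into a per-learning-point profile.
    `responses` is a list of (learning_point, answered_correctly) pairs
    captured as the learner answers questions over time."""
    totals = defaultdict(lambda: [0, 0])     # point -> [correct, attempts]
    for point, correct in responses:
        totals[point][1] += 1
        if correct:
            totals[point][0] += 1
    return {point: correct / attempts
            for point, (correct, attempts) in totals.items()}

# Hypothetical answers on two ladder-safety key learning points.
responses = [
    ("three-point contact", True), ("three-point contact", True),
    ("three-point contact", False),
    ("ladder angle", False), ("ladder angle", False), ("ladder angle", True),
]
profile = build_profile(responses)
print(round(profile["three-point contact"], 2))   # 0.67
print(round(profile["ladder angle"], 2))          # 0.33
```

The granularity is the point: the profile distinguishes the two learning points rather than reporting one blended "ladder safety" score.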

When you tie that to behavior observations, and many organizations are in fact doing this now, you can therefore understand what they know and what in fact they are doing on the job. So is your learning having any impact on behavior in the workplace? When you then ingest historical business outcomes, so for example let’s just take a quick safety example, you are looking to reduce safety incidents of a particular type. You know historically what those have been, say in a specific location in your business. If you also know how people have been doing specific tasks, because you’ve recorded them, and you tie that back to their individual knowledge profile, you get some really, really interesting information. That allows machine learning to start to draw correlations across all of those billions of data points and then get into some really interesting things like predictive analytics.
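The correlation-drawing step can be shown at its smallest scale. The per-location figures below are hypothetical, and a real system would regress many variables at once; this sketch only computes a Pearson correlation between knowledge scores and observed correct behavior.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-location data: average ladder-safety knowledge score
# vs. the share of observations where the ladder was climbed correctly.
knowledge = [0.55, 0.65, 0.72, 0.80, 0.91]
correct_behavior = [0.60, 0.66, 0.75, 0.83, 0.90]
print(round(pearson(knowledge, correct_behavior), 2))   # 0.99
```

A strong correlation like this is what justifies the next step in the text: ingesting historical incident data and moving from correlation to prediction.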

Having the individual knowledge tied to the behavior observations, understanding what those business outcomes are, really allows learning and development to partner with the business to start to understand the full impact of what needs to change in order to achieve the business outcomes. You get that insight and you get better performance and business results.

We’re going to look at an example now. This is a very large global organization. As I mentioned a moment ago, this particular organization had an issue with safety. We’re going to just walk through a really quick example of what they looked at in order to determine how they were going to have L&D partner with the business and help the business change the business outcome, which was those safety incidents.

The first thing they did was define the safety targets that they were looking to achieve. In this particular business, it’s just the case that they are going to have safety incidents throughout the year. So they set targets for those, hopefully meet those targets, and employ many different things in order to achieve them. The first thing they wanted to look at was: what is our expectation with respect to safety incidents, so what is the business outcome we’re looking to achieve?

In this particular case they had a lot of ladder safety problems. What they were looking for in terms of behavior was a variety of things that informed ladder safety accidents, this being one: was the ladder climbed in the proper way? So what they did was record behavior observations randomly. They assigned people in their facility to walk around and have a look at whether people were in fact using ladders correctly, recording very quickly yes or no. In doing that, and looking at those behavior observations on specific individuals, they were able to gather very granular data that they could tie together.

They also then looked at: what do we need people to know in order to do the right thing? To have the behavior change we’re expecting, what is the knowledge people need to have? In this particular case, it was knowledge of the proper techniques to climb a ladder. So how do we teach them the way to do that? Then, based on what we think they need to know, what is the content?

This is an example of a question that this organization used to drive that knowledge in terms of what they need to know. They would ask very specific questions of their learners on the topic of ladder safety and record what they knew or didn’t know in that topic area. As you can see, the learner gets instant feedback so that they can better remediate and, through repetition, get a better opportunity to get it correct the second time.

In terms of measuring, the organization started to measure topic by topic a baseline knowledge in each of the topic areas that were relevant to the business outcomes and how that knowledge was growing in each topic area person by person. It was based on all of that that they went back and then designed their content to reinforce all of those steps along the way to make sure that they were doing the right things.

Historically, as learning and development professionals, we’ve often started with the content. We always want to impact the business outcomes, but there has been a lot missing in the middle in terms of the ability to really understand what people know and track what they’re doing. So we’ve really been operating on hope in terms of creating content.



Your audio. Are you there?


Carol Leaman: 




Okay. There we go.


Carol Leaman: 

Oh, sorry. Hopefully we didn’t miss too much there. Essentially, what we want to get down to as L&D professionals is not just creating a lot of content. We want to create great content. We want to create the content that is in fact having an impact on what people know and do on the job to achieve the business outcomes at the end of the day for those line-of-business leaders. When you can gather all of that information along the way and apply that machine learning, you can actually understand whether the content you’ve created is having an impact or not.

The organization I just spoke about discovered they had created a lot of content in the area of safety. What they discovered was that their learners had a high degree of knowledge around ladder safety, and so they were getting the questions correct in most cases. However, they weren’t observing the right behavior change and they weren’t impacting the business results. Through the application of that machine learning, they discovered that in fact the content they had created was not the right content to impact that particular business outcome. So they went back, relooked at the content as it related to ladder safety, and recreated the questions to be much more effective in terms of driving behavior change.

Having that granular level of data all the way along really helps you pinpoint what is going on with your learner population. Is it a problem with content? Is it a problem with knowledge? We’ve had organizations discover that there are literacy issues, for example, with their learner populations. Is it something altogether different? Do you have behavior issues in a particular location for management reasons? Having a vast amount of data, which is entirely possible today, gives you the opportunity to measure, then pinpoint, and be most effective in terms of where you put your learning dollars and your learning effort.

What’s next in terms of the learning equation? Well, Kirkpatrick stopped at level four. Now the world has evolved, and with all of this data that we can collect, from reaction, to what people learned, to measuring behavior change and tying it to specific business outcomes, we’re now in a world where we can in fact start to predict human behavior based on all of those previous data points collected together and regressed against these statistical models. That finally gives learning and development the holy grail, so that they can start to really, really impact business outcomes and partner with a line of business to achieve, whether it’s revenue growth or expense reduction, whatever the case may be in terms of your most important strategic imperatives.

You can start to look at where are we down that curve. Are we in fact having an impact on the business? Where are the opportunities to improve? And look at who in terms of your learner population is doing what? What are the key things that are driving that business outcome? Is it participation with learning? Is it some extra training that people are doing? Is it specific topic areas? You can start to get very, very granular in terms of understanding how to most effectively drive that revenue or reduce expenses.

Here are a few other very specific examples of what some organizations have done, and done in a way that has got them to that fifth level of predictive analytics, using all of this data to measure every single thing that happens in the learning journey, and then starting to adjust and optimize and drive those business impacts.

This particular company is, as you see here, a global logistics company that has tens of thousands of employees around the globe. They had a very strategic focus on decreasing shipping errors. When I say shipping errors, these are errors with pricing of packages that are being shipped around the world. Something I learned in this particular example is that when we have a shipping company pick up a package in our offices, we, as organizations, self-report what the weight of the packages is, and many companies unfortunately under-report that weight. The shipping company has inspectors who inspect those packages randomly to try to figure out where things have been under-weighted, so they can reprice them and recapture some of that lost revenue. There are particular things that these inspectors need to look for.

By teaching the inspectors how to look for things appropriately, what the signs and signals are in order to choose the highest-probability items that should probably be priced differently, they used these concepts of spaced repetition, retrieval practice, and confidence-based assessment, with specific knowledge delivered to the individual for their job title and adjusted based on what they knew or didn’t know, and they measured all along the way what their levels of knowledge were, and then what they did on the job in terms of choosing those packages and getting them reweighed. And very, very quickly they found that the inspectors who were highly engaged with learning frequently throughout the week had extreme knowledge lifts in topics directly relevant to driving that revenue, and a significant impact, down to the level of the individual, in terms of the additional revenue that they were able to get.

That allowed them to start to use the data points of what people knew and didn’t know, how frequently they were participating with learning, things like that to start to predict their increase in revenue based on the other behaviors exhibited by those learners. So a very significant drive in revenue in that example.

The second example is a very large retailer, and they were focused on increasing overall sales in their stores. They had content all over the map, lots and lots of content, focused on onboarding and company values, and they have been able to predict sales targets based on employee knowledge across many, many of those topics. Whether it was product information, how to greet customers appropriately, all of the things that are going to make a happy customer and get that customer to increase their basket size, they were able to pinpoint and predict, store by store, revenue based on the learning activities and results that their associates were performing day in and day out.

Then finally, a large telecom company decided to try this learning method and data gathering across the sales team in their retail stores as well, and they had a particular objective of increasing sales of peripheral devices, the things that we buy to go along with our phones, so earbuds, phone cases, things that in fact generate a significant amount of margin for those companies. What they were able to do was start to predict sales of those products based on knowledge growth in the product topic areas, and again on the participation of the individuals in learning and their success in those topic areas. Not only were the associates selling more of those products, so the behavior change was quite evident as tied to their levels of participation; they were also able to predict which stores and which products were going to sell the most.

Again, having the granular data has allowed organizations to really get specific in terms of the targeted business result they’re trying to achieve and then how to optimize and hone in on the key variables that are going to drive that as far as learning goes.

In conclusion, there are three critical learning metrics. The first is that individualized knowledge that we now have the ability to collect; technology has enabled us to do that. The second is confidence: where you can tie that knowledge to the individual’s level of confidence, you can also drive the business outcome. Your goal here is to increase confidence in that knowledge.

What we know is that employees who are highly confident in their knowledge but in fact don’t know very much are going to be risk takers in your business, and you need to understand what that level of knowledge is and how confident they feel about it to change that behavior. Similarly, you may have employees who have high degrees of knowledge but very little confidence in that knowledge. Those employees tend not to exhibit the right behaviors. They’ll hang back and be doubters. They don’t act in the ways that you need them to. So understanding confidence, even when the employee self-assesses their confidence, gives you loads of data with which to impact the business outcome.
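The risk-taker and doubter cases Carol describes can be sketched as a simple quadrant classifier. The 0.75 threshold and the quadrant labels are illustrative assumptions, not a published scale.

```python
def classify(knowledge, confidence, threshold=0.75):
    """Place a learner in a knowledge/confidence quadrant.
    The 0.75 cut-off is an illustrative assumption, not a standard."""
    high_k = knowledge >= threshold
    high_c = confidence >= threshold
    if high_k and high_c:
        return "mastery"       # smart action: knows it and trusts it
    if high_c:
        return "risk taker"    # confident but wrong: acts on bad info
    if high_k:
        return "doubter"       # knows it but hangs back
    return "novice"            # needs both training and reinforcement

print(classify(knowledge=0.9, confidence=0.9))   # mastery
print(classify(knowledge=0.4, confidence=0.9))   # risk taker
print(classify(knowledge=0.9, confidence=0.3))   # doubter
```

Coaching differs by quadrant: risk takers need corrective knowledge, doubters need confidence-building reinforcement, and only the mastery quadrant is left alone.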

Then finally, workplace application: being able to measure what people are doing that we want them to do, and what things they’re doing incorrectly that we need to help them with or coach them around. Having those observations captured and tied to confidence and to actual knowledge gives you those three building blocks that allow you to start applying the statistical models and machine learning, and gives you lots and lots of additional information to hone your programs.

In fact, I would encourage all of the listeners to look up something called the Phillips evaluation model. It goes a step beyond and really looks at the next level, which is the return on investment of learning. So essentially: did the monetary value of the results that you got from learning exceed the cost of the programs that you created and delivered? A very interesting read for those of you who have an interest in that.
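The ROI question the Phillips model asks reduces to a single formula: net program benefits as a percentage of program cost. The dollar figures below are hypothetical.

```python
def phillips_roi(monetary_benefits, program_costs):
    """Phillips Level 5 ROI: net program benefits as a percentage of cost.
    ROI (%) = ((benefits - costs) / costs) * 100."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Hypothetical numbers: a program that cost $100,000 produced $150,000
# in measured monetary benefit (e.g. recaptured shipping revenue).
print(phillips_roi(150_000, 100_000))   # 50.0, i.e. a 50% return
```

A positive result means the program returned more than it cost; the hard part, as the rest of this webinar argues, is isolating and monetizing the benefits, not the arithmetic.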

I’ll end with if you are interested in learning more, we’re more than happy to talk to you about how you can achieve this in an automated software way. I’ll stop there and turn it back over to you Amanda and see if we have any questions.



Yes, we have tons of questions. Let’s start out with the one that’s been asked most often which is what is machine learning?


Carol Leaman:

Great question. I apologize; I should’ve defined that earlier. Machine learning is something that’s emerged in the last two years or so, and in the last year most particularly. There are lots of companies that sell business intelligence tools that let you hook lots of disparate data into them; the tools then apply statistical models, such as regression analysis, to all of your data points and automatically extract which variables are impacting the others most significantly.

I’m just going to give you a very specific example of that. We at Axonify collect millions and millions of data points every single day that are around on our particular platform what do people know and don’t know question by questions, what games are they playing because they have an opportunity to gamify the experience, what behaviors are they exhibiting. We have an organization that is capturing a million behavior observations a month on the platform that we tie back to what the people knew or didn’t know.

We take all of that data, and it’s a massive database, plug it into a business intelligence tool, and one of the things we look at is which device learners access with. What we have discovered is that the biggest driver of learning participation is the device: you will get the most learners and the most participation if you allow a learner to use a mobile device.

We simply fed all of the data into the business intelligence tool, which has a built-in statistical model that can extract the factors that are most impactful. We then encourage our customers to let people use mobile devices for learning, because what we know is that if you do that, you will get the highest level of participation.

We also know, for example, which games drive the most participation when you enable them. There are all kinds of things like that you can extract, and you don’t have to build these statistical models yourself; there are a lot of business intelligence tools out there that you can use.



Excellent. This question comes from Malia. She says, “Do you have any soft skills examples and how they tie in to motivation and competence? These are very procedural, tech-type training examples. Anything on leadership competency beyond knowledge acquisition?”


Carol Leaman: 

Yeah. There are lots of organizations that use exactly the same techniques to deliver soft skills training, so leadership training, harassment and discrimination training; it really runs the gamut. What it comes down to is that people do what they know. If you can drive knowledge into somebody’s head very effectively, you raise their chances of actually doing the right things, and it works exactly the same way regardless of whether it’s driving a forklift or managing people effectively.

In terms of measuring the outcomes in a soft skills scenario, it really does require the behavior observation. Whether it’s a supervisor or a 360-type situation, you need to define what the appropriate behaviors are and have somebody assigned to assess whether those behaviors are taking place. In terms of the effectiveness of the learning itself, it is as effective for soft skills as for hard skills, and then measurement of the soft skills really comes down to behavior observations to understand whether those things are in fact having the business impact.



All right. This is a two-part question. The first part comes from Scott. He says, “In the examples you gave, did you tie the change in behavior to reduction in costs or increases in revenue?” Let’s answer that one first and then I’ll go to the next one.


Carol Leaman: 

Yeah, absolutely. That example of capturing a million behavior observations a month is in a warehouse. The behavior observations, tied back to what the individuals knew or did not know, automatically adjusted the content that was being delivered to them, and that organization has had a greater than 50% reduction in safety incidents, which translates, as you can imagine, to tens of millions of dollars. This has been proven over and over again; that’s just the most dramatic example, but lots and lots of times, behavior tied to knowledge, with the appropriate knowledge then reinforced, has a very significant impact on the business outcomes you’re targeting.



This question was from Sharine, who said, “I would like to add to Scott’s question, as we are an organization where almost everyone has an individual duty so there is no one type of employee. Any tips for applying when no one needs the same skills?”


Carol Leaman: 

That’s a bit trickier. I’m imagining that must be a smaller organization. You have to take … Certainly technology today has allowed you and can allow you, I mean Axonify is one example of that, you can deliver very specific content to each individual, so it doesn’t matter how many you have. It just comes down to really having the measurement on the back end, so specifically what do you need people to know, and unfortunately and through lack of having tools available L8D has taken the approach historically we’re just going to fire those people with absolutely everything we think somebody might need to know possibly to do a job. Instead of focusing on what are the specific things they need to know about the business.

Where you have people doing very different jobs, you do have to train them at a very individual level, and then observe, coach, and mentor them at a very individual level. But technology allows you to create those very individualized learning programs, which you can repurpose as people in the business change. So technology is out there to help you, I guess is what I’m trying to say.



All right. This next question comes from July. She says, “What if there are not enough resources to conduct this type of assessment or measurement observation? What do you suggest? Are there any shortcuts? Any other tools?”


Carol Leaman:

Yeah, so I think that’s a very common problem. The transition from traditional means to the modern learning approach, which is really what this is, is a transition. What I would suggest is that you start by really defining what that path looks like. It’s going to be impossible to bite it all off on day one and wholesale change everything. For us it starts with picking one objective: we want to increase sales of this, or we want to reduce this expense area. Then get really specific: what are the things we need people to do? What are the 10 behaviors? Define what those are, then target certain individuals and start small with the key things they need to know to change those behaviors and have them doing those 10 right things. Almost run a little experiment to start building your own database of information, if you will, and you’ll get a lot of insight out of that from the outset.

I would say start small. Don’t pick 10 business imperatives. Don’t try to change it for 1,000 people. Pick a group where you can run a little internal experiment. Often you’ll find a champion for that internally who’s interested in moving to the more modern approach and starting to gather that data. You don’t have to go wholesale day one.



All right. This next question I’ve gotten from a bunch of different people today, but essentially what guidance would you give when someone has business results but it’s having a hard time attributing those results solely to the training?


Carol Leaman: 

That’s a really, really good question. We get that a lot. How do we know that this is the thing that changed behavior and got the business outcome? Really that’s where the machine learning comes in. Machine learning is not judgmental and using intuition. Machine learning is where you can get really specific about what is in fact impacting the business outcome. To the extent you can even do that in a smaller experimental way, it is statistical and it is proof, and what it does is it takes away all of the cloudy stuff around, well, so as an example in safety, well, we employ 15 different safety initiatives. How do we know that this one thing is the thing that’s reducing errors and/or incidents? Again, machine learning can prove specifically what are the things that are having an impact and what are the things that aren’t, and dispel I guess a lot of the noise and actually get you focused.

Back to the earlier question: we’ve had learning and development folks stop doing things altogether, stop training on specific things altogether, because that content is having no impact on the business outcomes. That’s what technology has allowed us to do today: get real and specific about what is in fact impacting the business.



All right. I’ve got this question from Martial who says, “How does predictive analytics account for external factors like sales promotions, new competitors, et cetera, that can skew results beyond what learning can address?”


Carol Leaman:

Yes, so the more of those data points you can pull into the business intelligence tool, the more you can discern which things are having the most impact; the bigger the data set, the more accurate the results. But even at that, and we’ve discovered this many times over, you’d be surprised, we have been surprised, at the gigantic impact that an individual’s true knowledge has on the business outcome. People guess when they don’t know. People don’t necessarily do anything when they don’t know; they don’t do the right things. Being able to isolate all the factors is ideal, but even in an ideal world nobody has all of those factors to isolate, and using the statistical models you can get a very clear picture of the things you are measuring and their impact on the outcome.



All right. Well, that is all the time we have for today’s event. Carol, thank you so much for being here and sharing all of your wisdom. Something tells me that your team at Axonify will be getting back to all of these unanswered questions we have in our Q&A pod, but you shared a ton of information, despite some of those unanswered questions, so thank you so much for taking time out of your day to be here with us.


Carol Leaman: 

Thank you Amanda.