Transcript

Measuring What Matters


Shum:

Okay, very good afternoon, good morning, and good evening ladies and gentlemen, wherever you are in the world. Thank you so much for taking the time to be with us for today’s special webinar, titled Measuring What Matters: Moving Beyond Basic LMS Metrics to an Outcome-Based Measurement Model. My name is Shum and I’m the Digital Marketing Manager here at Axonify, and I’m very excited to serve as your host and moderator for today’s session.

Before we get things underway, I just wanted to go over a few housekeeping details with everyone. First and foremost, this webinar is being recorded, and you will have a copy of the recording as well as the slides sent to you as soon as the webinar is over. If at any time during the webinar you lose audio, or you’re experiencing any technical issues, please don’t worry. The recording will capture everything perfectly well. I will be moderating a Q&A period at the end of today’s session. However, if you have any questions throughout the session, please feel free to ask them using the Q&A window or the chat window in the WebEx console. If you wish to participate on Twitter, feel free to use the hashtag #LearningImpact in your tweets and mention Axonify.

With that being said folks, I would love to introduce you to today’s speakers. We have a very exciting lineup of speakers. First up, we’re gonna hear from Dani Johnson, who is Vice President of Learning and Development Research at Bersin by Deloitte. And then following Dani, we’re gonna hear from Axonify CEO Carol Leaman. Once again, we are very excited to have them both share their perspectives on the all-important topic of learning measurement today. And Dani, we’re gonna start with you. Thanks again for being with us. The floor is now yours.

 

Dani Johnson: 

Thanks, Shum. I’m very happy to be here. We’ve done a couple of webinars with Axonify and I’m always thrilled to do it, because I’ve found them to be really useful to me as well. Axonify always has some great examples that tend to do a great job at bringing some of our research to life. First, though, let me do a quick introduction, as I am obligated to do. For those of you who don’t know who Bersin by Deloitte is, we are a global provider of leading practices, trends, and benchmarking research in talent management, learning, and strategic HR. We cover the human capital field pretty broadly, and if any of you have any questions about how you can join or what our offerings are, please feel free to contact me afterwards.

Today we wanna cover basically three things. The first one is, we recently completed the high-impact learning organization study; some of you may be familiar with that longstanding study. In that study, there were some great bits of data about analytics and measurement, and how that may be changing, particularly for those really mature organizations. I’ll share some of that data by first talking about the model itself, giving you a grounding for some of the other stuff that I’ll share. And then I’ll share four major measurement or analytics findings that we found just by looking at the initial data. I think we’ll definitely be writing and speaking about this quite a bit more in the future, but these are the big, easy-to-see things that we found in that research. And then I’ll pass it off to Carol Leaman, who will share some examples and a case study of an organization that seems to be doing this really well.

So, just to start, let’s take a look at the new model. We published the new model in May of 2017, so it’s brand new. And I wanna start with this model because it’ll provide a grounding or a foundation for some of the other things, and it’ll help make sense of some of those things. So here’s the model. Like our other models, level one, the least mature type of organization, is found at the bottom of that graphic, and level four, which is the most mature, is found in green at the top of that model. When we say an organization is mature, we mean that the organization is actually doing things that are predictive of higher business outcomes and better learner experience outcomes than those that are not. The higher up the model your organization falls, the more mature it is.

So, just very briefly, I wanna walk through each of the levels of this model. Level one on the bottom we call Episodic and Programmatic. Organizations that are most traditional usually fall here. They almost always default to things like courses and programs when they’re trying to effect change in their organization, and they focus pretty heavily on building the knowledge and skills that employees need to do their jobs in their current role.

Level two organizations try to effect change through creating employee-centric learning. There’s a whole conversation going on out there right now about putting the learner at the center of their own development and personalizing learning; that’s where these organizations fall. They focus on that learner experience first, and then they build the systems and processes that provide the opportunity for personalization of that experience.

Level three organizations effect change through empowerment. Between level two and level three, there seems to be sort of a break. Organizations at level three take a step back and say, “Wait. Maybe the best thing that I can do is just get out of the way of the learners in my organization,” and so they are all about empowering the individual to drive their own development. They focus on creating the conditions for development, rather than on the content for development. That’s a huge mind shift between level two and level three, and the difference in the types of data and analytics that you use at level three versus level two is pretty stark.

And then finally, level four we named Anticipatory and Flow. These organizations effect change by actually anticipating and augmenting what employees need to do their jobs. They default to using work as development, and they focus on supporting employees and their ability to evolve and adapt. So, it’s not just the knowledge and skills you need for today, but the knowledge and skills that the organization is gonna need in the future. Not surprisingly, an organization’s approach to measurement and analytics is going to depend on its level of maturity. And I wanna quickly walk through what the approaches would look like at some of these different levels.

So level one, in an episodic and programmatic type of organization, there’s a big focus on data and reports that can easily be generated from an LMS, or from some other existing system, usually the system of record. And there is a heavy focus on completion data and assessments, and sort of “doing” information: what are our learners doing? Rather than what are they learning, what are they getting out of it, or how are they improving. So that’s level one.

At level two, data and analytics get a little bit more mature. The organization begins to gather data from several sources, so they’re not just looking at the LMS. Because they’re expanding their learning technology stack, they’re gathering information from several of those systems, and they might be dipping their toes into the information that’s available in the organization as well. And because they have more data, they’re providing more holistic reports that give a better picture of development and needs within the organization. So they have a better handle on what their learners actually need, and that is because of the big focus on the learner-centric aspect of level two. And their algorithms and their analytics are providing a more personalized experience for employees. So at level one, we’re worried about knowledge and skills across the entire organization, but when an organization gets to level two, they want to actually use that data to create a personalized experience for the individual. That data is not just being collected and reported; it’s actually being used in some way to help personalize those experiences.

Level three organizations are continuous and empowering, as I mentioned. This is where L&D sort of gets out of the way. They’re using multiple types of data, learning and otherwise, so they’re really digging into what’s available in the organization and using the things that they have available in order to inform how they are approaching learning and development. They’re also using more data than level one or level two organizations, performance data and learning data, so between level two and level three they’re not just making the jump from push to pull. They’re also making the jump to asking whether there’s a connection between work and learning: what kinds of data help us understand if the work is getting better as well? And they use that data, performance and learning, to provide a really personalized, employee-driven experience for individuals, to get them the best information, the best data, and the best experiences that they possibly can.

And then at level four, there’s another big shift. The organization, the L&D departments, and those responsible for employee development are using all of the data that’s available to inform employee development and work improvement. So whereas at level three they’re sort of dipping their toes into “Hey, our job is to improve performance,” at level four they fully embrace that. They’re not just focusing on learning as the end game, they’re actually focusing on performance as the end game. There’s also a focus on using data to determine the appropriate risks that we are going to allow employees to take, because we know that with risk taking there’s lots of learning. And they also look at mistakes as data to inform some of where the organization’s moving, as well as what didn’t work, so they can correct some of the internal things that are happening as well.

So those are the four major buckets in how data and analytics at a very high level may appear. The biggest trend that we saw in the data is that at level one, L&D functions tend to be fairly internally focused. They focus on the things that will help them, and on the things that are important to them. Whereas as you move up that maturity model, L&D tends to look beyond its own data and becomes very externally focused. So they’re still collecting information on themselves and how they can do things better, but they’re adding all kinds of different things in there as well to make sure that whatever they’re doing is beneficial to the organization and whatever data they’re providing is also beneficial to the organization. So we’ll talk a little bit more about that.

So, as we’ve been crunching through this data set: this data set was very extensive. We had a great response, about 1,200 usable responses, which is kind of amazing, from four major regions of the world, from managers and L&D leaders in all different industries, so it was really broad, and it was a really rich data set. And from that data set, we saw four initial findings when it came to data and analytics. I wanna walk through those very quickly.

The first finding is that mature organizations use data for better design. This one seems like it would be an obvious thing for L&D organizations to do, but as you can see by that chart, we found that the organizations that are doing this really well, those level four organizations, were doing more than double what level three organizations are doing when it comes to measuring how development experiences are being improved. So they’re not just measuring how well somebody rates it, they’re actually measuring their continuous improvement so that they can continue to do things that are going to help the organization.

The other little piece of data that I wanna share is that level four organizations are much better at incorporating feedback into the design of future development activities. So not only are they collecting information on how well they’re improving, they’re incorporating feedback into the design of future development activities, to the tune of 82% of these organizations saying that they’re effective or very effective at doing this, whereas only 55% of level three organizations are doing the same thing. So there seems to be a fairly big gap between level three and level four organizations when it comes to using data to improve the design of the things that L&D is actually doing.

Another piece of data that I wanted to share with you, sort of as an example of how data may be affecting the design decisions of organizations: just take a look at this chart, and I know the writing may be a little bit small, but it shows the percentage of companies utilizing certain types of technologies. If you take this at face value and utilize only the benchmarks to determine what you as an organization are gonna do, you see that the top three most commonly used technologies are e-learning courses, LMSs, and webinars. These are pretty standard things in the L&D world. If you’re using these things, you’re gonna feel pretty comfortable that you’re on the right track and sort of continue down the same road. But if you dig into the data a little bit, it shows a few more insights.

This chart, and apologies, these Xs didn’t come out quite where they should have, but this chart shows a different story. Level four organizations, as we saw on the previous slides, gather more data and use it to improve design. The chart on the right takes the same data and compares level one organizations, which are most likely to use the common technologies, to level four organizations, which are most likely paying attention to the data in their organizations. And in this case, it shows that level four organizations are using search software as a learning technology many times more, 49 times more, than level one organizations. They’re also dabbling in VR and AI, to the tune of 43 times and 34 times more than level one organizations. And the one that kind of completely surprised me is that they’re using mobile apps 32 times more than level one organizations when it comes to delivery methods or technologies used for development.

So data is literally changing the delivery methods that these organizations are using. They have a much better handle on what their learners want and what their learners need, and they’re making those things a little bit better. One chart that I took out, which I kind of wish I would have left in now, is a chart that I used last week in a webinar, which showed the NPS, or the Net Promoter Scores, of level one organizations versus level four organizations. Level one organizations, when you ask the managers, have a negative 33 Net Promoter Score, which is abysmal. But the real interesting thing is the difference between what L&D people think and what non-L&D people think at level one. There’s a 42-point gap, whereas if you look at level four, there’s only a 13-point gap. Which tells us that level four organizations are much better at understanding the needs of the organization and meeting those, even if there’s still a slight gap, than level one, two, or three organizations.

So the second finding that I wanted to go over really quickly, there we go, is that performance data is playing a larger role in employee development. One of the biggest findings of that study was that learning is work and work is learning, and that organizations that are able to combine those two things are doing much better than organizations that are still keeping learning completely separate. The chart on the screen shows several ways that performance data is collected, and I’ve broken it down by maturity level. You can see that level three and level four organizations are roughly the same on most of these, and they are higher than their level one and level two counterparts put together when it comes to collecting feedback, such as customer and client feedback, peer feedback, self evaluations, and the use of talent and work management systems to get some of that performance data.

On average, more mature organizations collected more performance data from more sources, and they collected it more often than less mature organizations. And this matters to L&D because the new model says that learning is work and work is learning, and those organizations that integrate the two perform better. So our job is not just to collect the kinds of data that we’ve been collecting in the past, which are things like what it takes to create a course and how many people liked it. Our job is expanding to include different types of data, and performance data is one of the big ones that we found. The end goal of L&D is no longer learning per se, it’s performance, and the types of data that we collect will help us improve performance, not just ensure that people learn.

The third finding is that mature organizations are using a wider variety of data. I found this data particularly interesting. We asked L&D professionals what types of data within the organization they’re paying attention to, and this is how the data came out. You’ll see that worker feedback is first, line manager feedback is second, and on down the line to automated data collection and analysis at three percent down there at the bottom. When I first looked at this, it wasn’t really surprising at all, because of our history and because we know that worker feedback and line manager feedback are things that L&D organizations would regularly collect. I think what’s interesting, though, is when you look again at level four versus level one: level four organizations are paying more attention to data gathered outside of L&D than less mature organizations are.

So for example, level four organizations have automated data collection and analysis; they’re using it 10 times more than level one organizations, indicating that it’s probably a priority for them. They’re also paying more attention to data from other HC systems, resources, and usage of development methods, which basically means paying attention to how much people are using different development methods within your organization, so you can tweak them or sunset them, or put new ones in if they’re not working well. They’re also paying more attention to talent metrics, key performance indicators, and things like that. This was a really interesting finding when you took a look at this data across the organizations, because level one organizations are paying more attention to those most common things that we’ve been collecting for years, so worker feedback, line manager feedback, the things that help you create an ROI to sort of justify your existence as an L&D department.

The L&D departments that I’ve been talking to that fall in that level four have just a completely different mindset about what they measure. They align what they measure to what the organization is measuring, or what the organization is trying to move. And so, if there’s a turnover problem, then they’re measuring turnover: how can we as L&D help with that turnover problem? Or if we need more throughput, how can we help ensure that throughput is going up? So it’s much less a focus on did they like it and did they learn something, and much more on how is what we’re doing actually affecting the organization as a whole, and are we helping move those KPIs and those performance metrics in the way that we need to?

That said, there’s just one other interesting point on this particular slide. Level four organizations are actually also two times more likely to use the basics, worker feedback and line manager feedback, which means that level one organizations just aren’t doing it when it comes to collecting data. A lot of them don’t even use data, and are instead approaching every project like a fire drill: somebody said we needed this, so it gets created, versus using the data available to make better decisions.

And that is actually backed up by a chart that I think we discussed the last time we did a webinar with Axonify. This data was collected in 2015, and it was generated from our last learning measurement study. This chart shows the metrics that L&D collects or generates, plotted against how valuable business leaders think they are. So you’ll see that those that are most valuable to business leaders are in the blue at the top, and those that we track most often are down there in green at the bottom. Almost everybody collects learner satisfaction feedback, and they collect data on enrollment statistics, starts, completions, things like that. We’re not saying that those are bad things. We’re just saying, if we’re going to join the rest of the organization, as part of that organization and its learning, and if learning is work and work is learning, and L&D is much more integrated with the rest of the organization, then we need to start caring about the metrics and the analytics that are important to the entire organization as well.

And then the final finding, let’s see if this works. The final finding was that data collection and analytics aren’t just things that happen at the end of a course. We usually use courses or initiatives as a trigger to start collecting data. But when we asked how good organizations are at continuously sensing, or understanding what’s going on and what’s affecting it, these were the results. 55% of level four organizations said that they were either effective or very effective at continuously sensing the organization, which basically means going beyond the L&D metrics and using the data that’s available in the organization to figure out what is going on.

Are our KPIs in range? What do talent shifts look like in the next little while? Do we have a hole that we’re gonna need to fill in the next little while? How are people reacting to the training that we’re doing? What’s the engagement like in the organization? So continuously sensing what’s going on in the organization, and providing a point of view or actively taking part in solving problems where they occur, based on data. They tend to just be better at knowing what’s going on, and then adapting based on that data. It means that they’re paying attention to much more than just their own needs.

And that is actually illustrated by this graphic we put together a little while ago, which we’ve entitled Holistic Learning Measurement. This data isn’t telling us to quit measuring some of the more traditional stuff. We need that stuff to help us function more efficiently as an L&D department, so we should keep collecting that data. But we shouldn’t assume that just because that data’s important to us, it’s gonna be important to the rest of the organization. A learning measurement strategy should be holistic. We should take into account four stakeholders, not just one. The L&D function should be continuously sensing what the organization needs, and gathering, assessing, and adapting development strategies where necessary. We should be paying attention to effectiveness. We should include things like satisfaction and adoption of new behaviors.

But we should also be paying attention to how well aligned we are with the organization, and a lot of times that data doesn’t come from surveys. Sometimes it comes from conversations with individuals or with upline managers in the field. So understanding what’s going on, this idea of continuous sensing, doesn’t just have to do with a survey that we’re sending out or looking at the data that is readily available. Sometimes it’s shoe leather. Sometimes it’s getting out and talking to people, and seeing where we are and where we could align better. And we should also be paying attention to those outcomes, as I mentioned, because organizations that see learning as work and work as learning are doing much better. The data that we share with the rest of the organization can either build credibility or kill credibility, as I think we’re all aware. And if they’re all talking about throughput or turnover, for example, and we’re talking about learning satisfaction, there’s a huge disconnect there.

So we should understand how our efforts are affecting performance in key areas, and we should be able to speak to it and adapt as necessary. So those are the four findings. I’ve not given very many examples, essentially I don’t think I’ve given any examples, because I know Carol has some great stories to share. With that I will pass it off to her, and then we’ll take questions at the end.

 

Carol Leaman: 

Thank you very much, Dani. That was awesome. I’m sure everybody took away quite a few things from that data, and it goes without saying that learning and development professionals, and business leaders in fact, have been looking for the Holy Grail of measurement over the last number of years. How do we actually measure what people know, then what they do, and how that impacts business outcomes? Fortunately, the world has moved on and there is now lots of technology that enables all of the data collection that Dani just spoke about.

So with that, I will turn to the, sorry, I don’t know what just happened there, but my screen went away, so I’m gonna, oops. Shum, I’m not sure what happened there.

 

Shum:

Hey, Carol. If you just use the arrows at the top of the slides, or if you click on the slides and then use the arrow keys on your keyboard, you should be able to move the slides forward.

 

Carol Leaman: 

Okay. I’m not sure what you mean by the arrows at the top.

 

Shum:   

Take a quick look here. I’m just gonna take back control from you for a second here.

 

Carol Leaman: 

Yep, sorry about that folks.

 

Shum:   

No problem. Okay.

 

Carol Leaman: 

All right.

 

Shum: 

So you should see the Measurement in Practice.

 

Carol Leaman: 

There we go, yes. Got it. Got it. Got it.

 

Shum:     

And I’ll transfer control back over to you. And in a second here, you should be able to keep going.

 

Carol Leaman: 

Super, sorry about that folks. So yes, back to the show. It is a very, very competitive world out there today, and as all of you know, there’s a huge … Pick the best people, train the best people, have them perform at peak. As Dani mentioned earlier, performance really is the end game in all of this. And determining the appropriate risks and adjusting what you’re doing based on data is absolutely essential if you are going to compete globally in today’s environment. And using technology to help you with all of that is entirely possible. So, it’s really interesting to see how many level one organizations still exist out there. It’s also very heartening to see how many are moving up through that scale and getting towards levels three and four.

So what I’m gonna walk through today are a couple of very tangible case studies that demonstrate some of the things that are going on out there in organizations that are actually doing this, and the results that they’re getting. The first one I’m gonna talk about is a large global organization. They have over 75,000 employees working across many different facilities, and they’re focused on logistics and distribution. They are large facilities, again widely geographically dispersed, and those particular facilities are in fact dangerous environments to work in. There is a very strong effort to reduce safety incidents with those 75,000-plus people, because nobody wants safety incidents, as we know. And there is an enormous cost, in fact, when it comes to lawsuits, insurance claims, time off work, medical visits, etc. when you have that many people who are engaged in risky activities.

So, their objective has always been to build a world-class safety culture, and they have, over the years, continued to employ various tools and techniques to collect data, and they use that data to optimize the learning experience and the knowledge within the organization. Unlike what happens in many organizations today, they don’t start with what are all the things people need to know. Instead, what this organization does is actually focus on the business outcomes that knowledge should be centered around to impact what people do. And so they define first, and we always preach this, define from a data collection perspective, what are those key strategic imperatives that you need to accomplish as an organization? In this case, for example, their strategic objectives were around the reduction of the very specific safety incidents that were most prevalent in their widely geographically dispersed locations.

So they defined what those top business outcomes were, for example, reducing ladder safety accidents by X percent. They start with that. The next thing they look at is, what are the things people need to do in order to achieve the business outcomes? People do what they know. If you need to, in a different example, sell more product, people will do what they know. If you need them to place the ladder correctly, they will do what they know. So you need to train them on the do to get the right business outcome. If you’re not targeting the right knowledge to the individuals who you need to do something differently, they will never get to that business outcome. And then, go back and create your content based on what you need people to know.
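
To make this outcomes-first ordering concrete, here is a minimal Python sketch. The goal, behaviors, and topics are hypothetical illustrations, not Axonify’s actual data model; the point is simply that content is derived last, from the outcome and the behaviors, not first.

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """One strategic imperative, defined before any content exists."""
    goal: str                                            # the business outcome
    behaviors: list[str] = field(default_factory=list)   # the "do"
    topics: list[str] = field(default_factory=list)      # the "know"

# Hypothetical example: define the outcome first...
ladder = Outcome(goal="Reduce ladder-related safety incidents by X percent")
# ...then the behaviors that drive it...
ladder.behaviors = ["Place ladder at the correct angle", "Maintain three points of contact"]
# ...then the knowledge those behaviors require. Content is authored last, from these topics.
ladder.topics = ["Ladder placement", "Safe climbing technique"]
```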

Taking this, what we call an outcomes-first approach, is very different from what traditional organizations have tended to do. They started with the content; they’ve not started with defining the business outcomes. Some learning and development folks that we talk to believe that they do start with the business outcomes, but they haven’t gotten very granular; in the past it’s been reasonably generic, and measuring what people do along the way has been, before recent years, very, very difficult. What you do at that point, and what this organization does, is then use the outcome data. So they collect the actual incidents, funnel that back to what people do and then what they know, and then adjust their content accordingly.

So, what most organizations have tended to do, and these are the level ones and even the level twos, is take the content and adjust the content when they don’t get the outcomes, and they miss that middle part, the knowing and the doing very specifically. They’re hoping that they’re delivering the right content to get the outcomes, but they have found it very difficult to measure, along the way, that know and that do sector of this particular chart.

So, back to the organization I just mentioned, and back to the specific example. In their case, they are looking at an outcome that says, “Did the number of safety incidents, this specific incident type, meet our expectations? And are they going to meet our expectations?” Then they define the do: what are those specific behaviors that the individual employees need to perform in order to achieve the reduction of safety incidents? One of them, as an example, is: was the ladder climbed in the appropriate way? It turns out ladder safety in large facilities is a gigantic area of loss for most organizations where ladders are in play on a daily basis. And climbing ladders incorrectly, which is done frequently, is often a way that people get injured. Sorry.

And so, what they do is take those behaviors and capture them very specifically in a tool that helps them define, behavior by behavior, what the appropriate behavior is and what the inappropriate behavior is. And they have people who go around on a daily basis, randomly assigned, to collect observations. So for example, if ladder safety is one of the observations that they need to make that day, they will look at various individuals who are climbing ladders and identify whether that behavior is being done correctly or incorrectly. So the data around those behaviors is automatically captured on a regular basis, and this particular organization is capturing, through a variety of individuals, about one million behavior observations per month. What they see very, very quickly, as all of that data is being captured, are the areas of risk where behaviors are not being exhibited appropriately; they see the number of behavior counts, and they see the specific behaviors that have been observed.

What that leads to is a whole bunch of other predictive and corrective capabilities that this data enables. They take that data and actually tie it back to the knowledge state of the population who happen to be climbing ladders, what they actually know or don’t know: the demonstrated knowledge levels after these individuals have been trained on ladder safety. So they tie back: was the behavior observation correct? And if not, what did those individuals know around the proper climbing of ladders? That knowledge is captured automatically through the platform, person by person. So you’ve got that extra tie and link of the data from the business outcome, to the do, back to what the individual knew or did not know.

So, here’s an example of how they capture that knowledge. The individual employees are asked questions daily, typically three to five, in key topic areas that are oriented to safety. The individual very quickly goes on and experiences, for three to five minutes a day, a question-based reinforcement learning experience that targets adaptively, person by person, based on that demonstrated knowledge level that I just mentioned. And they get instant feedback when they answer a question as to whether or not they answered it properly. Where they don’t answer a question properly, that data is captured and tied to the behavior exhibited on the floor, as others are observing those ladder safety activities, for example.
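
As a rough illustration of that kind of adaptive targeting, here is a minimal Python sketch that weights a person’s daily questions toward their weakest topics. The topics, scores, and selection rule are invented for illustration; Axonify’s actual algorithm is not described in this webinar.

```python
import random

# Hypothetical per-person knowledge state:
# topic -> share of recent questions answered correctly.
knowledge_state = {
    "ladder_safety": 0.55,
    "forklift_operation": 0.85,
    "ppe_usage": 0.70,
}

def pick_daily_topics(state, n=3):
    """Sample n topics, weighted toward the lowest demonstrated knowledge."""
    weights = [1.0 - score for score in state.values()]
    return random.choices(list(state), weights=weights, k=n)

# Each chosen topic would map to a pool of reinforcement questions.
print(pick_daily_topics(knowledge_state))  # e.g. ['ladder_safety', 'ppe_usage', 'ladder_safety']
```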

That knowledge is then aggregated into topic areas across the population, so that at any given point in time, a learning and development professional or an executive can see the levels of knowledge that actually exist in real time, area by area, for whatever is important to the organization. So whether it’s safety or something else, all of those topic areas are always being measured as the employees are answering questions, and their knowledge state is captured with the data.

At that point, once you identify the outcomes, what people need to do by way of behavior, and then whether they know or don’t know, you can actually continue to garden and adjust your content to optimize the entire experience. It’s a self-healing kind of activity that continues to work and allows you to identify various pieces of information based on that data. So, for example, this organization, because they had built content first and rolled it out, had taken the opposite approach to what we prescribe. They built the content, rolled it out, and were not getting the business outcomes that they were looking for. So what they discovered were a couple of things. In various topic areas they had very high degrees of knowledge, and the behaviors that were being demonstrated and observed were quite good. … behavior change. Knowledge scores were growing and the incidents were being reduced. The actual business outcome was being impacted. So they knew they had good content there.

But they also discovered that there were other areas they had created content around where there were very high degrees of knowledge, measured through the questioning and reinforcement, but when it came to the behavior observations, incorrect behavior was actually being observed. People were not, for example, climbing ladders correctly, even though they were answering the questions correctly. And what they discovered through that was that the content they had created was not tackling the behavior and the business outcome intended. So there were high degrees of knowledge in the topic area, but it was the wrong knowledge. When you have this end-to-end holistic system where you’re able to measure everything along the way, you can actually discover a variety of things, including whether or not you’ve got the right content.
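
Here is a minimal sketch of that kind of know-versus-do cross-check in Python. The thresholds, topics, and records are all hypothetical; the point is the logic described here: high knowledge plus incorrect observed behavior flags the content, not the learners.

```python
from statistics import mean

# Hypothetical inputs: per-topic average quiz scores (the "know")
# and floor observations logged as (topic, behavior_correct) pairs (the "do").
knowledge = {"ladder_climbing": 0.90, "ladder_placement": 0.88, "lockout_tagout": 0.60}
observations = [
    ("ladder_climbing", False), ("ladder_climbing", False), ("ladder_climbing", True),
    ("ladder_placement", True), ("ladder_placement", True),
    ("lockout_tagout", False),
]

def diagnose(topic, know_ok=0.8, do_ok=0.8):
    """Compare demonstrated knowledge against observed behavior for one topic."""
    done_right = mean(1 if ok else 0 for t, ok in observations if t == topic)
    if knowledge[topic] >= know_ok and done_right < do_ok:
        return "high knowledge, wrong behavior: content misses the behavior, rewrite it"
    if knowledge[topic] < know_ok:
        return "knowledge gap: reinforce this topic"
    return "knowledge and behavior both good: content is working"

for topic in knowledge:
    print(f"{topic}: {diagnose(topic)}")
```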

So, this sort of circular experience really involves data collection in the know, do, and outcome areas that feeds back into an automatic adjusting of the content. It’s this continual optimization and adapting to knowledge levels that really drives the business outcomes in the organization. And in this particular case, the safety incidents of the organization across the board were actually reduced by 54% in the first six months, and continuously now for the last number of years are down about 30% over what they previously were, and that is a significant savings for the organization. So, measurement is in fact the future, and as I mentioned earlier, it really is such a competitive world that organizations need to be paying attention to data collection, not just through reaction data and completions but through everything that happens along the way, and the outcomes that they’re getting.

So here’s the second example. This is also a global logistics company. In this particular case, the focus was on generating more revenue by individuals who inspect packages and can go to clients and, through the correction of errors, bill the clients more. So they had a big focus on decreasing those errors, and they focused their content to get the business outcomes intended, which was the proper inspection of packages, how to bill customers correctly, what things to look for, all of those things that people are doing. So back to the sort of sideways thermometer that I showed: they were highly focused on what are the specific things we need people to know and do that tie directly to our business impact, which is generating more revenue. And what they very quickly discovered was that on average, the individuals were able to generate 10% more revenue per person, which, when aggregated across hundreds of people, obviously adds up to a significant amount of money again.

So again, it was an approach of not just training people and measuring completions, but starting with what is the business outcome we’re looking for, very specifically tying the business outcomes back to what people need to do, then back to what they need to know, and then creating the content once those things are identified. We call this an outcomes-first method, those outcomes being the business results that you’re really looking to achieve. So define your business goal; if you follow this flow chart from the top left, what are those things that are strategic imperatives for the organization? What are the things that the C-level have talked about in terms of imperatives for the organization for the year, or for a quarter, that really are the things people in the organization should be focused on by way of performance?

So what is that performance? And then what are the knowledge, skills, and abilities required, so that you can build your training plans appropriately, pull out those key learning points, and deliver them in a micro-topic format, so that it’s really digestible, really fast, really highly targeted and effective for the specific individuals? Then go and build your content. It’s this approach to thinking about learning and data collection along the way that really drives organizations to be those level four organizations. And completing the loop, so the change in this slide is that arrow on the left where you see it’s back and forth, is where you employ technology to gather all of that data, regress it, and pull out and extract the things that actually relate to each other and matter. You can actually employ an automatic, what we call self-healing, system that very individually and personally helps you identify which people need to have content directed at them, which goals are being effectively met, which risks are being mitigated, and how you continue to garden and focus your content all the way through the process.

So with that, I’ll turn it back to Shum for questions.

 

Shum:     

Awesome, thanks very much Carol and Dani for that fantastic session. We have a ton of questions that have come in throughout the session. And folks, we do have just under 15 minutes left today, so if you have even more questions for Carol or Dani, please feel free to ask them, either in the chat or the Q&A window. I’ve been trying to pick from both of those. Carol, I’ll ask you this first question because we had three different people ask the same question, so I figured we’ll start with that. It’s a fairly popular one, and I know we received a similar question on prior webinars. The example that you walked through was a safety example, and it was something that can be tangibly measured, because it’s a hard measure of both actions and results. Could you explain perhaps how the model could fit something a little less tangible? So, for example, if it were things like sales behaviors, or maybe something like leadership training. Can you speak to how the model would fit that?

 

Carol Leaman:

Yes, it absolutely can, and the real trick is defining what the outcomes that you’re looking for specifically are. So, if we take a sales example, let’s just … 10% growth rate in a particular product line in terms of sales. That’s your business outcome. Then, what do you need people to do? Define the behaviors. You need people to have high degrees of product knowledge around that particular product suite that you’re looking to push into market. You need people to be able to articulate the competitive differentiators that that product may have versus other products. You need your salesperson to be highly skilled in terms of sales process, so follow a process. Those are just some examples of some of the behaviors that you could capture and observe. And then look at what the person knows. So create your content around it and measure what a person knows in these top … outcome of that individual in terms of selling that product, with the behaviors observed, with what they knew in those topic areas. So there’s an example for sales.

In terms of leadership, it is always a little more nebulous; soft skills are more difficult to measure. But you can … at 360 sorts of scores, just observations around things like the conducting of meetings or tone of conversation. You can actually have people observe those things, and have that sort of continuous peer review of the specific behaviors you would like a leader to exhibit, for example. Are they getting their performance reviews done on time? There are certain activities you can also measure from a do perspective. And so, where you see an individual not behaving appropriately in the workplace, look at what they know, and understand what their knowledge scores are. It really does help you identify where you’ve got issues all along the way.

There are individuals, for example, where you can capture the business outcome and the behaviors, and they have high degrees of knowledge, but for some reason they’re just not performing. Those people tend to pop out as outliers. But again, it really comes down to understanding what you want them to do that leads to the outcome. Those are the trickiest parts, but when you put some thought into it, there are always things that you can define and measure in those areas.

 

Shum: 

Awesome, thank you very much, Carol. So this question, Dani, this one’s for you. There were a few questions asked about the data that you presented at the beginning. I’ve taken back control of the presentation, so what I’m gonna try and do as you’re answering is refer back to some of your slides. Could you just speak, maybe at a high level, as to how the study was conducted? We had a couple of questions about the size of the organizations that were included at different levels, what sizes of organizations were level one versus level two, and whether the data was self-reported, things like that. So if you can maybe speak at a high level to some of that, Dani, I’ll just switch back over and move the slides accordingly.

 

Dani Johnson: 

Yes. From a very high level, and anyone who wants to contact me later and talk statistics, I’m happy to do it. But from a very high level, the data was kind of self-reported. We asked two major groups of people about their organizations: we asked the L&D department, and then we also asked the managers, depending on the structure, in different pieces of the organizations. So this time, more than any other time, I think we have a really good view of what’s going on. Once we got our model created, we actually went and got all of the financial data that was available for the companies that participated, and ran those again, just to make sure that even with the self-reported data, the model was still valid. So yes, it was self-reported, and it was self-reported within the company. We had managers and L&D people each responding to their own particular survey.

And the other question was size of company. We had almost 900 companies participate, and the size of company went from, I think we discounted anybody under 100 employees, up to 500,000 people. So we had a very good and strong mix across all those levels, enough to be statistically valid for all of them.

 

Shum:

Awesome, that’s fantastic. Just a reminder, folks: I’ve had a few questions come in about the slides and recording. I know there were intermittent audio issues throughout the session, but please don’t worry, the recording will capture all of the audio perfectly well, and you’ll get a copy of the slides and recording fairly shortly. So, Dani, this question is again for you, and Carol, we’d love your input on this one too. I’m just gonna switch over, Dani, to one of your slides that highlights these levels again. The question comes from [Avisa 00:52:43], and she asks, “Do you have any advice as to how organizations can move up this maturity curve in a realistic way, especially if the data isn’t readily available?” So maybe I’ll turn that into a slightly bigger question: if you’re an organization just getting started with measurement, how can you get started? How can you approach something like this model? And then Carol, I’d love to get your thoughts after Dani has shared hers.

 

Dani Johnson: 

Yeah. I think that’s a really good question. The biggest thing that I saw between levels, the thing that would sort of determine where organizations fell, was their mindset. Organizations that were sitting at level four, their mindset was, “Hey, we’re part of this organization and we need to help the organization move forward,” rather than, “Hey, we’re the L&D team and we need to help these learners learn something.” It was really stark in the data they collected, as well as the data that they looked at and the conversations they were having. So the very first thing I would recommend may sound a little bit pithy, but it is to sort of get your head right as far as how you’re talking about learning and what you think learning is in the organization, and then get everybody else on board with how you want to think about that.

And then the second thing is, there’s data available everywhere. You just have to know where to look for it. So if you have a data analytics department, go to them and see what kind of data is available. But then just taking a look at the reports that are created in other parts of the organization, to see what types of data they have available, is a really good place to start, before you even have to gather any of your own data or create any of your own reports. So just some advice to get started.

 

Shum: 

Awesome, thank you. And Carol?

 

Carol Leaman: 

Totally agree with everything Dani just said. It is in fact really interesting to us. We gather tons of data from clients, and we really try to focus them on the things that we think impact the business outcome. As Dani just said, there are a million pieces of data that you can collect, but some are more important than others. So what we do is help our customers figure out which pieces are most impactful, the ones that are gonna drive the business outcomes, and then do the correlations automatically to figure that out.
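
As a toy illustration of that kind of correlation check, here is a minimal Python sketch with invented numbers. This is not Axonify’s analytics, just the shape of the idea: relate a learning metric to a business outcome and see which pairs actually move together.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical per-facility data: average demonstrated knowledge
# vs. monthly safety incidents at the same facilities.
avg_knowledge = [0.62, 0.71, 0.78, 0.85, 0.90]
incidents     = [14,   11,   9,    5,    3]

r = correlation(avg_knowledge, incidents)
print(f"r = {r:.2f}")  # strongly negative here: more knowledge, fewer incidents
```

Of course, a correlation alone doesn’t prove causation; it just tells you which metrics are worth investigating further.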

 

Shum: 

Awesome. Okay, this one, Carol, I’m gonna start with you, but again, Dani, I’d love to get your thoughts on it as well. I’m actually gonna skip ahead a whole bunch of slides towards the end here, and move back to this outcomes-first model that you were speaking to, Carol. This question comes from Alan, and Alan says, “We struggle getting well-thought-out business objectives. Do you have any advice for how to think about business objectives as they might relate to employee learning, thinking back through this model?” So any thoughts there for Alan? I know that can be a fairly big question. Any thoughts you have would be great.

 

Carol Leaman: 

Yeah, that is a great question. Setting a business objective often feels like such a big, hairy, huge thing, and how do you drill it down to what’s important? My advice is always start small and tangible. So something as simple as: we want to grow sales of our product line by X. Or: we want to reduce this specific safety incident that we are incurring by Y. Just pick like three. You don’t have to boil the ocean, you don’t have to try to think of the biggest, hairiest thing on the planet. Just start with something tangible, and then work backwards. And once you start to see the results of that, do the measurement.

It’s like running an experiment. You want to start with something that you can control. Get the engine revved, get the data collected, do the analysis, and then sort of expand from there. People have all these lofty corporate goals and things like that, that are hard to turn into something tangible. Just really start with the most tangible things you can think of, and then expand out from there.

 

Shum:   

Awesome. Dani, anything to add there?

 

Dani Johnson: 

Yeah, I agree. I think that’s great advice. A lot of the organizations I talk to that do this pretty well, they start with a problem. They clearly outline what the problem is. “What’s the problem?” “Well, we’re losing people too fast.” “Okay, what’s L&D’s role in solving that problem?” And then walking back from there: “Okay, what are the metrics we’re gonna use?” Etc., etc. So just starting very clearly, being able to articulate what that problem is. That sometimes gets turned into a business objective pretty fast.

 

Shum:     

Awesome.

 

Carol Leaman: 

That’s a great way to put it.

 

Shum:   

Yeah, thank you very much. And Dani, this question goes to you. You mentioned you didn’t have slides on this particular topic, but we have a question about the Net Promoter Score. This one was from Charlese. Could you just explain the Net Promoter Score: first of all, what exactly it is, and then maybe how it relates to L&D? I’m gonna try and get you back to a certain part of your slides, Dani, but you didn’t have slides on it, so maybe I can just skip ahead here, get somewhere close, and we’ll leave it there.

 

Dani Johnson: 

Yeah. Absolutely. First of all, I was halfway down the drive before I realized it was gonna be difficult without a visual, so I apologize. A Net Promoter Score is actually something that marketing departments use to measure customer satisfaction, and it’s a score that goes between negative 100 and 100. It’s basically one question that you ask people. In our case, the question that we asked was, “How likely are you to recommend developmental opportunities within your organization to a colleague?” It’s a scale from zero to ten, and you take the percentage of people answering zero through six and subtract it from the percentage answering nine or ten; that’s your Net Promoter Score. So it basically says, “These many people are against you, these many people are for you. That’s your score.” And so a negative 33 means that if you had a hundred people, 33 more of them would be talking negatively about you than positively about you.
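
For anyone who wants the arithmetic spelled out, here is a minimal Python sketch of that calculation, using made-up survey responses. Sevens and eights count as passives: they sit in the denominator but neither add nor subtract.

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on 0-10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical answers to "How likely are you to recommend developmental
# opportunities within your organization to a colleague?"
responses = [10, 9, 9, 8, 7, 6, 5, 3, 2, 0]
print(net_promoter_score(responses))  # 3 promoters, 5 detractors, 10 people -> -20
```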

And how it relates to L&D is, I’m seeing more L&D departments using a Net Promoter Score to get just a general consensus about how people feel about development within their departments or within their organizations. And so, again, in those level one organizations, there’s a huge discrepancy between what L&D people think the Net Promoter Score is and what non-L&D people think it is. In the case of level one, it’s negative, a 42-point difference, whereas those level four organizations have a much better handle on what’s going on in their organizations, because they’re collecting the data, talking to the people, and have a sense of the sentiment, so the gap is only 13 points for those level four organizations.

In the new research, I talk a little bit about that, and Josh Bersin is using that stat quite a bit lately, so you can read about it in the Human Capital Trends Report and in a couple other places we have.

 

Shum:   

Awesome, thank you very much, Dani, and actually that’s a great point to end on, folks. We have reached the top of the hour here. I know there are a few questions that were lingering and unfortunately we couldn’t get to them today; however, I have recorded all of them, and I will make sure someone follows up with you about your questions. A few of you asked about follow-up resources and where you can learn more about the sorts of things that Dani and Carol were talking about. Both Bersin and Axonify have fantastic resources on the topics of learning measurement; it is Dani’s area of specialty. So feel free to follow along with those online. We will be following up with the recording for everyone who registered for the webinar, so keep an eye out for that.

Carol, Dani, thank you so much once again for your time. It’s always a pleasure having the two of you together.

 

Dani Johnson:   

Thank you so much.

 

Carol Leaman: 

Thanks, Shum.

 

Shum:   

Everyone have a wonderful day ahead.