Here are my thoughts after completing week 3 of the Learn xAPI MOOC.
The strategy track for this week is titled “Data and Learning Analytics”. I’d already been giving this some thought (see here) as I have concerns about the real benefits of even the simplest tracking and measurement, let alone ‘learning analytics’…
This week started off by addressing the difficulty that people often have in grasping the difference between correlation and causation. This was done by sharing a page from Explain xkcd (which is just brilliant!). I’ve bookmarked this for future use.
There was only one discussion point that interested me this week:
Discussion point: What sources of data could you be tapping into to build a better picture of the connections between training and performance?
There are, I think, only two possible answers:
- If you are considering this at a high level, then the answer is simply “whatever sources of data the organisation uses to measure performance”. So if you want to know if your customer service training programme has made any difference, you have to start by looking at whatever data source the organisation uses to measure customer service. Good luck with finding anything other than a superficial link.
- The alternative is to consider this at a much lower level. For every training intervention we should be identifying the intended performance outcome at the very earliest stages of design. At that stage we should also be identifying what data we need in order to measure that performance, and where to source that data from. If that data source doesn’t already exist, there’s a good chance it’s because what you intend to change isn’t important enough to measure.
The rest of the week’s content was focussed on analytics, data, visualisation, privacy and other general data-related subjects. All interesting topics, but nothing I hadn’t already given a great deal of thought to.
Summary of my thoughts after three weeks
I’m still trying to make sense of xAPI, and after three weeks here are the main threads I’m considering.
- Before starting the MOOC I suspected that most of the talk about measuring “learning outcomes” actually referred to measuring “training outcomes”. After three weeks on the MOOC, I’m now sure that’s true.
- I worry that there is more interest in using xAPI to prove a link between L&D and performance in order to demonstrate the value of L&D, rather than to actually improve workplace performance.
- Just as I did at the start, I can see how tracking activity may bring plenty of benefits to L&D and pretty much none to learners.
- In theory we could use xAPI to track a huge amount of activity, but I haven’t seen any compelling argument as to why we should. There seems to be an underlying assumption that measurement is a good thing (you only have to think about school league tables to know that it isn’t).
- One of the more pertinent things I’ve read on the subject recently is this post from Henry Mintzberg. I’ve seen a lot of “measuring as a replacement for managing” within L&D.
The biggest worry I have is that in trying to measure too many things we will actually degrade the experience for learners. There are a few ways this could happen.
- We identify specific activities or content that we want to track and, to make that easier, we move them somewhere they are easier to track – you know, like an LMS.
- We give the learner some kind of tool (like a browser bookmarklet or app) and say “each time you learn/do/experience anything, just click this so we know about it. Oh, and make sure you pick the right verb.”
- We find some diabolical way to track everything they do and then analyse it in search of learning activity. Seriously, stop thinking about that right now.
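For readers who haven’t seen one, the “pick the right verb” problem above exists because an xAPI statement is essentially an actor–verb–object triple in JSON, where the verb is identified by a URI. A minimal sketch in Python (the email address, verb and activity URL here are invented for illustration, not taken from any real system):

```python
import json

def make_statement(actor_email, verb, activity_url):
    """Build a minimal xAPI statement: actor-verb-object in JSON form."""
    return {
        # The actor is identified here by a mailto IRI ("mbox" in xAPI terms)
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        # The verb is a URI plus a human-readable display map -
        # this is the part the learner or tool has to "get right"
        "verb": {
            "id": f"http://adlnet.gov/expapi/verbs/{verb}",
            "display": {"en-US": verb},
        },
        # The object is the activity the statement is about
        "object": {"objectType": "Activity", "id": activity_url},
    }

stmt = make_statement(
    "learner@example.com", "experienced", "https://example.com/some-article"
)
print(json.dumps(stmt, indent=2))
```

Even in this tiny example you can see the friction: every tracked moment needs an agreed verb and a stable activity URI, which is exactly the overhead a “just click this each time you learn something” tool pushes onto the learner.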