On the long slow death of Twitter

I came across this great cartoon in The Guardian over the weekend. It sums up much of what I think about Twitter – except for the final frame.

The cartoon uses the analogy of Twitter being a bar and concludes by wondering if “maybe it’s time to find another bar.” For me, Twitter feels more like that bar you went to at a certain time in your life, but now you have other things to do. You look back at it with good memories, but you don’t need to replace it.

Learn xAPI MOOC – Week 4 Reflections

Here are my thoughts after completing week 4 (the final week) of the Learn xAPI MOOC. This week is titled “Final Challenge and Post Conference Drinks” and it brings together the technical and strategy tracks.

There are three key questions:

Discussion Point: Learning analytics is too important for L&D to own

I was surprised (and chuffed) to see that the first question was inspired by the blog post I wrote on 28th May. This was a response to a post on learning analytics written by Mark Aberdour.

There were great comments from everyone and I’m really pleased to have contributed to the debate.

Due to semantics (or perhaps just me not being clear in the first place) there were some comments around responsibility and accountability. To clarify – I believe 100% that L&D needs to take ownership for results and to some extent that will include learning analytics.

My point was more that if organisations are going to invest in employing data scientists (as Mark suggested in the original post to which I was responding) they need to do their work at a level above L&D (and every other department). If the measurement and the analysis of the data remains within L&D there is a very high risk that all we end up measuring is our own performance in the context of our own measures (was it a good piece of learning material?) rather than the impact we are having (did it make any difference to workplace performance?).

L&D certainly needs to be involved in learning analytics. The alignment of learning analytics to performance outcomes is something that should begin at the earliest stages of design.

One of the hardest parts of analysing any data is working out what it actually means. I’m less sure that L&D should be the ones who decide what the data means – that’s where a data scientist (or whoever) looking at this at an organisational level comes in.

Discussion point: Most L&D teams lack the skills and mindset to make effective use of meaningful data. Do you think L&D teams have the potential to develop the necessary skills and will their organisations give them the opportunity to develop them?

This is the question that I was asking myself at the end of week 2.

First of all, this isn’t a question of capability – I expect any good L&D team would be capable of developing the skills. To what degree they should develop these skills will probably depend largely on the size of the team (in small teams it may be more about mindset and understanding rather than deep skills).

Whether they will have the opportunity is harder to answer and this will probably depend on how they are perceived by the organisation. My thinking has moved on a bit since week 2. Back then I was wondering if L&D will be given the opportunity. Now I think that opportunity isn’t something they should wait for – they should own this and get on with developing those skills.

Discussion point: How do we make sure that we don’t get carried away with what’s possible and instead focus on what is valuable? And lastly who is it valuable to?

I think these are the big questions and the ones that I’ve been thinking about since week one.

Focussing on the valuable rather than the possible must be one of the most common challenges facing anyone who works with technology and learning. The only way we do this is by understanding what is valuable to the organisation.

That means going beyond being order takers who simply satisfy the needs of the immediate stakeholder who comes to us for a solution. We need to be able to consider everything we do in the context of the organisation’s goals and, when necessary, challenge the stakeholder if they demand the wrong solution.

The rest of the week’s content was focussed on practical activity around visualisations and telling stories with data.

Summary of my thoughts after four weeks

I haven’t reached a definitive conclusion about xAPI, but that wasn’t my expectation, and my ideas will continue to evolve. Here is a snapshot at week 4:

  • I’m interested in xAPI in that it changes the technical tools we have for measuring activity. If we really need to track activity, xAPI goes beyond many of the limitations of SCORM (such as tracking activity on mobiles and in apps).
  • Whether or not we need to measure activity is another thing entirely. I’m not against measurement, but it needs to be the right measurement and it needs to be actionable.
  • I’d be surprised if we really see many organisations tracking anything other than activity.
  • I fear that clumsy attempts to use xAPI too widely will degrade the experience for learners.
  • After four weeks I still can’t see how the learner benefits from xAPI. The benefits are to L&D and maybe to the wider organisation if they get the data analysis side of it right.
  • xAPI tracks activity (albeit a wider range of activity than SCORM) but it does not track learning.
  • The name is terrible. Domain specific acronyms make communication difficult and put people off by making them outsiders. See here.
  • People seem keen to use it outside L&D, but I think that may be difficult given that it is designed by L&D people, for L&D people to solve the problems that L&D people have.

Learn xAPI MOOC – Week 3 Reflections

Here are my thoughts after completing week 3 of the Learn xAPI MOOC.

The strategy track for this week is titled “Data and Learning Analytics”. I’d already been giving this some thought (see here) as I have concerns about the real benefits of even the simplest tracking and measurement, let alone ‘learning analytics’…

This week started off by addressing the difficulty that people often have in grasping the difference between correlation and causation. This was done by sharing a page from Explain xkcd (which is just brilliant!). I’ve bookmarked this for future use.

There was only one discussion point that interested me this week:

Discussion point: What sources of data could you be tapping into to build a better picture of the connections between training and performance?

There are, I think, only two possible answers:

  1. If you are considering this at a high level, then the answer is simply “whatever sources of data the organisation uses to measure performance”. So if you want to know if your customer service training programme has made any difference, you have to start by looking at whatever data source the organisation uses to measure customer service. Good luck with finding anything other than a superficial link.
  2. The alternative is to consider this at a much lower level. For every training intervention we should be identifying the intended performance outcome at the very earliest stages of design. At that stage we should also be identifying what data we need to measure that performance and from where to source that data. If that data source doesn’t already exist, there’s a good chance that’s because what you intend to change isn’t important enough to measure.

The rest of the week’s content was focussed on analytics, data, visualisation, privacy and other general data related subjects. All interesting topics, but nothing I hadn’t already given a great deal of thought to.

Summary of my thoughts after three weeks

I’m still trying to make sense of xAPI, and after three weeks here are the main threads I’m considering.

  • Before starting the MOOC I suspected that most of the talk about measuring “learning outcomes” actually referred to measuring “training outcomes”. After three weeks on the MOOC, I’m now sure that’s true.
  • I worry that there is more interest in using xAPI to prove a link between L&D and performance in order to demonstrate the value of L&D rather than to actually improve workplace performance.
  • Just as I did at the start I can see how tracking activity may bring plenty of benefits to L&D and pretty much none to learners.
  • In theory we could use xAPI to track a huge amount of activity, but I haven’t seen any compelling argument as to why we should. There seems to be an underlying assumption that measurement is a good thing (you only have to think about school league tables to know that it isn’t).
  • One of the more pertinent things I’ve read on the subject recently is this post from Henry Mintzberg. I’ve seen a lot of “measuring as a replacement for managing” within L&D.

The biggest worry I have is that in trying to measure too many things we will actually degrade the experience for learners. There are a few ways this could happen.

  1. We identify specific activities or content that we want to track and to make that easier to do we move them somewhere they are easier to track – you know, like an LMS.
  2. We give the learner some kind of tool (like a browser bookmarklet or app) and say “each time you learn/do/experience anything, just click this so we know about it. Oh, and make sure you pick the right verb.”
  3. We find some diabolical way to track everything they do and then analyse it in search of learning activity. Seriously, stop thinking about that right now.

Learning analytics is too important for L&D to own

I’m clearly not the only one to recognise that L&D lacks the skills needed for the kind of learning analytics enabled by advances such as xAPI.

Mark Aberdour has written a very thoughtful post about the challenges we face and makes this suggestion:

Clearly some of these items require close interaction with the L&D team, but in summary there is a real need to bring experienced data scientists into corporate learning and development, not just to set up analytics programmes but to continually monitor, review and refine the results.

via Building a learning analytics platform | Open Thoughts.

I agree that organisations need people who can interpret this data and make it actionable, but I don’t believe they should sit within L&D. If ownership for learning data remains within L&D we risk continuing the current situation where all we do is measure the most basic elements of our performance (inputs and outputs) rather than the impact of learning on workplace performance.

For learning data to have strategic value it needs to be considered at a higher level, in combination with data from other parts of the organisation. To be objective, ownership for this needs to sit outside any department with a vested interest in the results.

Learn xAPI MOOC – Week 2 Reflections

Here are my thoughts after completing week 2 of the Learn xAPI MOOC.

The strategy track for this week is titled “Building the Business Case for Data”, which suggested that it might hold the answers to some of my questions from week 1. It didn’t.

This week I’ve included some of the discussion points from the MOOC as well as my reaction to them.

The introduction to this week suggests that we can use data to evaluate not just learning activity but the learning experience. However, I have a couple of big doubts about this:

  • I love the idea of being able to assess the end user’s learning experience, but I don’t think that is what xAPI is going to do (at least not as it’s being described in this MOOC). What it seems to be talking about here is measuring the learner’s experience of a training intervention. I’m not saying that there is no value in this, but it’s not the same thing.
  • Given that the people who complete a learning activity are rarely the people who commission the development of that learning activity, I’d be interested to know how successful people have been in getting the go ahead to make changes and further develop the learning activity based on that data. If the need of the person who commissions the activity is to “get something out there that proves we’ve done health and safety training” how much will they care about the learner’s experience?

Which brings us nicely to this question in the MOOC.

Discussion point: Does L&D deserve its place as a key influencer in business strategy? Or are we playing second fiddle to other departments and their needs?

In the video for this section Sean Putman suggests we need to think about who the customers are for our learning interventions and who the customers are for our learning data.

This is logical advice, but with very few exceptions the customer for both of these is usually someone other than the learner. Indeed, it’s quite common that the customer is someone far removed from the learner (and thus even further removed from the organisation’s customers).

L&D puts itself into the place of playing second fiddle when it sees those other departments as its customers and does nothing more than take and satisfy their orders. If L&D wants to be treated as an equal it needs to behave like one – have an opinion, develop its own plan for supporting the organisation’s strategy. xAPI alone isn’t going to fix this, but it could give L&D more data to work with – if it knows what to do with it.

Discussion point: If you actually wanted to measure the performance impact of your learning solutions, who else would you need to work with? Do you think this would be easy to achieve, or are you likely to face road blocks?

It is scary (although accurate) that this question starts with “if”. The fact there is any doubt that L&D might want to measure the performance impact of learning tells you a lot about the state of L&D today.

However, my own experience is that even when that kind of analysis is offered to stakeholders, they don’t want it. I think that is a result of the order/supply relationship that in many cases exists between other departments and L&D.

Discussion point: How could you use this approach in your organisation? What data would you collect and why?

This question was asked in response to this blog post.

I like this idea of generating xAPI statements from the software that someone is using. However, I think I’d be more interested in how I could use the data to improve the user’s experience of the software rather than to better train the users.
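As a rough sketch of that idea – software emitting xAPI statements as it is used – a tracked application could map its own events onto the actor/verb/object triple that the spec defines. Everything here except the statement shape and the “experienced” verb IRI is invented for illustration (the user address, feature IDs and function name are all hypothetical):

```python
import json
from datetime import datetime, timezone

def event_to_statement(user_email, feature_id, feature_name):
    """Turn a hypothetical application usage event into an xAPI statement.

    Only the statement structure and the verb IRI follow the xAPI spec;
    the actor and activity details are made up for this sketch.
    """
    return {
        "actor": {"mbox": f"mailto:{user_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {
            "id": feature_id,  # hypothetical activity IRI
            "definition": {"name": {"en-US": feature_name}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = event_to_statement(
    "user@example.com",
    "http://example.com/app/features/mail-merge",
    "Mail merge wizard",
)
print(json.dumps(stmt))
# A real integration would POST this JSON to the LRS's /statements
# endpoint, with an X-Experience-API-Version header identifying the
# spec version in use.
```

The interesting design question, as noted above, is what the consumer of these statements does with them – the same data could drive a training recommendation or a redesign of the feature itself.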


The additional data which xAPI can generate makes it even more important that L&D understands what it is they expect to change through any learning intervention and what actually needs to be measured to see if that change has happened. Defining, collecting and analysing this data is not an easy task – it requires a skillset that few L&D people have and it will be time consuming and costly to do.

As with last week, I’m left with more questions.

  • Will L&D be given the opportunity to develop these skills?
  • Will their customers be prepared for the additional time and effort required to develop solutions?
  • Will an industry pop up around this, with vendors selling promises of systems that do all of the analysis for you?

Learn xAPI MOOC – Week 1 Reflections

Last week I made a start with the Learn xAPI MOOC, which is being run in Curatr and organised by the tried and trusted team of Ben Betts, Martin Couzins and Sam Burrough. This is the first MOOC that I’ve started which I can actually picture myself completing.

For me, the format is spot on – short, focussed pieces of content delivered by people that really understand the topic (who appear to have a real passion for it) with lots of opportunities to share and interact. They’ve also recognised that not everyone has the same kind of interest in this as a subject and so have separated out the content into two tracks – strategy and technical.

A bit of background

I’ve been aware of the Experience API (xAPI) for a long time, and I understand the basics of what it is intended to do:

The xAPI enables tracking of learning experiences, including traditional records, such as scores or completion. It also stores records of learners’ actions, like reading an article or watching a training video.

That description is sourced from this page on the ADL website and if you want a purely functional description of what xAPI is, it’s a good starting point.
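To make that functional description concrete, here is a sketch of what a single xAPI statement looks like: an “actor verb object” triple expressed as JSON. The verb IRI is from ADL’s published vocabulary; the actor and activity details are invented for illustration.

```python
import json

# A minimal xAPI statement. The verb ID is a real IRI from the ADL
# vocabulary; the learner and activity below are hypothetical.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",  # hypothetical address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/activities/xapi-article",  # hypothetical
        "definition": {
            "name": {"en-US": "An article about xAPI"},
        },
    },
}

# Serialised, this is what would be sent to a Learning Record Store.
print(json.dumps(statement, indent=2))
```

Reading an article, watching a video or attending a conference all reduce to the same triple with a different verb and object – which is what makes the format so much more flexible than SCORM’s completion and score tracking.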

I also understand as much as I currently need and want to about the technical aspects of how it is implemented. What I don’t understand is why we should use it and in what context. I have therefore chosen to go through the strategy track.

Week 1

I’ve come away from the first week with more questions than answers – and that’s great. The MOOC has got me thinking about xAPI in much more detail than before. Here are the things that I’m currently pondering:

  • I get that xAPI allows us to track more things than SCORM, in more ways and with much richer data. What I currently don’t get is why we would track them at all.
  • Is knowing that someone has attended a conference, watched a video or read an article somehow more valuable than knowing that they have completed a piece of elearning?
  • xAPI has the potential to produce a lot of data. Other than storing it in a Learning Record Store (LRS) what do you do with it all? Organisations are generally pretty poor at using the data they have now – will they be able to do a better job with even more of it?
  • For it to have value, data needs to be meaningful and actionable. As far as I can tell the xAPI standard makes no attempt to address this – which makes sense. While it may be possible to standardise the mechanisms for structuring, recording and storing the data, the meaning of that data will be unique.
  • There is much talk about using xAPI to record performance data (which is terrific) – but in that case why is the data kept in a Learning Record Store? That name needs to change if it is to be taken seriously by anyone other than L&D.
  • While I’m thinking about it – I’m not yet sure that an LRS records learning any more than an LMS manages it.
  • Who benefits from the xAPI?
    • I’m pretty sure that so far all of the potential benefits I’ve seen broadly relate to the organisation, L&D or the individual learning designer.
    • I can see that there are benefits to the learner, simply because if the organisation wanted to track activity in a SCORM world, it had to be SCORM content, whereas xAPI seems to be able to track pretty much anything. Whether the learner benefits from that tracking is another question entirely.

I’ve really enjoyed the first week. I certainly know more about xAPI and even if I don’t have the answers yet, I’m getting a better handle on the questions I need to answer.

Goodbye Surface Pro 3

After I’d been using the Surface Pro 3 for two weeks I wrote a brief but positive post about the experience – so am I as positive now that I’ve been using it for two months?

Quite simply, no. Why I feel that way is perhaps less simple.

I do like the Surface Pro 3 as a device, and I really like the hybrid form factor and the pen (I’ve been a Mac user since 2006 and when the iPad was launched I was disappointed that it was an iOS tablet not a Mac one). When Microsoft launched the original Surface it seemed like the ideal form factor, but it was far from perfect and the big problem for me was that it was running Windows.

However, with the Surface Pro 3 Microsoft seemed to have ironed out most of the issues and it was a much more appealing option.

So what’s wrong with it? Well for me there are four things:

1. Windows being Windows

In one of the earlier posts I expressed a concern that the friction of moving from one operating system to another would be a distraction that I was too busy to deal with. In fact that didn’t really happen – although I’m willing to concede that there may have been some confirmation bias going on here.

Unfortunately, that lack of friction didn’t last. Twice in the space of a week I went to switch on the Surface only to be faced with the ‘Windows is applying updates’ screen. The first time it happened it was quite annoying, because (a) I was busy and (b) once the update had finished it then rebooted and started applying further updates.

The second time was very annoying, because I had a client on the phone and I was trying to find some information she needed urgently.

The final straw was a horrible flashback to ten years ago, and my original decision to quit Windows and move to the Mac. Back in 2005 I had been working on a document for most of the day, and when I was done I dragged the file from one folder to another only for it to leave the source folder and never appear in the destination folder. I wasted a further half a day trying to recover it before giving up.

Last week I had the same experience with a disappearing file. This time it wasn’t such a big deal – the file was less important, and it was backed up anyway – but the experience was enough to undermine my trust.

2. The software sucks

If I think about this objectively – was I able to do everything I usually do with the Mac without additional effort? – the answer would be yes, pretty much. Certainly, in the past two months I haven’t come across anything that I couldn’t do.

If I think about it subjectively – was the experience of doing everything as good on the Surface as it is on the Mac? – then the answer is no.

This is less about Windows vs OS X as operating systems and more about the software that is developed for them. Some software, particularly the large software suites, is good on either platform. Adobe’s Creative Cloud apps are pretty much indistinguishable between Mac and Windows, and Microsoft Office (unsurprisingly) is better on Windows than on the Mac.

What’s missing from Windows is the vibrant third party developer community that has grown up around the Mac, and the really great apps it produces.

For example, if you consider Sketch, OmniFocus, Ulysses and Alfred, then sure, you can find apps on Windows that do the same job, but the user experience just isn’t as good.

3. The general niggles

Although I have said I like the device and that it improves on previous versions, it still isn’t perfect. There were some things that I forgave at first, but which really niggled me after a while.

One of the device’s best features is a beautiful high definition screen. Unfortunately it’s marred by the number of apps that haven’t been optimised for higher resolutions and which look fuzzy. Although Apple devices suffered similar problems when they first introduced retina displays, I can’t remember the last time I saw an app that hadn’t been suitably optimised.

A related problem is the inability of the Surface to handle multiple resolutions. I have a MacBook Pro with a retina screen, and if I plug it into an external monitor it has no problem using a different resolution on each screen. On the Surface you can only have one resolution at a time, so either the external monitor or the Surface’s own screen will run at a less than optimal resolution.

Even when you are using one screen, the experience still isn’t great. If I plug the Surface into my external display (which isn’t high definition) it will recognise this and scale icons, text and other screen elements accordingly. The trouble is that if I then unplug it from the external display and just use the built in screen the Surface doesn’t adjust the scaling, leaving some elements too small to use and others large and odd looking. The only solution is to sign out of Windows and back in again.

If you only occasionally swap from using an external display to using the internal screen, you may not find this annoying – but then why are you using a hybrid device? I chose a hybrid device because in theory it could replace the MacBook/iPad combination that I had previously been using. Which leads me to…

4. The form factor doesn’t work (for me)

For a long time I was sure that this hybrid form factor was what I needed, but in practice it didn’t work so well for me.

Way back at the end of 2011, John Gruber suggested that the key distinction between Microsoft and Apple with regard to UI was that Apple had embraced compromise, whereas Microsoft were clear that there should be no compromise – your desktop and tablet operating systems should be one and the same.

That idea of a universal operating system (which Microsoft are pursuing even more strongly with Windows 10) still appeals to me. I just think it’s incredibly difficult to do well, not least because you need to engage your developer community to produce apps that work well in all formats.


Over the past few weeks a few people have asked me if they should buy one. With the caveat that perhaps they should wait until the Surface Pro 4 is announced, my answer has been yes.

Despite everything I’ve said above, I do think it is a great device in the right circumstances. If you are a Windows user and happy to remain one, it’s a good choice of device.

I think it could work well as the sole device in a non-techie household, where it would be as at home in tablet mode on the sofa as it would in laptop mode when needed. I think it’s those situations where you only want one device and you’re only occasionally swapping between those modes where the hybrid format can work well. Where you are regularly shifting from one to the other it works less well.

For me, it just doesn’t work well enough and I’m back to my MacBook Pro and iPad combination. Although carrying two devices is in itself a compromise, it’s one that gives me the best laptop experience and the best tablet experience.

Noddlepod news

Back in 2011 I wrote about Noddlepod, which at the time I described as the best collaboration tool you’ve never heard of. Since then they’ve gone from strength to strength and I was delighted to see a big announcement from them. Not only have they received a funding package from Finance Wales and four angel investors, they’ve been joined in an advisory capacity by Charles Jennings, Mary McKenna and Nigel Paine.

I’m really delighted for everyone involved. Ollie and Stephen Gardener came up with the idea in 2010 and have since developed it into a company that employs six people – this new investment should help them continue their mission to revolutionise community learning.

You can read the official announcement on the Noddlepod blog.

Surface Pro 3 – For Keeps?

Following on from my earlier post, the 14 day return window is nearly up and my decision is made – the Surface Pro 3 is definitely not going back. When I placed the order I figured that there was a 50/50 chance that I’d return it. That wasn’t because I had concerns about the quality and capabilities of the device itself, but more that after nine years of using Macs there would just be too much friction trying to use an unfamiliar operating system – and I’m too busy for those kinds of distractions. But that didn’t happen.

On the evening of the day it arrived I installed the key apps that I needed, signed in to Office (I have an Office 365 account) and left it overnight to sync Dropbox, OneDrive and Evernote – I have a lot of data and a slow broadband connection.

The next morning I started using it, expecting that within hours I would be so frustrated that I would reach for my familiar and trusty MacBook Pro. It didn’t happen that day, or the next, or any day since. In fact, after sitting unused on my desk for a week, the Mac is now shut away in a cupboard.

The purpose of this two-week experiment was to see not just if I could use the Surface Pro to replace the MacBook/iPad Air combo that I’ve been using, but if I would actually want to. The answer to both is a resounding yes!

The Surface Has Landed

I’ve been interested in the Microsoft Surface since it was first introduced, but the first and second versions didn’t quite seem good enough. When the Surface Pro 3 was launched last year it looked like the device had matured into something really nice. However, as a long time Mac user I’m heavily invested in the Apple ecosystem and at the time had doubts about whether it was the right device for me.

Fast forward to now, and I decided that the only way to find out was to try it for myself. After trying one out in store, getting some advice online (thanks @craigtaylor74 and @davefoord), and helped by the fact that there was a sale on, I ordered a Surface Pro 3 i5 with 256GB of storage and 8GB of RAM.

I bought it from the Microsoft Store because that way I’m covered by a 14 day no questions asked return option if I don’t like it. It actually arrived at the end of last week, and I’ve given myself until this coming Friday to decide if it’s staying or going back. Either way, I’ll be posting the decision here.