Engagement and motivation through badges

Image: Tamagotchi by stopsign on Flickr (Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic licence)

Feed me!

When considering what motivates students to participate in a learning intervention, learning designers spend a lot of time designing activities to engage and interest students. However, research indicates that students tend to respond to the assessment regime rather than the learning objectives. This means that students will look at what is required to pass the course (the hidden curriculum) and then focus their effort on what is needed to fulfil that particular task (Gibbs, 2010).

I have seen this when working in a corporate e-learning environment, where the learner’s objective is to get through the multiple-choice quiz in order to be deemed competent, so the approach is to click quickly through the screens of an e-learning module (or, in some cases, to ask a colleague to do it). In my Master’s course, some students have never appeared in any online discussion forum or participatory activity, as it is possible to pass the module by turning in the written assignment (which I presume they did). This is understandable and a sensible strategic approach to gaining a qualification or fulfilling the needs of a workplace learning scheme. Is it learning? Well, it’s really impossible to say what the participant has or hasn’t learned, but they have not ostensibly taken part in the learning process designed by the instructional designer or faculty. By process, I mean the pathway comprising the range of activities designed to support the learning and practice of the skills and competencies behind the learning objectives – in short, the learning design. For example, an activity to stimulate dialogue and build peer-review skills might ask students to read a paper on a subject and post a summary and response in a blog post, inviting comments.

Non-participation in activities becomes more visible in an online or blended environment, as students do (or do not) leave digital footprints. Yet, as these activities are ‘just learning’ and have no bearing on the assessment (typically an end-of-module exam or paper), a significant proportion of students don’t bother. This can diminish the effectiveness of online courses, which are often designed for and depend on learner–learner interaction.

If it is assessment rather than the learning activities that motivates (or spurs) the larger proportion of students to participate in the learning process, a close alignment of the learning and assessment processes could motivate more students to participate. This is in itself not a new concept: good teachers have always taught in a way that aligns the learning that takes place in a classroom with the assessment. This could mean that artefacts produced as a result of learning activities are included as part of, or are necessary to, the assessment itself, or that assessment is ongoing and requires frequent participation and engagement. In formal online learning and in MOOCs this poses a challenge due to the distributed and generally asynchronous nature of online learning, and it is used with care by learning designers because it affects the flexibility of online learning. After all, it is this type of learning that attracts people who might not be able to study in any other way (formal campus-based learning requires being in one place at a certain time). Working professionals or people with caregiver responsibilities may particularly appreciate the flexibility that online learning enables.

What methods or strategies might spur engagement, or at least encourage learners to participate in the learning process as the learning designers intended? I wrote about Open Badges for my final (possibly ever) assignment for the Open University Masters in Online and Distance Education. Open Badges are digital artefacts that learning designers can use in a course to recognise achievements or reward participation at a lower level of granularity than traditional assessment, and so might be able to motivate learners through an element of immediate gratification or gaming. Learners can earn badges for specific tasks within a course that acknowledge a skill or completed activity, thus building and visibly showing progress and gaining something tangible they can show on a profile. Open Badges are embedded with metadata and can be displayed by the earner in a medium such as a personal blog or a LinkedIn profile. A third-party reviewer of the badge can click on it to learn more about the skills attained and even access the original piece of work that was required to earn the badge.
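To make the ‘embedded metadata’ idea concrete, here is a minimal sketch of what a badge award might look like under the hood, loosely following the Open Badges 1.x assertion format described in Mozilla’s documentation. The URLs and the `uid` are hypothetical, and the salt would be random in practice; the point is simply that the badge carries machine-readable data (who earned it, which badge, how to verify it) that a third party can follow.

```python
import hashlib
import json

def make_assertion(email: str, badge_url: str, assertion_url: str, issued_on: int) -> dict:
    """Build a minimal hosted Open Badges 1.x-style assertion (illustrative sketch).

    The recipient's email is salted and hashed so the assertion can be
    published openly without exposing the earner's address.
    """
    salt = "deadsea"  # in practice a random per-assertion salt
    identity = "sha256$" + hashlib.sha256((email + salt).encode()).hexdigest()
    return {
        "uid": "abc-123",  # hypothetical unique id for this particular award
        "recipient": {"type": "email", "hashed": True, "salt": salt, "identity": identity},
        "badge": badge_url,  # points to the BadgeClass: skill description, criteria, issuer
        "verify": {"type": "hosted", "url": assertion_url},
        "issuedOn": issued_on,  # Unix timestamp of the award
    }

assertion = make_assertion(
    "learner@example.com",
    "https://example.org/badges/peer-review.json",   # hypothetical BadgeClass URL
    "https://example.org/assertions/abc-123.json",   # hypothetical hosted location
    1371686400,
)
print(json.dumps(assertion, indent=2))
```

It is this structure that a reviewer’s click unpacks: the `badge` URL leads to the criteria and issuer, which is how a badge on a blog or LinkedIn profile can be traced back to the work that earned it.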

Badges therefore give learning designers another assessment-type tool, less scary than a formal assessment, and a collection of badges can be rolled up into a larger super-badge, which could provide a pathway to another form of certification. If a course designer chooses to use badges for motivation and engagement, they need to decide which activities are worthy of a badge. This in itself gives the learner an indication of the importance of the activity and/or the type of skills that will be developed if the learner chooses to earn the badge. Badges can help make the learning design explicit, perhaps helping to bridge the intention of the learning designer and the learner’s understanding of what is important.

Earning badges therefore gives learners opportunities to collect rewards, but this still might not appeal to learners who feel they have nothing really to gain. Using the idea behind loss aversion, what if learners were given a number of badges at the start of a course – partially complete, perhaps? If they participate in learning activities or meet certain targets, they gain more badges. However, if they do not participate, they lose badges. I wonder whether the prospect of losing badges might be more of a motivator than gaining them (you can’t lose what you haven’t got yet). This sounds like one of those digital pets you have to keep feeding to keep alive :-). What if, in order to get through the course, you have to metaphorically keep feeding it? How you do this might have elements of choice, but the features inherent in badges, such as expiry dates or the ability to hold more or less information, pose some interesting learning design opportunities.
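The ‘keep feeding it’ mechanic above could be modelled very simply. This is a toy sketch of my own, not part of any badge specification: each starter badge has an expiry date, a qualifying activity ‘feeds’ the badge and pushes its expiry back, and any badge that goes unfed lapses. The badge names and the one-week lifetime are entirely hypothetical.

```python
from datetime import date, timedelta

class BadgeWallet:
    """Toy model of the loss-aversion idea: learners start with badges
    that expire unless 'fed' by participation before their expiry date."""

    def __init__(self, start: date, grant: list[str], lifetime_days: int = 7):
        self.lifetime = timedelta(days=lifetime_days)
        # every starter badge expires unless renewed by activity
        self.expires = {name: start + self.lifetime for name in grant}

    def participate(self, badge: str, on: date) -> None:
        """A qualifying activity 'feeds' the badge, pushing back its expiry."""
        if badge in self.expires and on <= self.expires[badge]:
            self.expires[badge] = on + self.lifetime

    def held(self, on: date) -> list[str]:
        """Badges still alive on a given date."""
        return sorted(b for b, exp in self.expires.items() if on <= exp)

start = date(2013, 6, 1)
wallet = BadgeWallet(start, ["discussion", "peer-review"])
wallet.participate("discussion", date(2013, 6, 5))   # fed in time, expiry pushed back
# 'peer-review' is never fed, so it lapses after a week
print(wallet.held(date(2013, 6, 10)))  # → ['discussion']
```

Even this crude version surfaces the design questions in the paragraph above: how long should a badge survive without feeding, and which activities count as food?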


Gibbs, G. (2010) Using Assessment to Support Student Learning, Leeds, Leeds Met Press.

Mozilla, (2012) Open Badges for Lifelong Learning [Online]. Available at https://wiki.mozilla.org/File:OpenBadges-Working-Paper_012312.pdf

Principles of assessment for learning

This post discusses and suggests key principles of Assessment for Learning, part of Block 4 of the OU Master’s module H817 Openness and Innovation in elearning

walking ladder

Assessment for Learning (AfL) is not Assessment of Learning. Assessment of Learning broadly equates to grading, marking and comparing students with each other, and is usually done at the end of a course. I suppose that is what most people might understand as ‘assessment’, drawing on their own experience of school and higher education, where the final exams and tests are what really matters, at least in terms of the results.

This week’s readings covered the motivations of the Assessment for Learning movement, which was a response to the assessment-of-learning approach and drew upon research showing that AfL can help improve learning outcomes (ARG 1999). Assessment for Learning

is the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there. (ARG 2002)

In an AfL approach, assessment is ‘embedded in a view of teaching and learning of which it is an essential part’ (ARG 1999) – learners know what they are aiming for and may take part in self-assessment. Other characteristics include the belief that every learner can improve, and feedback that empowers learners to understand how to take the next steps in their learning journey.

Some key principles of assessment FOR learning might be that the assessment:

  • Is integrated as part of the learning activity
  • Is formative (informed feedback to learners)
  • Gives learners clear goals
  • Involves learners in self-assessment
  • Is adaptive – teaching adjusts in response to it
  • Motivates and raises the self-esteem of learners
  • Enables teachers and learners to reflect on the evidence collection.

The AfL paper was written in 1999, and over a decade later I recognise many of the AfL practices and principles, although they may not necessarily be recognised as ‘assessment’ but as good teaching practices. However, assessment of learning still seems relatively entrenched, although there might be more of a blurring of boundaries between the two. For example a project-based activity may have an end grade but milestones along the way may involve feedback, peer assessment and formative feedback.


Assessment Reform Group (ARG) (1999) Assessment for Learning: Beyond the Black Box [online], http://assessmentreformgroup.files.wordpress.com/2012/01/beyond_blackbox.pdf (accessed 25 June 2013).
Assessment Reform Group (ARG) (2002) Assessment for Learning: 10 Principles [online], http://www.assessment-reform-group.org (accessed 25 June 2013).

Image courtesy of chanpipat / FreeDigitalPhotos.net

H800: Assessment and Learning are (possibly) not the same thing


Image: digitalart / FreeDigitalPhotos.net

I’m a happy bunny this week as the H800 module results came out, and I passed with a Distinction. As I have quite a few more assessments to go before the MA is done, I thought I’d take time out to reflect on what assessments mean. Obviously it’s great to have formal validation that all that reading, reflecting, studying, note-taking and discussing paid off, and that I achieved what I set out to do. But does the assessment really make a huge difference to the actual experience of the course, the actual learning, and what I personally gained from it?

When we first started H800, we had to consider what ‘learning’ was. We looked at Anna Sfard’s (1998) metaphors of acquisition and participation, as well as Sian Bayne’s (2005) notion of learning as identity change. ‘Learning’ can of course be all of these, so this isn’t really about choosing, but about reflecting on what this particular learning experience has meant to me, and what the ‘final’ paper result means.

I certainly ‘acquired’ much new knowledge as a result of the assessments in H800. My ‘specialist’ technologies in the EMA were mobile technologies and Twitter, and I certainly feel my core knowledge about these tools and their affordances now has a secure foundation. I did a substantial amount of research around these technologies, reading much of the relevant literature. However, there was a bewildering array of papers, subjects, topics and themes covered in the H800 syllabus, and of course there is no way of remembering or applying all, or even many, of these in my immediate work. The knowledge that I ‘acquired’ is therefore limited by my actual cognitive capacity and my immediate need to use it (although I feel I can now draw an Activity Theory diagram in my sleep).

In terms of learning as ‘participation’, I certainly feel more capable of participating in a community of educators, learning technologists, academics and e-learning practitioners. The norms of online community behaviour have been honed, as has my personal learning environment, built especially around Twitter.

Perhaps for me the greatest surprise has been a kind of identity change: a transformation of what I can be, have done and can achieve personally. And this is where the importance of assessments comes in. The OU’s continuous assessments (TMAs) and the end-of-module assessment (EMA), which substitutes for an exam, were pretty tough, especially to achieve the higher marks. I did indeed sweat blood and tears researching and thinking, deciding on my argument, deciding what would best showcase what I had learned, and working out how to communicate it. In the midst of this, I found myself getting ‘aha’ moments when suddenly something clicked. Without the pressure of an assignment requirement, I don’t think I would ever have got to this point, and for that I am grateful. Whatever the combination of pressure, fear of failure, the need to prove something to myself and to justify the time and financial commitment, the personal hurdles that had to be overcome (time management, self-discipline, referencing skills, asking for help) have helped me learn more about myself as a person: what I want, what makes me happy, what motivates me and where my own strengths and weaknesses lie.

So on reflection, I think that immersive and disruptive assessments do matter. We are all motivated by different things, and assessments that might result in a qualification, an accolade, a new career or a promotion can be seen as ‘strategic’ and in opposition to the notion of learning for its own sake. But they do help in spurring some sort of reaction, whether positive or negative, and that is itself a learning experience and outcome.


Bayne, S. (2005) ‘Deceit, desire and control: the identities of learners and teachers in cyberspace’ in Land, R. and Bayne, S. (eds) Education in Cyberspace, Abingdon, RoutledgeFalmer. 

Sfard, A. (1998) ‘On two metaphors for learning and the dangers of choosing just one’, Educational Researcher, vol. 27, no. 2, pp. 4–13.