Reflections on the first week of the FutureEd MOOC

I’m participating in the History and Future of (Mostly) Higher Education MOOC on the Coursera platform, run by Cathy Davidson of Duke University. The subject of this MOOC, billed as looking at the future of higher education, interested a number of colleagues at the Centre for Innovation in Learning and Teaching at the University of Cape Town, so we have formed a study group that meets once a week to talk about our experiences and to consider how taking a MOOC from an American university might be made relevant to a higher education institution in South Africa.

Having a local study group has been motivating. By the middle of the first week, I had already watched the videos, annotated the transcripts with notes, read some of the articles and engaged a little on Twitter, and felt reasonably prepared to discuss whether what the MOOC offers thus far is something that I and other colleagues can make use of in our context.

Thoughts on the MOOC Content

I’ve enjoyed the first week’s presentation of content and ideas, which set a background for framing the development of higher education in the light of what Davidson calls the four ‘information ages’. She links what changed in each age (writing; moveable type; mass printing, which distributed cheap literature; and the internet) to changes in the social relationships and communications between people, leading to new ways of communicating and interacting. Davidson’s point seems to be that such changes follow a pattern of what might be termed ‘moral panics’. The availability of information in forms that led to social changes produced reactions that might seem to us now patently absurd. For example, in the first information age, Socrates thought that writing would lead to a deterioration in the dialogic process, while the consumption (by women) of the low-brow novels made possible by mass printing was thought to open them to sexual predators, as well as to challenge structures of authority and convention. The implication is that the concerns brought about by what Davidson calls the fourth information age, which started in April 1993 with the availability of the internet and the world wide web, might be regarded by future generations as equally absurd.

Having framed the history through the lens of the four information ages, Davidson considers the new literacies required to learn in these ages, which education should be supporting and promoting. These include understanding privacy, security and intellectual property well enough to protect and negotiate them. The ability to collaborate, to establish credibility through ‘crap detection’ and to be globally conscious are other ’21st century’ literacies. Thus far, the course has set the stage and pointed to the types of discussions that are likely in future weeks, centred around imagining the higher education of the future.

The study group’s thoughts

The study group members had a thoughtful discussion about their participation in the first week. The content itself was considered interesting, although at times simplistic, perhaps pitched at the level of an undergraduate textbook, and with predominantly US-centred examples. Those new to MOOCs found the Coursera platform rather busy and a little clunky, although more experienced MOOCers shared tips such as using transcripts to get the gist of the videos. The first week’s quiz caused some bemusement, as it was seen as either so easy as to be a bit pointless, or as having questions where all the answers seemed right (tip: answer ‘all of the above’ to most of the quiz questions)! More interesting discussions centred around the motivation of this particular MOOC. Who is the MOOC really for? Can global participants, such as those in Africa, really contribute to and influence the learning of others, as the course is billed to achieve? Or is it about taking what is offered (for free) and making the course relevant (localising it) to particular contexts? As seen in this Google Hangout, Cathy Davidson’s face-to-face class at Duke appears to be studying this MOOC as part of their own course, and these students seem to be engaging with MOOC participants as a way of finding out more about MOOCs, almost in a strategic way. Are MOOC participants guinea pigs or test subjects for another, more exclusive course? Furthermore, this interesting blog post argues that global MOOC participants might help internationalise American higher education, which could be seen as a happy by-product of the global nature of MOOCs, but also as a strategic view of the ‘benefits’ a global cohort might bring to a particular group.

So for me, the real value of the MOOC is the discussions it has engendered, which appears to be its aim. Members of the study group are sufficiently intrigued to continue with the MOOC and look forward, in particular, to seeing how the peer review assessments will work.

Image courtesy of Sira Anamwong / FreeDigitalPhotos.net

MOOCs: is your glass half empty or half full?


If there were a book called ‘MOOCs: A History’, it would need an update. For anyone following MOOCs, this past week’s big news has been the decision by Udacity and its CEO Sebastian Thrun to effectively stop being a MOOC provider and instead offer what looks like corporate elearning training and/or standard online courses. As someone who is currently enrolled in a Udacity statistics course, I received an email inviting me to a whole new course experience designed to help me succeed, where I would have access to coaches giving personalised feedback and where I’d work on projects to integrate what I have learned. As to the status of its previous (free) courses, here is what Udacity has to say (emphasis mine): ‘Our courseware remains available for free. You can still watch all the videos, take all the quizzes, and sweat through the programming exercises on your own. But you might be missing out on a truly transformational learning experience. And you won’t earn the certificates that are recognized by industry.’ So it’s very clear that the free MOOC experience is definitely inferior to a (paid-for) transformational learning experience, and the (free) statistics course I am currently enrolled in is now no longer a course but… er… ‘courseware’.

Unsurprisingly, there has been a torrent of blog posts and Twitter chatter about what this means for MOOCs and the MOOC movement. Does this signify the death of the MOOC, or is this a mere failure of Udacity and Sebastian Thrun, as George Siemens writes? Or is this really about what happens when you apply the venture capital model to education: inevitably, at some point, you need to make it pay, which in Udacity’s case means having to charge students. Audrey Watters points out that Udacity’s new direction effectively decides what sort of students are desirable, and it’s not the ones whom the open education movement was designed to help (those who need it most). A more charitable interpretation may be to see Udacity as a company experimenting with open and online learning and needing to find a sustainable business model. Udacity’s own rationale, given in this somewhat bizarre interview with Sebastian Thrun, is that the company found through data that serving content, no matter how engagingly, does not equate to learning for particular constituencies of students; in effect Thrun calls Udacity’s original offerings ‘lousy’ and states he never meant to disrupt the traditional four-year liberal arts college-style education anyway. So for Udacity perhaps the MOOC is dead, at least the ‘massive’ part and the ‘open’ part.

Udacity isn’t the only organisation to ‘realise’ this. Harvard is among a group of elite universities in the EdX consortium looking at offering SPOCs (small private online courses), having ‘discovered’ that smaller classes allow for more successful teaching, assessment and accreditation, in ways MOOCs can’t. As Martin Weller points out in this blog post, and as Tony Bates argues, if any of these companies (and elite universities) had looked at the evidence and experiences of online and distance education, perhaps some of these painful experiments might have been avoided.

But are we really back to square one? Does (successful) online learning always require smaller, supported (and expensive) classes? Is the promise of offering education on a massive scale at little or no cost one that inevitably leads to a poorer level of education for those students? Certainly Udacity’s oft-cited experiment of offering foundational courses to San Jose State students would seem to support this premise: the students who took Udacity courses achieved lower pass rates than students who attended face-to-face classes (one interesting caveat is that these were students who might not have been able to attend face-to-face classes). And the research around MOOCs thus far clearly shows that the more successful MOOC participants seem to be those who already have degrees and whose motivation is professional development, suggesting that those perhaps in most need of education are least equipped to take advantage of the free learning available in MOOCs. So MOOCs have perhaps confirmed what we already knew, but I think it might be more fruitful to acknowledge this as a Massive Open Research Study that has validated some of the experiences and results of those working in online and distance education, rather than bemoaning (albeit with some justification) the fact that some of us already knew this and that there is little to learn from the current xMOOCs.

After all, this is a period of experimentation and arguably, if the (x)MOOC providers hadn’t laid their offerings on the table, pedagogical flaws and all, and invited (literally) the world to join in and participate, we probably wouldn’t have the validation of online and distance learning or the appetite for experimenting – a development that has moved some campus-based universities to get excited about the potential of online learning. It’s an uncomfortable truth perhaps that it has taken elite universities to effect this shift (just as it was MIT who arguably ‘led’ the way with Open Educational Resources). Indeed there are some useful courses among the xMOOC providers that can assist students in getting to grips with a field of knowledge, acting as a ‘networked textbook’ as Dave Cormier suggests, and others that are going beyond the so-called behaviourist pedagogy ascribed to xMOOCs. (And frankly, Udacity was always quite different from Coursera or EdX (and FutureLearn) in that it was geared towards a more workplace-based approach and focussed primarily on technical skills.)

So on the question of where now for MOOCs, I’d be tempted to see the glass as half full – lessons are being learned and opportunity beckons. Enabled by digital, networked and (sometimes) open technologies, MOOCs are only one part of broader changes in the way learning might be designed, delivered and assessed. The earlier MOOCs emanating from Canadian universities espoused a different pedagogical approach, one which perhaps has greater potential for applying innovative pedagogy to online learning that could change the way people learn. MOOCs such as Connectivism and Connective Knowledge (CCK) and the Personal Learning Environments, Networks and Knowledge online course (PLENK) offered learning experiences that were (relatively) massive in scale and applied connectivist principles, where the course and curriculum were created and supported by the community of learners; these experiments continue to grow, as the success of DS106 attests, but it’s also clear they don’t suit all types of students. Furthermore, the OER movement is still alive and kicking and, with the launch of the OER University, entering a new phase: the desire and commitment to provide meaningful education to those who need it, especially in developing countries.

We may have expected too much of MOOCs – it’s time to put them in their place.

Image courtesy of Free Digital Photos / FreeDigitalPhotos.net

Engagement and motivation through badges

Tamagotchi by stopsign, on Flickr (Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License)

Feed me!

When considering what motivates students to participate in a learning intervention, learning designers spend a lot of time designing activities to engage and interest students. However, research indicates that students tend to respond to the assessment regime rather than the learning objectives. This means that students will look at what is required to pass the course (the hidden curriculum) and then focus their activities on fulfilling the requirements of that particular task (Gibbs, 2010).

I have found this when working in a corporate e-learning environment, where the learners’ objective is to get through the multiple choice quiz so as to be deemed competent, so the approach is to click quickly through the screens of an elearning module (or in some cases ask a colleague to do it). In my Master’s course some students have never appeared in any online discussion forum or participatory activity, as it is possible to pass the module by turning in the written assignment (which I presume they did). This is understandable and a sensible strategic approach to gaining a qualification or fulfilling the needs of a workplace learning scheme. Is it learning? Well, it’s really impossible to say what the participant has or hasn’t learned, but they have not ostensibly participated in the learning process designed by the instructional designer or faculty. By process, I mean the pathway comprising the range of activities designed to support the learning and practice of the skills and competencies that serve the learning objectives – in short, the learning design. For example, an activity to stimulate dialogue and build peer review skills might ask students to read a paper on a subject and post a summary and response in a blog post, inviting comments.

Non-participation in activities becomes more visible in an online or blended environment, as students do (or do not) leave digital footprints. Yet, as these activities are ‘just learning’ and have no bearing on the assessment (typically an end-of-module exam or paper), a significant proportion of students don’t bother. This can diminish the effectiveness of online courses, which are often designed for and depend on learner-learner interaction.

If it is assessment rather than the learning activities that motivates (or spurs) the larger proportion of students to participate in the learning process, a close alignment of the learning and assessment processes could motivate more students to participate. This is in itself not a new concept: good teachers have always taught in a way that aligns the learning that takes place in a classroom with the assessment. This could mean that artefacts produced through learning activities are included as part of, or are necessary to, the assessment itself, or that assessment is ongoing, requiring frequent participation and engagement. In formal online learning and in MOOCs this poses a challenge due to the distributed and generally asynchronous nature of online learning, and it is used with care by learning designers as it affects the flexibility of online learning. After all, it is this type of learning that attracts the type of person who might not be able to study in any other way (formal campus-based learning requires being in one place at a certain time). Working professionals or people with caregiver responsibilities may appreciate the flexibility that online learning enables.

What methods or strategies might spur engagement, or at least encourage learners to participate in the learning process as the learning designers intended? I wrote about Open Badges for my final (possibly ever) assignment for the Open University Masters in Online and Distance Education. Open Badges are digital artefacts that learning designers can use in a course to recognise achievements or reward participation at a lower level of granularity than traditional assessment, and so might motivate learners through immediate gratification or game-like reward. Learners can earn badges for specific tasks within a course that acknowledge a skill or completed activity, thus building and visibly showing progress and gaining something tangible they can show on a profile. Open Badges are embedded with metadata and can be displayed by the earner in a medium such as a personal blog or a LinkedIn profile. A third-party reviewer of the badge can click on it to learn more about the skills attained and even access the original piece of work that was required to earn the badge.
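For the technically curious, here is a minimal sketch of the kind of metadata a badge might carry, loosely based on the Mozilla Open Badges assertion format (Mozilla, 2012); every URL, value and the helper function below is a hypothetical placeholder rather than a definitive implementation:

```python
# A sketch of the metadata carried by an Open Badge, loosely based on
# the Mozilla Open Badges assertion format (Mozilla, 2012). All URLs,
# field values and the helper below are hypothetical placeholders.

badge_assertion = {
    "uid": "h817-peer-review-001",
    "recipient": {
        "type": "email",
        "identity": "sha256$<hashed-learner-email>",  # hashed to protect privacy
        "hashed": True,
    },
    # BadgeClass document: badge name, criteria and issuer details
    "badge": "https://example.org/badges/peer-reviewer.json",
    # How a third party checks the badge is genuine
    "verify": {"type": "hosted", "url": "https://example.org/assertions/001.json"},
    "issuedOn": "2013-06-25",
    # The original piece of work that earned the badge
    "evidence": "https://example.org/learners/jo/blog-post-review",
}

def links_for_reviewer(assertion):
    """What a third-party reviewer effectively follows when clicking a badge."""
    return {"criteria": assertion["badge"], "work": assertion["evidence"]}

print(links_for_reviewer(badge_assertion))
```

It is this embedded metadata that lets a reviewer move from the displayed badge to the criteria behind it and to the learner’s actual work.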

Badges therefore give learning designers another assessment-type tool, less scary than a formal assessment, and a collection of badges can be rolled up into a larger super-badge, which could provide a pathway to another form of certification (see the sketch below). If a course designer chooses to use badges for motivation and engagement, they need to decide what activities are worthy of a badge. This in itself gives the learner an indication of the importance of the activity and/or the type of skills that will be developed if the learner chooses to earn the badge. Badges can help make the learning design explicit, perhaps helping to bridge the intention of the learning designer and the learner’s understanding of what is important.
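To make the roll-up idea concrete, here is a hedged sketch of how a super-badge might be awarded once a defined set of smaller badges has been earned; the badge names and criteria are invented for illustration:

```python
# An illustrative sketch of 'rolling up' badges: a super-badge is
# awarded once a learner holds a defined set of smaller badges.
# The badge names and criteria below are hypothetical.

SUPER_BADGE_CRITERIA = {
    "Peer Reviewer": {"Posted a summary", "Commented on two blogs", "Revised own post"},
}

def earned_super_badges(learner_badges):
    """Return the super-badges whose required badge sets the learner holds."""
    return [name for name, required in SUPER_BADGE_CRITERIA.items()
            if required <= set(learner_badges)]

print(earned_super_badges(
    ["Posted a summary", "Commented on two blogs", "Revised own post"]
))  # ['Peer Reviewer']
```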

Earning badges therefore gives learners opportunities to collect rewards, but this still might not appeal to learners who feel they have nothing really to gain. Using the idea behind loss aversion, what if users were given a number of badges at the start of a course – partially complete, perhaps? If they participate in learning activities or meet certain targets, they gain more badges. However, if they do not participate, they lose badges. I wonder whether the prospect of losing badges might be more of a motivator than gaining badges (you can’t lose what you haven’t got yet). This sounds like one of those digital pets you have to keep feeding to keep alive🙂. What if, in order to get through the course, you have to metaphorically keep feeding it? How you do this might have elements of choice, but features of badges such as expiry dates or the ability to hold more or less information pose some interesting learning design opportunities.
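As a thought experiment, the digital-pet mechanic might look something like the sketch below, where a provisional badge is lost if it isn’t ‘fed’ by participation; the class, the feeding interval and the badge name are all hypothetical:

```python
# An illustrative sketch of the loss-aversion badge mechanic described
# above: learners start with provisional badges that decay without
# participation. Names and thresholds are hypothetical.

from datetime import date, timedelta

class ProvisionalBadge:
    def __init__(self, name, feed_interval_days=7):
        self.name = name
        self.feed_interval = timedelta(days=feed_interval_days)
        self.last_fed = date.today()  # issued at course start, partially complete
        self.alive = True

    def feed(self, activity_date):
        """Participating in a learning activity 'feeds' the badge."""
        self.last_fed = activity_date

    def check(self, today):
        """The badge starves (is lost) if the learner stops participating."""
        if today - self.last_fed > self.feed_interval:
            self.alive = False
        return self.alive

badge = ProvisionalBadge("Discussion contributor")
badge.feed(date.today())
print(badge.check(date.today() + timedelta(days=10)))  # False: the badge is lost
```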

References

Gibbs, G. (2010) Using Assessment to Support Student Learning, Leeds, Leeds Met Press.

Mozilla (2012) Open Badges for Lifelong Learning [Online]. Available at https://wiki.mozilla.org/File:OpenBadges-Working-Paper_012312.pdf

H817 – surviving with style

I submitted my final paper for H817 Openness and Innovation in elearning yesterday, with some relief. It’s been a long module, starting in January, and I’ll reflect more on what I have learned in some blog posts to come. H817 feels like the most challenging module so far on the path to the Master’s in Online and Distance Education, so it is fitting that it is the last one for me. But this post is just to acknowledge the end of what has been a transformative experience over the last two and a half years. Studying at a distance could have been isolating and lonely, but it has been anything but. I’ll remember and apply a lot of the theories and practices, but I’ll also remember the moments of interaction, of peer support, of sharing the ‘joy’ and pain of collaborative group work and the collective stresses of writing assessments with ever-diminishing word count allowances. This was a block about innovation, so it is fitting that, in a fit of mad posting on the secret student Facebook group at about midnight last night, the idea for a badge was born and, thanks to fellow student David McDade, made into a reality. So I display my H817 Survivors with Style Badge with pride and with gratitude to the fellow students who have made the journey meaningful.


Principles of assessment for learning

This post discusses and suggests key principles of Assessment for Learning, part of Block 4 of the OU Master’s module H817 Openness and Innovation in elearning


Assessment for Learning (AfL) is not Assessment of Learning. Assessment of Learning broadly equates to grading, marking and comparing students with each other, and is usually done (to learners) at the end of a course. I suppose that is what most people might understand as ‘assessment’, drawing on their own experience of school and higher education, where the final exams and tests are what really matter, at least in terms of the results.

This week’s readings covered the motivations of the Assessment for Learning movement, which arose in response to the assessment-of-learning approach and drew upon research showing that AfL can help improve learning outcomes (ARG 1999). Assessment for Learning

is the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there. (ARG 2002)

In AfL, assessment is ’embedded in a view of teaching and learning of which it is an essential part’ (ARG 1999) – learners know what they are aiming for and may take part in self-assessment. Other characteristics include believing that every learner can improve, and giving learners the type of feedback that empowers them to understand how to take the next steps in their learning journey.

Some key principles of assessment FOR learning might be that the assessment:

  • Is integrated as part of the learning activity
  • Is formative (informed feedback to learners)
  • Gives learners clear goals
  • Involves learners in self-assessment
  • Is adaptive – teaching adjusts in response to it
  • Motivates and raises the self esteem of learners
  • Enables teachers and learners to reflect on the evidence collected.

The AfL paper was written in 1999, and over a decade later I recognise many of the AfL practices and principles, although they may not necessarily be recognised as ‘assessment’ so much as good teaching practice. However, assessment of learning still seems relatively entrenched, although there might be more of a blurring of the boundaries between the two. For example, a project-based activity may have an end grade, but milestones along the way may involve peer assessment and formative feedback.

References

Assessment Reform Group (ARG) (1999) Assessment for Learning: Beyond the Black Box [online], http://assessmentreformgroup.files.wordpress.com/2012/01/beyond_blackbox.pdf (accessed 25 June 2013).
Assessment Reform Group (ARG) (2002) Assessment for Learning: 10 Principles [online], http://www.assessment-reform-group.org (accessed 25 June 2013).

Image courtesy of chanpipat / FreeDigitalPhotos.net

Storyboards and prototypes

This post reflects on constructing a storyboard and a prototype for a mobile learning outdoors activity as part of the Learning Design Studio for H817 Openness and Innovation in elearning

Imagining – the storyboard
The storyboard is a visual mapping out of the proposed learning activity. Although we were working in a group, we created individual storyboards first. My storyboard grew out of the work done at the research phase, and I also used the personas’ forces to inform the design of activities. I created mine in Linoit, which worked well for creation but was not easy to export into a format that could be embedded in a website or document, so in the end I took a photo of the screen.


Storyboard
My storyboard spanned three phases and detailed a number of activities. While I was developing it, it became clear that I was designing for two audiences, school groups and day visitors, and that the school trips would need activities extending beyond the site visit. I therefore developed an approach with classroom activities either side of the site visit, where the activities at the site could also be done stand-alone using resources available either at the Visitor Centre or through the website. During this time it was the case studies I had analysed that provided the greatest influence, although I also referred to the personas, asking myself: would this persona be interested in this activity?

Two team members completed storyboards, and three members discussed them in a Google Hangout. As there were considerable similarities, we agreed to synthesise the two boards using the Google template provided, adopting the overall approach of a learning activity that spanned classroom and on-site learning. Following this, one team member agreed to construct a team storyboard, which would be approved by the whole team in another collaborative meeting. This stage also saw a narrowing of the scope to just myths and legends, rather than a more general, all-encompassing approach that might have covered other subjects. The team discussion was a strength here, as members were able to use the visual aspects of the storyboard to discuss options and differences. The advantage of the collaborative storyboard was its visual display of the learning activities: we could see which activities were cluttered and unclear, and we could move them around to see what we were expecting participants to do. I could see that the process was important for helping the activity form in our imaginations; for example, we placed activities for a given stage in a non-linear way, which intuitively showed that the activities did not need to follow a prescriptive sequence and were more fluid.

The storyboard was not without its problems, though: in order to fit text into boxes, and even into the overall canvas, we may have oversimplified some things, and the level of granularity needed for descriptions was not always clear. The storyboard worked at an activity level, but I am not sure how it would scale to a course level, given the level of collaboration and discussion that was required.

Making it real – creating a prototype

Before we could build a prototype, the team had to decide what features to prototype. The mechanism for this was developing a features table by extracting features from the storyboard. This proved to be a useful activity, not only for deciding which features were important to prototype but also as a reality check of whether the activities on the storyboard made sense. The features table comprised a list of items broken down into discrete components, such as the text instructions for a page, the actual page (say a web page), any media elements associated with that page, and any functions that might need to be built into a web page. There was initial confusion about the task, as it required some understanding of what was meant by a ‘feature’ and also re-translating the original instruction of organising by ‘scene’ (as in a movie, or a series of screens?) into something that made sense for our project. For this, I split our activity into four scenes: scene 1 was the activity in the classroom prior to the site visit, scene 2 was activities at the Visitors’ Centre, scene 3 was activities in the outdoors at the Giant’s Causeway and scene 4 was post-visit activities to create the artefact. Once this organising principle was in place, it became easier to extract the features.

Creating a prototype for an outdoors activity for a mobile learning app was a challenge, as we did not have a realistic chance of actually building an app and we were working as a virtually distributed team. However, putting myself in the learners’ position prompted me to suggest that we could develop the website that would serve as the ‘home’ for the activity – where users could go and download the app and the brochure, where they could see what stories had been told, and where teachers could find more information. This would at least serve as an authentic experience for someone deciding whether to take part in the activity, planning the field trip or wanting to submit their story.

Further, in order to prototype the app functionality, we agreed to create PowerPoint presentations of the steps the user would go through to see the augmented reality and the other features of the app. There was no time to actually develop the app, but we were able to create one example of augmented reality, which was recorded and uploaded onto the site.

I quickly created this site using Google Sites, adding holding text in the relevant sections, and then another team member made the PowerPoint presentations that demonstrated some of the screens of the app, showing what functions were available.


Screenshot from prototype website showing mobile app interface mockup

Constructing the prototype and seeing parts of the possible product come to life was a motivating moment as there was something tangible beyond the discussions, ideas and the storyboard.

Prototypes such as this have their limitations. In this case, the heart of the activity is the mobile app, but without the time or resources to build it, we could only demonstrate in a presentation how some screens and features of the app might work. This is a limited way to assess whether a mobile learning activity is going to function in context, where connectivity and speed, as well as other environmental variables, might be factors. These aspects might not be picked up in a prototype, and I’d argue that for a mobile app a beta version would be required for adequate testing in the field.

However, the prototype we built was useful for communicating what the activity might involve for learners and field trip organisers, and would be useful in situations where one needs to establish a proof of concept or get buy-in. Simple mockups can prevent expensive programming that has to be discarded or (possibly worse) the continuation of a problematic product because it is too late or expensive to make changes.

Researching the design challenge

This post reflects on the research conducted at the Inspiration and Ideation stage of the Learning Design Studio, which is part of the OU module H817 Openness and Innovation in elearning.

The Inspiration and Ideation stage of Learning Design was to review case studies and theoretical frameworks in order to develop design patterns and principles. The purpose of this was to address the design challenge based on evidence of what might have been done before and on case studies of learning designs in similar contexts.

The challenge

In my previous blog post, I described the process of articulating the context. The context refers to the design space and the concerns of the learners, and is what the designers have to work with. The challenge is the change one wants to effect in that context. For our project, the challenge was:

to design a mobile learning application that will facilitate learning through the exploration of the UNESCO World Heritage Site of the Giant’s Causeway, Northern Ireland. The project is geared towards young people aged 10-14, but will also be applicable to overseas tourists and students. The challenge is to engage the learners while at the site as part of a broader seamless learning approach, where the learning can be consolidated back in the classroom. Typically, the learners will visit the site as part of a field trip, and the challenge is to use mobile and social devices to support learning goals around understanding and appreciating the historiography of the site, including discussion of the legends and myths and of how people in history have interpreted the site and how it came into being.

It was important to articulate the challenge so that the research would look for similar contexts, such as field trips, local history, learning in outdoor environments (as opposed to museums) and the use of mobile devices.

Reviewing the evidence

I chose two case studies that seemed to align with the context and the challenge. This was an enjoyable and interesting activity that increased my own understanding of mobile learning ‘in the wild’, of approaches to learning about history using augmented reality and geo-location, and of group interactions in informal learning contexts with mobile devices.

  • The first case study was a study of audio tours around Nottingham, UK (FitzGerald et al., 2013), which compared a people-led tour (with human guides narrating scripted audio) and a technology-led tour (with smartphones, where GPS activated audio at selected points) on the subject of the interpretation of the 1831 Reform riot.
  • The second was an account of designing field trip activities for primary-level students visiting the Chinatown area of Singapore using mobile devices (So et al., 2009), where the learning outcome was to facilitate knowledge building in an outdoor location. This study demonstrated how the design evolved from an instructivist approach of consuming information at points on the field trip to one where students used Google Maps to track their own journey, making and sharing notes and observations with peers and culminating in a review of the artefacts students had made.

For each case study, I wrote a design narrative summarising the key points and the lessons that might be derived for designing mobile learning in the field. Another team member contributed further case studies and narratives, so the team had a body of evidence from which to derive design patterns and principles.

The Learning Design process instructed the team to write design patterns and principles derived from the insights gained from analysing the case studies in order to ‘formulate [the] insights into “building blocks” for design’ (OU module materials, Mor, 2013).

Despite the existence of a template, deriving the patterns was challenging, partly because I wasn’t sure what level of granularity or generality to aim for. Using the design narratives as the source, I devised a pattern about promoting critical thinking and higher-order skills on a field trip through clear learning goals (preferably learner-derived) as well as specific activities.

In addition to the case studies, I also chose a theoretical framework for mobile learning (Herrington et al., 2009), which informed a principle about using the learners’ own devices and another about using mobile devices to produce as well as consume knowledge. These were uploaded to the group’s project site for review and discussion.

The case studies, design narratives, patterns and principles helped to develop some key design directions that the team could agree on. The possible tension between mobile devices acting as distractions in an outdoor setting and their possible negative effects on group cohesion was something we had already identified in the personas’ forces and concerns, and it surfaced again in the patterns and principles, so we knew we’d need to take account of it in the design. The opportunity to use the affordances of mobile devices to enhance learning in contextualised situations and at outdoor sites was an exciting prospect to inform the design, especially where co-creation of content and development of an artefact could be part of the activity. The process also informed what we would not do: we would not simply deliver media to users’ mobiles at points around the site, and we would not make the activity linear and prescriptive. These insights helped when it came to developing the storyboard.

Reflections

Reviewing case studies of similar contexts and challenges and finding theoretical frameworks that can inform a design is something I’d do instinctively, especially in a context I am unfamiliar with. This activity took a more structured approach to reviewing case studies and theoretical frameworks, with the additional instruction to derive patterns and principles. The idea is to have a bank of patterns and principles that a learning designer can call upon. Writing a pattern was unfamiliar, and it took some time to understand conceptually not only its purpose but also what a pattern looked like. Was it a recipe? Was it an FAQ? Was it a list of dos and don’ts? What is the level of detail? Who is the pattern for? What makes a good pattern? Are patterns peer reviewed?

Looking at some other patterns was useful, but many of the examples we were pointed to seemed incomplete. This is one of the activities I would like to go back to, understand better and consolidate because, although I derived a number of patterns and principles for our project, I am not sure that in a future real-world project I would go out and seek other patterns (where would I go? wouldn’t it be better to read the actual case study or design narrative?), nor would pattern writing necessarily be something I would include in a learning design process I was managing, where time becomes a constraining factor.

One final thought about the importance of evidence-based design is that it brings a level of objectivity to the process. While this was not an issue in our team, developing narratives, patterns and principles could also help defuse tensions and disagreements in teams, or with stakeholders holding differing views about a design direction, by using the objective (but still interpreted) evidence to inform the design.

References

FitzGerald, E., Taylor, C. and Craven, M. (2013) ‘To the Castle! A comparison of two audio guides to enable public discovery of historical events’, Personal and Ubiquitous Computing, 17(4), pp. 749–760. Available at http://oro.open.ac.uk/35077/

Herrington, A., Herrington, J. and Mantei, J. (2009) ‘Design principles for mobile learning’ in Herrington, J., Herrington, A., Mantei, J., Olney, I. and Ferry, B. (eds) New Technologies, New Pedagogies: Mobile Learning in Higher Education, Sydney, Faculty of Education, University of Wollongong, pp. 129–38. Available at http://ro.uow.edu.au/edupapers/88/

So, H.J., Seow, P. and Looi, C.K. (2009) ‘Location Matters: Leveraging Knowledge Building with Mobile Devices and Web 2.0 Technology’, Interactive Learning Environments, 17(4), pp. 367–382. Available at http://www.editlib.org/p/64762/