• Simon, lots of great thoughts, particularly about the instrumentalisation of education. In the more general sense of the term, this might mean the acquisition of skills, knowledge and understanding to achieve a particular action/goal – which is surely a good thing, whether the goal is self-referential (about the learning and subject itself) or…[Read more]

  • If only there were such a thing – check out new blog post at http://guy75telingstory.wordpress.com/2014/06/20/messaging-3-0-a-case-study-in-off-the-shelf-vs-in-house-technology/

    Having looked at some of the other posts, I would entirely agree with many of the issues raised.

  • A very quick post to say I have just been talking to staff about messaging 3.0, when as an institution we are not even up to speed with supporting Facebook and Twitter (assuming they count as Messaging […]

  • Building on the previous discussion about the difficulties of managing projects in HE, here is my contribution related to a project about moving forward with social media/VLE integration.

    Comments welcome as ever – though not sure how the discussion will continue post the end of the course!

    p.s. having included the #ocTEL hashtag in the…[Read more]

  • Julie

    Interesting to see your comment about agile (with small or capital A ;-)) being a less structured approach. Personally, I think Agile can be just as structured an approach as plan-driven ones – it's just that the structure is simpler and not all nailed down in advance.

    It's also interesting to see that the timeboxing approach used in most…[Read more]

  • As ever, I want a like button for all the positive contributions from Moira, Rose and Simon. I guess I can just favourite the whole conversation!

  • Rose

    great observations as ever. Having worked as a project manager in the IT sector, I find it so much harder to manage in an HE context. This is in part due to the scale of projects I have been involved in, but also the focus and culture as you suggest.

    I hope part of this comes out in my post referenced below.

    guy

  • This is a very quick post, I promise, reflecting on the experience of leading a project looking at the role and impact of social media and its value in learning and teaching. As well as describing the project, […]

  • James, great post and nice design.  Like C, being here in ocTEL is a way of reflecting on the online student experience, as well as being a good place to meet informative people and have a chance to think.

    And today’s thought, inspired by Jo’s recent reply (shown in the right activity pane) is how difficult it is to keep up with everything th…[Read more]

  • Peter, Joel, as for the work you folks are describing – I am just so behind the curve on this one.  Where do you get the time to build this stuff?!

    I dabbled with Second Life and thought that the immersive online experience added greatly to the idea of a situated event, but I didn’t have time to get very far.

    While I don’t have a suggestion for i…[Read more]

  • A not very interesting, quite old school exploration of how to move from learner-tutor interaction to a more peer-supported approach. I blame trying to read, think and write on a bus with a phone!


  • Please don’t say it’s the last but one week! So much left to do yet …

  • Check out my experiences of using tech in regular formative tests if you have the time/inclination – it’s just one type of assessment, but it shows how applying someone else’s principles helps reflection.

    Comments welcome as ever – and sorry I haven’t got around to commenting on other’s topics yet. There are some interesting sounding titles such as Moi…[Read more]

  • It’s a standard kind of thing lots of our lecturers do. Weekly tests to keep students on their toes and keep them thinking.  In my case, it’s a final-year module on web services, with eleven weeks divided into five main topics with fortnightly “objective” tests, delivered to 20-30 students.
    In this post, I want to consider this particular type of assessment and see how the use of technology can impact upon it.
    What
    A small number of multiple-choice questions are used, sometimes in conjunction with code samples, to test basic understanding.  Over the last couple of years the questions have been delivered in a variety of formats including:

    I. physical, paper-based tests in class with emailed feedback
    II. in-class tests with paired students, with (non-E) voting systems, and immediate feedback
    III. downloadable question sheets, uploadable answers, emailed feedback
    IV. online MCQ testing with immediate online feedback

    In the first two formats, there is very little E.  The third relies on the VLE for communicating information, while the last is the most typical form of e-assessment, relying on the use of an independent MCQ platform.
    Why Test
    If asked why I do regular testing (or when asked – by ocTEL), my justification or explicit objectives would be based on a subset of something like Chickering and Gamson’s principles, such as:

    A. time on task – giving students something to aim for and ensuring engagement with the basic material
    B. high expectations – showing students the kinds of questions we would expect them to be able to answer
    C. prompt feedback – letting students know how well they are doing and whether they need to change what/how they are studying

    And routine testing can help meet these objectives.  However, there is a risk that this approach does *not* deepen knowledge and understanding.  Instead it might just direct students into learning for the test – a very superficial approach.
    An almost equally important question to “why test?” would be “why use technology to support testing?” While some may say that tech-supported testing offers a richer testing environment (as shown by the use of video to present alternative routes through a real-life scenario), in practice many of my issues around e-testing are more to do with practicalities than pedagogy. It is all too easy to embed simple MCQ questions into online material to give an impression of interaction, without doing anything with the information.
    Why Not Test
    So how to avoid the pitfalls of superficial online testing?  It’s interesting to use the 12 REAP principles to reflect more on my practice, to make sense of what I have tried and think where else I could go. Although principles 1 (good performance), 2 (time and effort) and 3 (quality feedback) match A-C above and could be viewed as already covered, there is clearly much more thinking that could be done.
    First up, it is possible to argue that the MCQ testing in itself is not a challenging or interesting learning task, something that REAP promotes (principle 2).  Fortunately, in the module under discussion the MCQ testing is not done in isolation.  Alongside the formative testing, there is a parallel stream of (summatively) assessed practical tasks which provides more challenge.  Making clearer links between these tasks and/or synchronizing the timing could reinforce the value of the formative tests and encourage a deeper approach to learning.  More detailed feedback could also provide an opportunity for the testing to impact learning (principles 4 and 5) as measured or guided by the other summative tasks, provided the learner engages in reflection (principle 7).
    The avoidance of a superficial approach can also be addressed by supporting social interaction around the formative testing that promotes peer-supported, self-directed collaborative learning.  This is implicit in the classroom approach (II), which uses low-tech, “Strictly Come Dancing”-style, colour-coded response cards shared by pairs of students.  The pairing approach works well in promoting discussion to select the correct answer, and the relatively low number of scorecards provides an easy way of assessing overall performance and providing feedback.  The fact that the feedback is provided in a face-to-face environment provides more opportunity for, and encourages, dialogue (principle 6).
    Challenges and Opportunities of e-Testing
    The different ways of engaging in testing (online/offline, open/closed book, synchronous/asynchronous) emphasise different REAP principles, which might find favour with different teachers.  Interestingly, when students are asked which method they favour, there appears to be less variation: they consistently prefer option III – the open-book, asynchronous, VLE-facilitated tests.
    While involving learners in decision making about assessment practice is one of the REAP principles (9), the preferred student option feels less authentic than the more interactive face-to-face option (II), and less demanding than the full-blown online MCQ with personalised feedback (IV).  However, constraints on time (for option II) or institutional support (for option IV) mean that option III is pragmatically more manageable for the number of students involved.
    Despite the challenges of adopting a more varied testing format, REAP-inspired reflection does suggest a number of refinements to the testing process, in particular for entirely online students.  One way to increase student reflection, dialogue and the development of learning groups (principle 10) might be to start with individual tests, using the results to select mixed-ability learning groups.  The groups could then be tasked with debating and submitting just a single set of agreed answers for each group.  If gamification is seen as a motivating factor, group results could be published via a leaderboard.
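    None of this is tied to a particular platform, but the group-forming step described above can be sketched in a few lines of Python. This is purely illustrative (the function names and the score format are my own assumptions, not part of any VLE’s API): rank students by their individual test scores, then deal them out round-robin so each group spans the ability range.

    ```python
    def mixed_ability_groups(scores, group_size=4):
        """Form mixed-ability groups from individual test scores.

        scores: dict mapping student name -> individual test score.
        Students are ranked by score, then dealt round-robin into
        groups so each group contains a spread of abilities.
        """
        ranked = sorted(scores, key=scores.get, reverse=True)
        n_groups = max(1, len(ranked) // group_size)
        groups = [[] for _ in range(n_groups)]
        for i, student in enumerate(ranked):
            groups[i % n_groups].append(student)
        return groups

    def leaderboard(group_results):
        """Sort group results highest-first for publication as a leaderboard."""
        return sorted(group_results.items(), key=lambda kv: kv[1], reverse=True)
    ```

    With six students and a group size of three, the round-robin deal produces two groups, each mixing stronger and weaker performers, and the leaderboard is then just the agreed group answers scored and sorted.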
    The role of technology in testing
    In thinking about testing, my first question was where is the “e-“ in this type of assessment.  Or more importantly, what makes it an e-assessment?  And by the way, does being an e-assessment mean it is not possible to undertake it without any technology support?  But on reflection, the lines are blurred and the why is clearly more important than the how.  Technology shouldn’t be the deciding factor in whether we want to do paired or group testing – but it sure helps scale things up from 30 students to 130.
    And thinking about technology as an enabler makes it possible to think about re-engineering other assessment opportunities. Rather than just relying on students commenting on other people’s project suggestions in a forum, why not build a more structured online peer review element into the proposal stage … now there’s a (not very novel) idEa!

  • Having looked at Hill et al., I really didn’t like this. I felt the models were poor and the results not helpful.

    For me, technology or template selection comes way down the list after clearly articulated and motivated learning design. The way pedagogy and learning activities were muddled meant I would struggle to know how to use discussion…[Read more]

  • Glynn

    Interesting to see you focussed on sense of community and participation. For me, these are proxy measures for “Is this easy to use”, and “is it useful” – however you define the latter! For games, the useful bit might be, is this useful in wasting my time in an enjoyable way/helping destress me/… 😉

    guy

  • I was very interested to see technology selection as a topic for discussion in ocTEL a couple of weeks back.  It’s taken me a while to catch up, and in some ways I wish I hadn’t.
    The aim is to think about […]

  • I did the TEL one on learner perspectives after having tried the OER task 3.5

    You can therefore find my (very short – for other reasons) contribution on resource effectiveness here

  • Argggghhhhhh!

    I have just lost the best part of an hour completing the TEL one task here, having been sucked into it by checking out the Khan Academy to see if they had resources for the activity documented above.  And having written a whole bunch about it, I closed the wrong tab and lost everything 🙁

    Quick summary from memory then:

    Khan i…[Read more]
