Where’s the E in assessment?

June 9, 2014 in Blog post, Reader

It’s a standard kind of thing that lots of our lecturers do. Weekly tests to keep students on their toes and keep them thinking.  In my case, it’s a final-year module on web services, with eleven weeks divided into five main topics and fortnightly “objective” tests, delivered to 20-30 students.

In this post, I want to consider this particular type of assessment and see how the use of technology can impact upon it.

What

A small number of multiple-choice questions are used, sometimes in conjunction with code samples, to test basic understanding.  Over the last couple of years the questions have been delivered in a variety of formats including:

  1. physical, paper-based tests in class with emailed feedback
  2. in-class tests with paired students, with (non-E) voting systems, and immediate feedback
  3. downloadable question sheets, uploadable answers, emailed feedback
  4. online MCQ testing with immediate online feedback

In the first two formats, there is very little E.  The third relies on the VLE for communicating information, while the last is the most typical form of e-assessment, relying on the use of an independent MCQ platform.
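
To make the last of these more concrete, here is a minimal sketch of the kind of auto-marking and immediate feedback that sits behind format 4. It is written in Python purely for illustration; the questions, options and feedback strings are invented examples from the web services subject area, not the actual test bank or platform used.

    # Minimal sketch of an auto-marked MCQ test with immediate feedback.
    # Questions and feedback text are invented for illustration only.

    QUESTIONS = [
        {
            "text": "Which HTTP method is idempotent?",
            "options": ["POST", "PUT", "PATCH", "CONNECT"],
            "answer": 1,  # index of the correct option
            "feedback": "PUT replaces the resource, so repeating it has no further effect.",
        },
        {
            "text": "What does a 404 status code mean?",
            "options": ["Server error", "Not found", "Unauthorised", "Redirect"],
            "answer": 1,
            "feedback": "4xx codes are client errors; 404 means the resource was not found.",
        },
    ]

    def mark(responses):
        """Return a score and per-question feedback for a list of chosen option indices."""
        score = 0
        notes = []
        for question, chosen in zip(QUESTIONS, responses):
            correct = (chosen == question["answer"])
            score += int(correct)
            notes.append((question["text"], "Correct." if correct else question["feedback"]))
        return score, notes

    score, notes = mark([1, 0])  # one student's answers, one index per question
    print(f"Score: {score}/{len(QUESTIONS)}")
    for text, note in notes:
        print(f"- {text} {note}")

The point is less the marking itself than the fact that the result and the feedback come back to the student straight away, rather than by email days later.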

Why Test

If asked why I do regular testing (or when asked, as by Octel), my justification or explicit objectives would be based on a subset of something like Chickering and Gamson’s principles, such as:

  1. time on task – giving students something to aim for and to ensure engagement with the basic material
  2. high expectations – showing students the kinds of questions we would expect them to be able to answer
  3. prompt feedback – letting students know how well they are doing and whether they need to be changing what/how they are studying

And routine testing can help meet these objectives.  However, there is a risk that this approach does not deepen knowledge and understanding.  Instead, it might just direct students into learning for the test, a very superficial approach.

An almost equally important question to “why test?” is “why use technology to support testing?” While some might say that technology-supported testing offers a richer testing environment (for example, using video to present alternative routes through a real-life scenario), in practice many of my issues around e-testing are more about practicalities than pedagogy. It is all too easy to embed simple MCQ questions into online material to give an impression of interaction, without doing anything with the information.

Why Not Test

So how to avoid the pitfalls of superficial online testing?  It’s interesting to use the 12 REAP principles to reflect more on my practice, to make sense of what I have tried and to think about where else I could go. Although principles 1 (good performance), 2 (time and effort) and 3 (quality feedback) match objectives 1-3 above and could be viewed as already covered, there is clearly much more thinking that could be done.

First up, it is possible to argue that the MCQ testing in itself is not a challenging or interesting learning task, something that REAP promotes (principle 2).  Fortunately, in the module under discussion the MCQ testing is not done in isolation.  Alongside the formative testing, there is a parallel stream of (summatively) assessed practical tasks which provides more challenge.  Making clearer links between these tasks and/or synchronising the timing could reinforce the value of the formative tests and encourage a deeper approach to learning.  More detailed feedback could also provide an opportunity for the testing to impact learning (principles 4 and 5), as measured or guided by the other summative tasks, provided the learner engages in reflection (principle 7).

A superficial approach can also be avoided by supporting social interaction around the formative testing, promoting peer-supported, self-directed collaborative learning.  This is implicit in the classroom approach (format 2) that uses low-tech, “Strictly Come Dancing” style, colour-coded response cards which are shared by pairs of students.  The pairing approach works well in promoting discussion to select the correct answer, and the relatively small number of scorecards provides an easy way of assessing overall performance and providing feedback.  The fact that the feedback is given in a face-to-face environment provides more opportunity for, and encourages, dialogue (principle 6).

Challenges and Opportunities of e-Testing

The different ways of engaging in testing (online/offline, open/closed book, synchronous/asynchronous) emphasise different REAP principles, which might find favour with different teachers.  Interestingly, when students are asked which method they favour there appears to be less variation: they consistently prefer format 3, the open-book, asynchronous, VLE-facilitated tests.

While involving learners in decision making about assessment practice is one of the REAP principles (9), the preferred student option feels less authentic than the more interactive face-to-face option (format 2), and less demanding than the full-blown online MCQ with personalised feedback (format 4).  However, constraints on time (for format 2) and institutional support (for format 4) mean that format 3 is pragmatically more manageable for the number of students involved.

Despite the challenges of adopting a more varied testing format, REAP-inspired reflection does suggest a number of refinements to the testing process, in particular for entirely online students.  One way to increase student reflection, dialogue and the development of learning groups (principle 10) might be to start with individual tests and use the results to select mixed-ability learning groups.  The groups could then be tasked with debating and submitting a single set of agreed answers per group.  If gamification is seen as a motivating factor, group results could be published via a leaderboard.
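
As a rough sketch of how that might work (again in Python, with invented names, scores and group sizes), individual results could be dealt round-robin into mixed-ability groups, and the groups’ agreed answers scored and ranked:

    # Sketch: form mixed-ability groups from individual test scores,
    # then publish group results as a simple leaderboard.
    # Student names, scores and group sizes are invented for illustration.

    def mixed_ability_groups(scores, group_size=3):
        """Rank students by individual score, then deal them round-robin
        into groups so each group mixes stronger and weaker performers."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        n_groups = -(-len(ranked) // group_size)  # ceiling division
        groups = [[] for _ in range(n_groups)]
        for i, student in enumerate(ranked):
            groups[i % n_groups].append(student)
        return groups

    def leaderboard(group_scores):
        """Sort groups by their agreed-answer score, highest first."""
        return sorted(group_scores.items(), key=lambda item: item[1], reverse=True)

    individual = {"Ana": 9, "Ben": 4, "Cai": 7, "Dee": 6, "Eli": 3, "Fay": 8}
    print(mixed_ability_groups(individual))
    # [['Ana', 'Cai', 'Ben'], ['Fay', 'Dee', 'Eli']]

    print(leaderboard({"Group 1": 8, "Group 2": 10, "Group 3": 7}))
    # [('Group 2', 10), ('Group 1', 8), ('Group 3', 7)]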

The role of technology in testing

In thinking about testing, my first question was where is the “e-” in this type of assessment.  Or, more importantly, what makes it an e-assessment?  And does being an e-assessment mean it is not possible to undertake it without any technology support?  On reflection, the lines are blurred and the why is clearly more important than the how.  Technology shouldn’t be the deciding factor in whether we want to do paired or group testing, but it certainly helps scale things up from 30 students to 130.

And thinking about technology as an enabler makes it possible to think about re-engineering other assessment opportunities. Rather than just relying on students commenting on other people’s project suggestions in a forum, why not build a more structured online peer review element into the proposal stage … now there’s a (not very novel) idEa!
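
For what it’s worth, the allocation side of that idea is simple to engineer. A minimal sketch (Python again, with hypothetical student names and a hypothetical assign_reviewers helper) that gives each proposal the same number of reviewers and nobody their own work:

    # Sketch: allocate each project proposal to k peer reviewers.
    # Names are invented; "k" is the number of reviews per proposal and
    # must be smaller than the number of authors.

    def assign_reviewers(authors, k=2):
        """Round-robin allocation: each author reviews the proposals of the
        next k authors in the list, so nobody reviews their own work."""
        n = len(authors)
        assignments = {author: [] for author in authors}
        for i, reviewer in enumerate(authors):
            for offset in range(1, k + 1):
                assignments[authors[(i + offset) % n]].append(reviewer)
        return assignments  # proposal author -> list of reviewers

    for author, reviewers in assign_reviewers(["Ana", "Ben", "Cai", "Dee"]).items():
        print(f"{author}'s proposal is reviewed by: {', '.join(reviewers)}")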

