Experiences or expectations of e-assessment and e-feedback

This topic contains 23 replies, has 13 voices, and was last updated by  jojacob_uk 6 years, 1 month ago.

  • Author
  • #21915


    Post a message about your experiences or expectations of e-assessment and e-feedback to support student learning. For example,

    • Why did/would you choose a particular type of e-assessment? Describe why you think it is effective and how it can help deepen knowledge and understanding.
    • In your experience, what type of approach creates an environment conducive to self-directed learning, peer support and collaborative learning? How might technology help?
    • What opportunities and challenges does this approach present to tutors?

    If you are new to designing and implementing e-assessment and e-feedback, you may find it useful to begin by reading pages 1-17 of the Jisc publication Effective Assessment in a Digital Age (pdf). You are also encouraged to draw on your own experiences, either as a tutor or as a learner, or both.

  • #22016


I've been carrying out a full-scale trial of Turnitin to see how the whole marking process can go – at undergraduate level – from first marking/feedback to moderation by the course organiser (CO).

The advantage of the process seems to be that feedback can be released to students before the moderated mark, hence providing a prompt feedback experience for students.

However, the CO has commented that the moderation process is very awkward in the electronic version, because there is not the flexibility of browsing through hard copies of assignments, and the whole process takes much longer as a result. Any tips or experience on that part of the process, please?


    • #22235

      Tom Franklin

      The first thing I would say is that different technologies (including paper) have different affordances, so that you will lose some things that you gained from paper and gain a variety of different ones. It also means that a certain amount of effort will be required to get used to a slightly different way of marking.

In Turnitin it is possible to download the scripts for offline marking if you are using the iPad app, and that might make browsing through all the scripts easier. Equally, different scripts can be opened in different windows, so that markers would then just have to flip through the various windows (or tabs).

      Don’t know if that helps at all.

  • #22080


Hello World Expo.

This is always an issue between scanning a hard copy and doing it online. Technically it can be done in Turnitin Grademark by clicking on the content list icon (dark background), which then generates a list by page number and number of comments per page.

[Screenshot: content list in Grademark]

    Of course this requires graders (internal and external) to adopt a different approach which may be out of their comfort zone.  Compare and contrast with having to download papers and send them on to the correct recipient either as a zip file or as printed copies.

    Regardless of this issue, the main concern is surely the quality of feedback given in a reasonable timeframe.

  • #22159

    Linda Creanor

Yes, I think Jim’s right – it’s having to do things differently which makes people uneasy, and it’s true that reading long documents online can be awkward if you’re not used to it. Once markers get used to grading and giving feedback online, however, they generally find they can do it more quickly over time. Most importantly, students really appreciate easy online access to feedback and grades – maybe that’s the most persuasive argument of all! It certainly seems to be a big driver for us here at GCU.

    If online marking makes students pay more attention to the feedback they’re given, even before they get their grades, that’s a big plus point for enhancing learning (and addresses some of Nicol’s principles for good feedback noted in the Jisc publication http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassass_eada.pdf).

    Does anyone else have any ideas on how they’ve dealt with this challenge?

    I’m afraid the screenshots above aren’t displaying for me 🙁



  • #22161

    Linda Creanor

    I take that back – I can see the screenshots now – must have been my slow machine…


  • #22163


    Thanks for your comments, gcujime and Linda.

I am well aware of Grademark – I made a point of knowing it well in order to get colleagues to “meet” the system.

As with meeting someone, and as you point out, the difficulty is in managing change – when people have so many other things to manage anyway…

    I suspect moving from hard copy to electronic would only cut the feedback turn-around from 3 weeks (feedback+mark together on hard copies) to 2 weeks (electronic feedback alone, then moderated mark later on).

Is that a big enough difference to justify the struggle?

  • #22197

    Rose Heaney

    Hi World expo

If I were extolling the virtues of online feedback & marking, I wouldn’t major on faster feedback, as I suspect that might well not be achieved in the early days.

    However, there are a lot of potential advantages – if it’s used well – as Linda says above.

    For students:

    • easier access to feedback  – at our institution (UEL) more students view online feedback than ever collected hardcopy
    • better feedback quality e.g. no legibility issues, more consistency if quickmarks are used well (see below)
    • students have ready access to feedback for all their assignments in a way that rarely happens with hardcopy
    • audio feedback option is appreciated by our students

    For staff:

    • ability to see Originality report alongside the submission
    • more consistency across large cohorts with multiple markers who can share rubrics, quickmark comment banks and see at a glance how other markers are operating
    • easier administration of scripts where there are multiple markers on large modules
    • integration with the Moodle/Blackboard gradebook
    • offline marking via iPad

    I know staff who were originally resistant who have been won over once they started using it. However they need to be supported as not all features (and benefits) are apparent at first glance.



  • #22233


    Thanks for your insights, Rose and Santanu.

Re Santanu’s question: standardisation is the S word in a world of (academic) freedom. In my current project, we’ve decided with colleagues to talk about harmonisation, so that individual staff are not made to feel like machines but like individuals – which they definitely are, and for the better.

    As far as students’ course and programme evaluation goes, there is a lot of evidence in favour of ‘standardised feedback’ (Santanu’s words 😉 ).

    Ultimately, it’s about the student’s experience, and as for many experiences, students expect – rightly or wrongly – a degree of consistency, including in terms of feedback.

A subsequent question might be: should students be prepared to expect consistent feedback provision – which the scholarship seems to encourage – or a diversity of feedback styles?

(But that might deviate too much from the original board topic.)

    • #25803


      Thanks for your ideas Rose. I just thought I would add my experiences of electronic assessment as a teacher and as a learner.

      As a teacher

I mark quite lengthy assignments which are submitted in Word. I use the reviewing feature to add comments, ask questions, request clarification and suggest improvements. I thought the IoE feedback framework Lisa Grey mentioned in the webinar was an interesting way of thinking about my own practice – it might be a useful tool to keep to hand when I am assessing, to ensure I provide the most useful feedback. When I use formative feedback in this way I can really see learners using my feedback, and it improves the work they submit for summative assessment.

      I thought I would find it unpleasant to read and mark online so much, but I got quite used to it fairly quickly. I even found a (not foolproof) way of adding my comments through voice recognition rather than keyboard when I hurt my arm.

      But are we really calling writing an essay in Word and marking it in Word electronic assessment or online assessment?

      We use City & Guilds evolve online testing for some tests at work. That is definitely online assessment.

      As a student

      Lisa Grey made an important point about anxiety related to assignment submission and I felt very strongly that online submission is helpful with this. In the olden days at university you had to drop your assignment in the tutor’s pigeonhole, and they posted it back in your pigeonhole. I was always concerned that assignments would get lost, or sabotaged! At the Open University you had to post your end of module assignment to Milton Keynes, and you were specifically instructed not to send it registered post as there was no one there to sign for receipt – given the Royal Mail’s reputation this was terrifying! I heard of other students driving to Milton Keynes to drop off their end of module assignments rather than risk the post! Now you submit these electronically, and as Lisa highlighted, you get a receipt and can check the submission. It doesn’t remove the anxiety, but it alleviates it.

      But again are we calling this electronic assessment or online assessment when really it is just a postal service?

  • #22244


(Not sure why Tom’s post won’t appear in the thread.) An interesting point about pros and cons, though.

The tricky situation is when students’ time is balanced against staff time. They are equally valuable, i.e. it’s a no-win situation deciding for or against TEL in assessment marking/moderating.

    • #22246

      Tom Franklin

Wouldn’t it be nice if student and staff time were considered equally valuable! My proof that they are not is the lecture. The lecture is known to be the least efficient form of learning, but it is an extremely efficient form of teaching. A couple of hours of preparation and an hour of delivery, and 180 people have had an hour’s lecture at a cost to the lecturer of only one minute each. So even if each one only learns a little, it is still very cost effective.

  • #22317

    Rose Heaney

    Hi Santanu


    I agree that varied assessment formats are definitely the way to go.


    It also needs to be relevant to the learning in format & in kind. An essay is not a suitable form of assessment of a practical skill for example.


    Just in case I gave the impression we only use Turnitin/Grademark we don’t – but it tends to be the standard for essay type assessments of a certain length. (Longer dissertation type essays are usually marked on hard copy for example.)


    Feed forward or assessment for (not of) learning is another key aspect – we had a good discussion with Keith Smyth about this during week 2 (or was it week 1?).



  • #22322

    Ted O’Neill

    Quite a while ago in an English for Academic Purposes course at a Japanese university we used Criterion as an assessment tool. I had no choice in using it and was skeptical at first, but soon really embraced it.

    The software did a surprisingly good job at identifying topic sentences, supporting sentences or their lack. Mechanical errors and grammar problems were identified clearly too. We had a site license which allowed students to submit and resubmit as many times as they liked. So, they could get that kind of feedback at the exact time they were ready for it and as often as they were ready for it.

    Criterion scored essays from 1-6 and I typically required students get a score of at least 4 before conferencing with them. Later as they progressed, they had to score a 6. It allowed me to focus on the higher order issues and let the students work out the lower order stuff with the software. It was great for everyone. Instead of getting a lot of correction from me, it came from the tool. Also, it separated these two different issues. Finally, it gave students time to revise repeatedly before getting a final assessment for a grade.

    I hope to be using it again in my next teaching post.

  • #22856

    Linda Creanor


Thanks for sharing these experiences – it sounds as if you have made really valuable use of online MCQs for both pre- and post-testing. You’re right, MCQs can provide instant feedback on levels of knowledge, which can be very helpful; however, the feedback can often be quite minimal, and I guess it’s the quality and detail of feedback as much as the design of the questions that’s important here.

    Yes, publishers’ text book companion tests and web resources can be very useful tools – even more so if they allow some adaptation!



  • #23082

    Moira Sarsfield

    I recommend looking at GRE subject tests (http://www.ets.org/gre/subject/about) for examples of MCQs at higher levels in Bloom’s taxonomy.


  • #23317


    Thank you, Tom, for your reply.

    I found your point about MCQs in numerate disciplines interesting.  I entered HE (as an SL in Corp Strat) transitioning via a secondary PGCE (Bus Ed) for which I received APEL credits in part on the basis of some p/t FE lecturing I’d done for a couple of years preparing students for the exams of a finance professional body (ICM).   Some of the material covered in the ICM syllabus involved ‘numbers’ and NOT being a beancounter I scrabbled around – as a practitioner – for appropriate ‘teaching aids’.  One of the things I found was a very good publication by the ILO on Balance Sheets which included a helpful manual, multiple choice self-test which I – and my students (many of whom were generally already familiar with the material ‘professionally’) – thought excellent.  I’m wondering if this hasn’t been ‘upgraded’ to an online/automated version by now – if the ILO hasn’t done this doubtless someone else has!  I expect there’s something – probably a lot! – on BizEd..

    Musing about the above has prompted me to recall doing a so called numeracy MCQ test as part of my ITT (Initial Teacher Training).  The brouhaha surrounding the test had me very apprehensive (I was a ‘returner’ in my ’40s) although when I actually sat the test I was left with a sense of anti-climax – what was all the fuss about?  I found it remarkably straightforward.  But I mention the test as I seem to remember being told it had been structured in such a way that the questions got ‘harder’ on the basis of the individual test-taker’s performance during it, i.e. the question ‘path’ altered depending on the accuracy of the selections of the person taking it.  Such a feature I think could be very useful for ‘stretching’ more able students on an ‘individualised’ learning basis – though the programming would for me be problematic.  Again, I expect this is a common feature of MCQ testing now and, again, I would agree – it underlines the complexity of setting a good MCQ.  Of course, ‘understanding’ an MCQ score – both as student and teacher – in this context could become rather complex.
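    The adaptive question path described above – where the difficulty of the next question depends on how the test-taker is doing – can be sketched very simply. This is a hypothetical illustration, not the algorithm used by any real testing platform; the question bank and the one-step-up/one-step-down rule are invented for the example.

    ```python
    # Hypothetical sketch of an adaptive MCQ path: difficulty rises after a
    # correct answer and falls after an incorrect one. The bank and the step
    # rule are invented for illustration only.

    QUESTION_BANK = {
        1: "What is 2 + 2?",
        2: "What is 15% of 200?",
        3: "Solve 3x + 5 = 20 for x.",
    }

    def adaptive_path(answers_correct, start_level=2, min_level=1, max_level=3):
        """Return the difficulty levels visited, given whether each successive
        answer was correct. Difficulty steps up after a correct answer and down
        after an incorrect one, staying within the bank's range."""
        level = start_level
        path = []
        for correct in answers_correct:
            path.append(level)
            level = min(max_level, level + 1) if correct else max(min_level, level - 1)
        return path

    # A learner who answers correct, correct, wrong sees levels 2, 3, 3:
    levels = adaptive_path([True, True, False])
    print(levels)                              # [2, 3, 3]
    print([QUESTION_BANK[l] for l in levels])  # the questions actually served
    ```

    Real adaptive tests use much more sophisticated item-selection models, but even this toy version shows why interpreting the resulting score is complex: two test-takers with the same number of correct answers may have faced questions of very different difficulty.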

    Having entered HE I remember ‘teaching’ an introductory economics elective to HE students (weirdly, they were doing an IT degree) the assessment paralleling one of the foundation modules for another finance body (CIMA), the thinking behind some questions for which was of the sort you describe.  I recall the subtleties of some of the answers required quite a bit of verbal elaboration ‘for the penny to drop’.  This fits neatly with your point about ‘distractors’ and, for me, reinforces the importance of the role of the teacher not just in the drafting of the assessment but in the ‘debrief’.

    I sometimes find that certain students are not challenged by the ‘standard’ MCQ selection set.  Over the last few years I’ve taught students of widely varying ability in the same class, in the same institution; increasingly being surprised, even at so called elite institutions, at the extent of the range present in the class.  I mentioned political aspects of (e)assessment previously – I’m thinking the above feature worth pursuing as unlike secondary environments I’ve taught at, there usually isn’t the option in HE cohorts for ‘learning sets’.

One aspect of MCQs we haven’t stressed enough, I think, is MCQ language. I alluded above to the importance of grasping vocabulary, keywords, etc., but overall comprehension can still be lacking (with a consequent negative impact on performance). Irrespective of supposed prerequisite IELTS attainment, whatever, I’ve often found low scores simply reflect English language ability rather than an inadequate grasp of the subject per se. I’m sure I’m not alone in my discomfort about how to deal with this – my emotions run the gamut of resignation through peevishness… to despair at the admission policies and funding pressures that really ‘account’ for the situation.


  • #23320


Thank you, Moira, for your reply directing me to the GRE subject tests (http://www.ets.org/gre/subject/about) for examples of MCQs at higher levels in Bloom’s taxonomy – it was kind of you to go to the trouble.

Clearly the GRE tests have wide acceptance (in the US and beyond). I found the range of subjects covered interesting – my initial thought was that the tests reflect an American model of HE. Nothing intrinsically wrong with that, of course, but elsewhere on this site I’ve mentioned Threshold Concepts. I’m thinking of an interesting discussion apropos teaching plans (for Economics) in this context, challenging the hierarchical ‘ladder’ implicit in Bloom – see Land et al (2008) ‘TCs within the Disciplines’ and the NAIRTL Conference papers for more; I think Davies’ working paper ‘TCs: how can we recognise them?’ addresses this especially well. Put another way, I guess I’m saying I’m not sure MCQs actually deal with TCs (rather than Bloom) – though the two do, for me, link.


  • #25424

    Mark Schier

I’m still enjoying the challenges of the ocTEL program, and posting something on e-assessment and e-feedback.

    The context I am using here is a lab class in blended mode.

In terms of speed of feedback and assisting students with their knowledge of the material, a good quiz works well – with a mixture of multiple choice, select the word, hotspots, complete the sentence, order the terms, match the words, etc. These are often frowned upon in the context of other interactive work, but they still have a place.

    It is effective as it can be done by the students when they are ready, and can help them consolidate the material from the lab class, particularly if they can retake the quiz.

    In my experience, an approach that provides self directed learning and peer support is one where students can trust each other and the moderator/facilitator/tutor. Technology can help by improving the availability of information, the ability to asynchronously or synchronously connect, and the feeling that they belong (harder to get).

For tutors this presents some challenges in terms of connecting, and having online learners build trust in a fairly short time if they have not met f2f before. The opportunities are only limited by the ways things can be altered, adapted and tried.

    • This reply was modified 6 years, 1 month ago by  Mark Schier.
    • #25805


      Thanks for your contribution Mark – quite a different context to what I am used to.

Reading through the JISC document, I found that they presented online assessment as though it were entirely unproblematic, but I think your comments show that remoteness is a problem. As a learner I have found this myself: if an online tutor has little virtual presence, then there is no relationship or trust, and feedback is hard to take on board.

  • #22230

    Santanu Vasant

As someone who never got online feedback as an undergraduate or postgraduate, but who has supported a number of courses over the years with online feedback via a VLE, ePortfolio solution etc., I can see the benefits – but time spent marking on a screen must be tough (it was a big reason, a few years back, for lecturers not using online feedback). Until we get better paper-like displays and better systems we won’t get everyone marking like this. I think the tablet is a game changer in many ways – in that you can view / comment on documents. I think, however, long-term use and long hours of use will show if this is the way forward, but for now we have entered the room of Jean-Luc Picard in Star Trek: The Next Generation! 🙂 Time for us to ‘Make it so’ (sorry to non-Trekkies – this must seem very odd behaviour!)

I think what is even more powerful is feed forward: setting a small assignment and offering constructive comments that feed forward into a similar, larger assessment – online, of course. Does anyone do anything like this?

Some great comments so far. I have found in previous years that practice in this area is some of the most diverse, ranging from lecturers using pen and paper to typed feedback to audio feedback! Question: do you think feedback should be standardised across a programme and, if so, how?!

  • #22524


    I often choose automated self-marking Multiple Choice Question test batteries (e-MCQs), commonly provided by authors/publishers on text companion websites, as an e-assessment resource.  I do this largely because they are convenient (for students and for me as teacher) in that they are ‘plug and play’ and can be taken/marked when it suits. With the pressure in HE to be seen to respond to NSS / PTES, e-MCQs provide tangible evidence of having enabled/provided student feedback – useful politically – and simultaneously an easy means of introducing dialogue about assessment distinguishing between appreciation (praise), coaching (tips for improvement), and/or just evaluation (a mark, especially relative to others) – useful pedagogically.

    I find e-MCQs particularly helpful in reinforcing keyword vocabulary.

I recommend PG or post-experience students take MCQs before starting a topic as well as afterwards. Taking the test beforehand flags areas that are new or in need of revision, and by repeating the test after class / at the end of a topic they will (hopefully!) see an improvement which affirms progress.

Most e-MCQ e-assessments I use cross-reference correct answers to text page(s), so e-MCQs can also relieve me of having to laboriously explain every ‘wrong answer’ individually. This encourages independent learning.
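    The self-marking, page-cross-referencing behaviour described above can be sketched in a few lines. This is an invented illustration – the questions, answer keys and page numbers are made up, and real companion-website software will differ – but it shows the basic mechanism of returning a score plus feedback that points wrong answers back to the text.

    ```python
    # Minimal sketch of a self-marking MCQ whose feedback cross-references
    # the textbook page for each wrong answer. All content here is invented
    # for illustration.

    QUIZ = [
        {"question": "Which statement shows assets, liabilities and equity?",
         "options": ["Income statement", "Balance sheet", "Cash flow statement"],
         "answer": 1, "see_page": 42},
        {"question": "Gross profit equals revenue minus ...?",
         "options": ["operating expenses", "cost of sales", "tax"],
         "answer": 1, "see_page": 57},
    ]

    def mark_quiz(responses):
        """Mark a list of chosen option indices against QUIZ; return the score
        and per-question feedback directing wrong answers to the relevant page."""
        score = 0
        feedback = []
        for item, chosen in zip(QUIZ, responses):
            if chosen == item["answer"]:
                score += 1
                feedback.append("Correct.")
            else:
                feedback.append(f"Incorrect - see page {item['see_page']}.")
        return score, feedback

    score, notes = mark_quiz([1, 0])
    print(score)  # 1
    print(notes)  # ['Correct.', 'Incorrect - see page 57.']
    ```

    The point of the design is that the remediation is delegated to the resource itself: the student who answers wrongly is sent to the page where the concept is explained, rather than waiting for the tutor.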

    I typically ‘start students off’ on an e-MCQ in-class with a quickie verbal Q&A session (displaying the e-MCQ on-screen as a backdrop), perhaps dividing the class into groups/pairs, calling for a show of hands with answers and then probing/walking through answer selection reasoning ‘out loud’.  I’ve found this approach very popular with more challenged FHEQ Level 4 students.  There is the opportunity to use clickers here with large groups, say, in a banked lecture hall as part, say, of a ‘one minute paper lecture close’ (Stead, 2005).

    Some of the problems with e-MCQs I find are that:

– There can be problems with access to text companion website e-MCQs (because of the need to obtain/purchase ‘electronic keys’).

– Answers (provided by text authors/publishers) can, surprisingly, sometimes be ‘wrong’ or confusing – it therefore pays to check/anticipate answers, ‘working back to a pre-learned state’ to imagine how students might interpret questions, which can be difficult.

    – Setting-up and managing summary reporting typically offered by e-MCQ software can be very time consuming. (N.B. Tutor production of e-MCQs can also be problematic – my attempts at persuading IT/LT staff to assist have been invariably met with responses of ‘no budget’, etc.)

– Generally speaking, e-MCQs assess at the bottom end of Bloom (so, notwithstanding the benefits above, this makes MCQ selection for summative purposes inappropriate in terms of constructive alignment with session/module learning outcomes);

– It’s typically only Biggs’s ‘Susans’ – intrinsically motivated, deep learners – who ‘bother’ to do e-MCQs (outside of class); e-MCQs can encourage a shallow, ‘guess’ approach (by Biggs’s ‘Roberts’) to learning/study/revision.


    The related electronic flashcards and crosswords commonly also found on companion websites are similarly well received – especially by English as a Foreign Language students.  Even with native English speakers at PG level I have often found students are ‘lazy’ or imprecise in their understanding and use of appropriate key words and concepts.

    There is usually an option for test-taker anonymity which many students like.

Where students buy second-hand copies of texts, these keys are invariably missing or have been invalidated through prior use. Unfortunately, library subscription to e-book text versions rarely includes companion website access.

This problem will be familiar to those taking a ‘threshold concepts’ approach to pedagogy. As Cousin (2006) notes (when explaining the concept of TC irreversibility): “One of the difficulties teachers have is that of retracing the journey back to their own days of ‘innocence’, when understandings of threshold concepts escaped them in the early stages of their own learning.”

    • This reply was modified 6 years, 2 months ago by  arkmba. Reason: pasted draft - formatting issues!
  • #23014

    Tom Franklin

You raise some interesting points, and I would agree that MCQs are often at the lower end of Bloom’s taxonomy. However, with care they don’t have to be. I have seen MCQs (mostly in numerate disciplines) that require you to work through the problem and come to an answer; memory is not sufficient – you need to understand the nature of the problem, apply the appropriate methods and then find a solution. Of course, if your solution differs from all the answers then you know you have gone wrong.

    Another critical issue with MCQs for me is the nature of the distractors (wrong answers). They need to be plausible, and if the MCQ is being used formatively then they should diagnose different types of misunderstanding to help the student. Thus, setting a good MCQ is actually very difficult.

  • #23347


    We are just going down the route of encouraging all faculty to use Grademark in Turnitin to provide feedback and grades for assignments. We have suggested that, as well as the bubble comments, a Mark Form is used (part of the Rubric function) to fill in feedback associated with particular criteria. So far it has been used to mark students’ self-reflective reports on their Group Projects. I am waiting to hear how they got on with it. I had no adverse reactions initially apart from a couple of staff who had Turnitin 423 errors!

However, the reports they were marking were relatively short, and I am wondering if there is a critical point at which marking online becomes untenable. If you have 150 students submitting 20,000-word theses, is it really fair on the academic to mark purely online? Is there a study or finding out there that can provide guidance as to best practice?
    Rose mentioned earlier that they mark larger documents like dissertations on paper, and I understand why! As a ‘green’ university we are trying to get away from the paper-based method, as there is much photocopying and distribution by internal post involved. So I can see both sides of the argument.

And there is also the problem which still hasn’t been resolved by Turnitin – second marking / moderation… so we can’t properly mark submissions like theses online yet anyway.
    I would be really interested to see how others have decided to cope with this.

    • This reply was modified 6 years, 2 months ago by  Louise.

The topic ‘Experiences or expectations of e-assessment and e-feedback’ is closed to new replies.