One of the ways we found to improve student assessment choice, increase transparency in the assessment process and encourage students to engage with their feedback was to use rubrics in the assessment process. I know rubrics are contested by some, and are not right for every discipline or every context, but we had some really positive results by using them. It isn’t easy to create a rubric, especially a scored one that works out the grade, but most academic members of staff have a marking scheme of some sort, or at least the tacit knowledge to know what a piece of work is ‘worth’. It takes a while to get a rubric right and working the way it should (a small sketch of how a scored rubric can total up a grade appears after the list of steps below). I worked with another member of staff at the University on a project that tried rubrics out within assessment. The steps were:
- The students completed the assignment; they could hand it in in any format they wanted, as long as it could be submitted directly to the University VLE or linked to. So they could write an essay, give a PowerPoint presentation, make a short video or a podcast, or use whatever format they chose, as long as it met the learning criteria and could be accessed.
- Whilst the member of staff was marking the assignment using the rubric, we asked the students to assess themselves against the rubric and submit their rubric to us.
- When the work was handed back, we asked the students to compare the marked rubric against their own.
- If the students felt that they had been unfairly marked on any of the criteria, they could appeal. They had to write no more than 500 words referring to the rubric and the piece of work they had originally submitted.
- We compared the results of the students’ self-assessment with those of the tutor.
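To make the idea of a scored rubric concrete, here is a minimal sketch of how one can work out an overall grade from weighted criteria. The criterion names, weights and the 2.2/3rd boundaries are illustrative assumptions rather than the rubric we actually used; only the 2.1 (60–70%) and 1st (over 70%) boundaries correspond to the bands mentioned later in this post.

```python
# A minimal sketch of a scored rubric: each criterion has a weight and a
# 0-100 score, and the overall grade is the weighted average mapped to a
# degree band. Criteria, weights and the 2.2/3rd cut-offs are assumptions
# for illustration, not the rubric we actually used.

RUBRIC_WEIGHTS = {
    "argument": 0.30,
    "use_of_secondary_resources": 0.25,
    "structure_and_presentation": 0.25,
    "referencing": 0.20,
}

def overall_mark(scores: dict) -> float:
    """Weighted average of per-criterion scores (each out of 100)."""
    return sum(RUBRIC_WEIGHTS[c] * scores[c] for c in RUBRIC_WEIGHTS)

def degree_band(mark: float) -> str:
    """Map a percentage to a UK degree classification band."""
    if mark >= 70:
        return "1st"
    if mark >= 60:
        return "2.1"
    if mark >= 50:
        return "2.2"
    return "3rd"

if __name__ == "__main__":
    # A student who is strong on argument but weak on secondary resources
    # can still average out at a 2.1 overall.
    scores = {
        "argument": 74,
        "use_of_secondary_resources": 48,
        "structure_and_presentation": 66,
        "referencing": 62,
    }
    mark = overall_mark(scores)
    print(f"Overall mark: {mark:.1f}% ({degree_band(mark)})")  # 63.1% (2.1)
```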
We found the following:
- Using the rubric created complete transparency about where the marks came from. The students really liked this, rather than being left unsure how a particular mark had been arrived at.
- Using the rubric allowed students complete choice of format, which played to their strengths rather than just favouring those students who were good at writing or exams.
- Breaking the mark down into the different criteria was also useful, as it gave the students some really clear information on how they could improve their piece of work or future assignments.
- Giving the students the chance to appeal gave them agency over the process. Only two students took the tutor up on this option: one convinced the tutor they should have been graded slightly higher, and the other admitted they were trying it on and accepted the given grade.
- The self-assessment task gave the tutor some really interesting data:
- Firstly, it gave the tutor some really rich diagnostic data: because the information was broken down into individual criteria, the tutor could see if any of the criteria had been misunderstood. In this case the students rated themselves much higher than the tutor on one of the criteria, which was to do with secondary resources. They had obviously misunderstood what this involved, so intervention could take place, and some training/explanation could immediately be put in place to correct it (the sketch after these points illustrates this kind of per-criterion comparison).
- Most students saw themselves as in the 2.1 category (60–70%). I’m not sure if this was wishful thinking by some, but the students at 2.2 or 3rd level rated themselves higher than the tutor did, and maybe through modesty the 1st students (over 70%) also rated themselves as 2.1s.
- The students who rated themselves as 2.1s and were graded by the tutor as 2.1s were not scoring consistent 2.1s across all criteria. This was interesting, as I think those students who expected a 2.1 and then got it would probably not have engaged with their feedback much had it not been for the rubric. The rubric showed them that they were perhaps scoring a 1st on some criteria and lower, even down to a 3rd, on others, while gaining an overall average of a 2.1. Having that in front of them showed them clearly what they needed to do to improve their work and gave them the aspiration of getting a 1st (perhaps they had never thought this possible).
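Here is a minimal sketch of the kind of per-criterion comparison between self-assessment and tutor marking described above: it averages the gap between self-rating and tutor score for each criterion across a cohort and flags criteria where the class rates itself much higher than the tutor. The criteria, scores and the ten-point flagging threshold are made-up illustrations, not our actual cohort data.

```python
# Compare self-assessed and tutor scores criterion by criterion and flag
# criteria the cohort may have misunderstood. All data below is invented
# for illustration only.

from statistics import mean

CRITERIA = ["argument", "use_of_secondary_resources",
            "structure_and_presentation", "referencing"]

# Each record pairs one student's self-assessed scores with the tutor's scores.
cohort = [
    {"self":  {"argument": 68, "use_of_secondary_resources": 70,
               "structure_and_presentation": 65, "referencing": 62},
     "tutor": {"argument": 72, "use_of_secondary_resources": 48,
               "structure_and_presentation": 66, "referencing": 60}},
    {"self":  {"argument": 64, "use_of_secondary_resources": 66,
               "structure_and_presentation": 60, "referencing": 58},
     "tutor": {"argument": 62, "use_of_secondary_resources": 45,
               "structure_and_presentation": 63, "referencing": 61}},
]

def mean_gap(criterion: str) -> float:
    """Average (self minus tutor) score for one criterion across the cohort."""
    return mean(s["self"][criterion] - s["tutor"][criterion] for s in cohort)

for criterion in CRITERIA:
    gap = mean_gap(criterion)
    flag = "  <- possible misunderstanding" if gap >= 10 else ""
    print(f"{criterion:30s} mean self - tutor gap: {gap:+5.1f}{flag}")
```

Run as written, only the secondary-resources criterion is flagged, which mirrors the pattern the tutor actually spotted in the real data.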
This activity was improved in subsequent years by having the students engage with the criteria before submitting their work, and by involving peer assessment before giving them a chance to improve their own piece of work. The tutor was also able to compare years and cohorts to see how the students had improved. It was very interesting to be involved with this, and the students really liked the rubrics being used, as it was crystal clear how the marks had been arrived at and exactly what they needed to do to improve their grade.
Some further references about using rubrics (including our conference presentations about the above project):
Campbell, A. (2005). Application of ICT and rubrics to the assessment process where professional judgement is involved: the features of an e-marking tool. Assessment & Evaluation in Higher Education, 30(5), 529-537.
Ellis, C., & Folley, S. (2009). Improving student assessment choice using Blackboard’s e-assessment tools. Paper presented at BbWorld Europe 2009, April 6–8, in Barcelona, Spain.
Ellis, C., & Folley, S. (2009). The use of scoring rubrics to assist in the management of increased student assessment.
Hafner, J., & Hafner, P. (2003). Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer-group rating. International Journal of Science Education, 25(12), 1509-1528.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.
Meier, S., Rich, B., & Cady, J. (2006). Teachers’ use of rubrics to score non-traditional tasks: factors related to discrepancies in scoring. Assessment in Education: Principles, Policy & Practice, 13(1), 69-95.