in collaboration with Nele Simons, Faculty of Design Sciences
Assessing complex competences as objectively and transparently as possible is no easy task. Competences are usually described as integrated sets of knowledge, skills and attitudes, which means that assessments should be integrated as well. Classic checklists or answer models are certainly valuable, especially when answers can be marked as ‘right’ or ‘wrong’. Integrated assessment, however, is more concerned with the solution process, with nuances and with the big picture. This is why more and more lecturers now use rubrics, which are well suited to assessing or providing feedback on complex tasks and assignments.
What’s a rubric?
A rubric is a feedback and assessment tool that usually takes the form of a pre-completed matrix. For each level along a quality continuum, it describes in narrative form what a performance or product should look like. As the example below shows, a rubric usually consists of:
- assessment criteria based on the envisaged competences (in the left column);
- gradations of quality or performance levels (in the top row: ‘insufficient’, ‘sufficient’, etc.);
- descriptions of behaviours or competences (in the other cells).
Layout of a rubric
| | insufficient | sufficient | good | excellent |
| --- | --- | --- | --- | --- |
| Category 1 | description of behaviour or competence for this level | description of behaviour or competence for this level | description of behaviour or competence for this level | description of behaviour or competence for this level |
| Category 2 | description of behaviour or competence for this level | description of behaviour or competence for this level | description of behaviour or competence for this level | description of behaviour or competence for this level |
| Category 3 | description of behaviour or competence for this level | description of behaviour or competence for this level | description of behaviour or competence for this level | description of behaviour or competence for this level |
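In software terms, a rubric is simply a matrix of criteria by performance levels. The sketch below is purely illustrative and assumes nothing about any particular tool; all names (`Criterion`, `LEVELS`, and so on) are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical, minimal representation of a rubric: a list of criteria,
# each holding one narrative description per performance level.
LEVELS = ["insufficient", "sufficient", "good", "excellent"]

@dataclass
class Criterion:
    name: str                     # e.g. "Category 1"
    descriptions: dict[str, str]  # performance level -> narrative description

rubric = [
    Criterion(
        name="Category 1",
        descriptions={level: f"description of behaviour for '{level}'" for level in LEVELS},
    ),
    # Category 2, Category 3, ... follow the same pattern
]
```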
Practical examples
Here are two specific examples: a rubric used in the ‘Scientific Reporting’ programme component (Training and Education Sciences, in Dutch) and a rubric used in the ‘Introduction to Design’ programme component (1st year Bachelor of Architecture, Faculty of Design Sciences, in Dutch).
What to use it for
Rubrics were originally meant for assessment that involves awarding marks (summative assessment). However, rubrics have since also proved valuable for providing feedback to students (formative assessment). Interim feedback based on a rubric tends to be more detailed and of higher quality than a mere mark. By using rubrics at successive points in time, lecturers can also track and visualise students’ learning progress.
When to use it
Rubrics help lecturers and teaching teams from the outset by making their expectations more explicit and their teaching more focused. Rubrics can also be used during class or in assignments. Having students list the requirements for a paper or presentation, for instance, helps them become familiar with the characteristics and expectations of academic texts, or with ways to tailor a message to different target groups. Furthermore, rubrics can be used in peer feedback or peer assessment (with appropriate clarification and practice; see this ECHO Tip from 2017 on peer assessment). This allows students to familiarise themselves with what is expected and how they will be assessed. Finally, rubrics also lend themselves well to giving feedback after an exam (or, if necessary, to justifying a low score).
On Blackboard, you can also create digital rubrics and add feedback for students in the cells of each category or criterion (see the manual, including an example, in Dutch).
How to get started
Working with rubrics requires intensive preparation. This manual detailing how to use rubrics (from the Faculty of Design Sciences, in Dutch) takes you through each step of the development process.
If you or your predecessors have already worked with a similar assignment:
- you can use previous representative works of students with different performance levels as a starting point to fill in the rubric;
- you can start fine-tuning by reviewing the debatable cases that were more difficult to assess;
- when creating a rubric, you can also start by using any previously given written feedback, if available.
Whenever possible, carry out this cross-checking, including the comparison of debatable cases, together with fellow lecturers, and repeat it after the course to validate the rubric.
How detailed should the rubric be?
Two aspects can be at odds when working with rubrics: the need for a frame of reference to provide structure on the one hand, and the need for creative or academic freedom on the other. This results in a constant balancing act, as descriptions should be sufficiently detailed (to ensure consistency in different lecturers’ assessments), but not overly so.
- A rule of thumb to keep in mind is that the number of levels should be high enough to be effective, yet low enough to be reliable and workable.
- Keep the descriptions in the rubric concise: as each new level already encompasses the previous one, only new elements should be described.
- Rubrics are not always readily understood without clarification. Be sure to go over the different criteria and performance levels with the students: ‘Rubrics are not entirely self-explanatory. Students need help in understanding rubrics and their use.’ (Andrade, 2005, p. 29).
How to arrive at a mark
Although quality feedback is the essence of using rubrics, in many cases you still need to award a mark. The scoring strategy can take different forms: you can take an analytical approach or a holistic approach. In the analytical approach, a score is calculated ‘automatically’ on the basis of the marks and weightings assigned to the different criteria and performance levels (a sketch of such a calculation follows the list below).
- Different performance levels can be linked to certain scoring intervals (e.g. pass = 10, good = 13; or pass = 10–14, good = 14–18), after which you can differentiate further.
- If certain criteria are more important, they can be given greater weight.
- The more gradations of quality there are (see earlier: top row), the more nuanced the mark can be, but the more difficult it becomes to write level descriptions that remain clearly distinct from one another.
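As a purely illustrative sketch of the analytical approach (not a prescribed calculation), the snippet below links each performance level to a mark and weights the criteria; all level marks, weights and criterion names are example values:

```python
# Illustrative analytical scoring: each performance level maps to a mark,
# each criterion has a weight, and the total is the weighted average.
# The marks, weights and criteria below are example values only.
LEVEL_MARKS = {"insufficient": 6, "sufficient": 10, "good": 13, "excellent": 17}

# criterion -> (weight, performance level awarded by the assessor)
assessment = {
    "Content":   (3, "good"),
    "Structure": (2, "sufficient"),
    "Language":  (1, "excellent"),
}

def analytical_score(assessment: dict[str, tuple[float, str]]) -> float:
    """Weighted average of the marks linked to the awarded levels."""
    total_weight = sum(weight for weight, _ in assessment.values())
    weighted = sum(weight * LEVEL_MARKS[level] for weight, level in assessment.values())
    return weighted / total_weight

print(round(analytical_score(assessment), 1))  # 12.7 on a 20-point scale
```

Linking levels to intervals rather than single marks (e.g. good = 14–18) works the same way, with the assessor then choosing a point within the interval.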
A pitfall with rubrics is that they can result in overly analytical instruments, leading to fragmentation, which goes against the very principles of integrated assessment. This may result in a mismatch between the marks awarded and the way the lecturer had initially envisaged the assignment. Consider other, more holistic ways to arrive at a mark:
- The examiner awards the final mark after having seen the automatically calculated score. This leaves some room to assess aspects that the rubric does not cover well. When deviating from the proposed score, the examiner adds a written explanation (see the sketch after this list).
- Finally, it is also possible not to calculate a total score at all. The examiner then awards a holistic final mark as usual, but after having circled the partial assessments in the rubric (see example, in Dutch). In this scenario, the rubric serves as an aid rather than a prescription.
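Continuing the earlier sketch, the hypothetical helper below models the first holistic variant: the examiner sees the calculated score, may deviate from it, and a deviation without a written explanation is rejected. This illustrates the workflow under those assumptions; it is not an existing tool:

```python
def final_mark(calculated: float, examiner_mark: float, explanation: str = "") -> float:
    """Return the examiner's final mark; a deviation requires a written explanation."""
    if examiner_mark != calculated and not explanation.strip():
        raise ValueError("Deviating from the calculated score requires a written explanation.")
    return examiner_mark

# The examiner raises the calculated 12.7 to 14 and justifies the deviation.
final_mark(12.7, 14.0, explanation="Strong integration of sources, not captured by the rubric.")
```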
Want to know more?
During an ECHO lunch meeting, the team of lecturers of the Training and Education Sciences study programme shared their experiences with the use of rubrics (in Dutch).
Van Petegem, P. (Ed.) (2009). Praktijkboek activerend hoger onderwijs. Leuven: LannooCampus.
Handleiding ‘Werken met rubrieken’, Faculty of Design Sciences.
Andrade, H. G. (2005). Teaching With Rubrics: The Good, the Bad, and the Ugly. College Teaching, 53(1), 27–31.
Huisman, W. (2015). Richten en toetsen met rubrics. Radboud Universiteit.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.
Van den Berg, B. A. M., Van de Rijt, B. A. M., & Prinzie, P. (2014). Beoordelen van academische schrijfvaardigheden met digitale rubrics. Onderzoek van Onderwijs, 43, 6–14.
Van Petegem, P., & Vanhoof, J. (2002). Evaluatie op de testbank. Een handboek voor het ontwikkelen van alternatieve evaluatievormen. Mechelen: Wolters Plantyn.