A high-quality rubric is essential for all assessments, helping students understand expectations and teachers evaluate performance. In practice, however, rubrics are not always used well.
Research into the use of rubrics reveals conflicting findings and confusion around student understanding, while teachers do not always have access to the learning design structures needed to support best practice.
In “Exploring the way students use rubrics in the context of criterion referenced assessment”, Colvin et al. (2016) report that a survey of Charles Sturt University students reveals “a chasm between reading rubrics and understanding them”.
The study found that while 71% of respondents said they always read the rubrics, only 43% said they usually understood what was required from reading them.
As the authors note: “Perhaps too much is expected of students and their comprehension of not just criteria and standards, but what a rubric is and how to use it effectively.”
On the challenges of developing useful rubrics, Flinders University Emeritus Professor Janice Orrell (2020) highlights that levels of performance are rarely grounded in evidence-based learning frameworks but rather in subjective terminology like “excellent”, making consistency difficult.
At a basic level, a rubric for assessment is usually presented as a matrix or grid, and serves as a tool to interpret and grade student work. It should make students aware of assessment criteria and standards, as well as the expectations related to that specific assessment task.
To design a rubric, teachers can use templates (try UNSW Teaching and Berkeley Learning & Teaching) or design their own. No matter the starting point, designing rubrics requires subject matter expertise, alignment with topic and course learning outcomes, and the use of taxonomies to organise learning behaviours into hierarchies.
Despite the complexities of understanding and creating rubrics, both students and teachers benefit from them, as rubrics provide:
Clarity and a framework for students completing assessments
Reduced variation in grades between multiple assessors
A means for self-evaluation and feedback, helping students develop their self-regulation capabilities
To help students understand expectations and standards, including a rubric is a must for every assessment. Choosing the right one for your purpose is the first step, and there are several types of rubrics, including:
Standard rubrics use numeric scoring against weighted dimensions; the maximum value for the standard rubric will be the same as the highest score across all dimensions.
Alternatively, you can create a rubric that has no numeric scoring.
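As a rough sketch of how numeric scoring against weighted dimensions can work: each dimension carries a weight, and the awarded level in each dimension contributes its weighted score to the total. The dimension names, weights, and the weighted-sum aggregation below are illustrative assumptions, not a description of any particular platform's actual scoring rule.

```python
# Hypothetical weighted rubric: names, weights, and scores are invented
# for illustration only.
dimensions = {
    "Argument":  {"weight": 0.5, "score": 4, "max": 5},
    "Evidence":  {"weight": 0.3, "score": 3, "max": 5},
    "Structure": {"weight": 0.2, "score": 5, "max": 5},
}

def weighted_total(dims):
    """Sum each dimension's awarded score scaled by its weight."""
    return sum(d["weight"] * d["score"] for d in dims.values())

def weighted_max(dims):
    """Maximum achievable total under the same weights."""
    return sum(d["weight"] * d["max"] for d in dims.values())

print(f"{weighted_total(dimensions):.1f} / {weighted_max(dimensions):.1f}")
# prints "3.9 / 5.0"
```

Under this (assumed) scheme, heavily weighted dimensions such as "Argument" dominate the final grade, which is precisely why the choice of weights deserves as much scrutiny as the descriptors themselves.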
In addition, an assessment rubric can be analytic or holistic, as explained by UNSW.
Analytic rubrics have several dimensions, with performance indicators for levels of achievement in each dimension.
Holistic rubrics assess the whole task according to one scale and are appropriate for less structured tasks, such as open-ended problems and creative products.
In choosing a rubric you will need to consider the benefits and drawbacks of each type.
More complex assessments that you have run before are likely to benefit from analytic rubrics, where students receive specific feedback and guidance on each component of the task.
If you’re releasing an assignment for the first time, or it is less structured (for example, an open-ended task), a holistic rubric may serve your purpose, as it allows for more impressionistic grading when submissions may vary significantly.
Creating an effective rubric can be challenging. Whether you are modifying or creating one from scratch, it is important to ensure key considerations have been met:
Ensure the assessment task and rubric address the relevant learning outcomes and taxonomy
Outline key components (task, criteria, scale and descriptors)
Define standards and ensure the results are achievable
Check for errors like the repetition of criteria
Get feedback from peers on the rubric for clarity and usefulness
Build understanding and engage with students on the rubric, allowing for open questions before they start the assessment
In addition, many academics are exploring how rubrics are best approached with a view to improving not just learning outcomes, but also how students engage with assessment.
In “Designing an assessment rubric”, Flinders University Emeritus Professor Janice Orrell outlines how expertise, or “pedagogical content knowledge”, is essential in creating useful rubrics, focusing on a transformational approach to learning.
Her development of template rubrics was “intended to assist departments and disciplines articulate what they believe is worth learning (what to learn) in their discipline and how the various levels or standards of attainment are recognised (how well was it learned)”.
To assist this, she used a number of frameworks that inform critical thinking to encourage the creation of “useful” rubrics that do not just ensure course content is learned, but that students develop independence, creativity and critical thinking and reasoning.
Likewise, looking beyond the role of rubrics in assessment transparency, Margaret Bearman and Rola Ajjawi (2019) argue for a similar end result, recommending an approach based on invitation rather than explanation.
“Invitational assessment criteria can provide valuable opportunities for students to make meaning, in particular with respect to holistic, dynamic and highly tacit concepts which are poorly captured in writing. Teachers can therefore consider how their assessment artefacts promote learning activities – such as thinking, studying, regulating, writing, devising, and interacting.”
While we will continue to look at best practice and new thinking, traditional rubrics remain a powerful teaching tool and one of the best ways to achieve consistent and efficient feedback and grading in assessment.