---
title: Continuous rubric
slug: continuous-rubric
---

## Intent

Improve upon matrix-style rubrics with a rubric that depicts standards as "zones" along a continuum and captures only the key differences between standards, making assessments clearer to students and assessment judgments easier for instructors.

## Problem

Rubrics are commonly used to assess student work. They typically appear as matrices of standards descriptions, e.g. what "proficient" vs. "not yet" work looks like for a learning objective. It is often difficult to describe a standard succinctly and precisely enough to be unambiguous. Even the best-described standards leave room for judgment calls: for instance, how do you evaluate a student who has achieved most, but not all, of the descriptors for a particular standard? When an assignment assesses multiple concepts, some concepts may require fewer "levels" or descriptors than others, pressuring instructors to fill the remaining boxes with less meaningful or less applicable descriptors. These factors reduce the utility of rubrics as the time-saving, streamlined grading tools they were designed to be.

## Solution

The _continua model of a guide to making judgments_, or GTMJ for short, improves upon traditional rubrics in several key ways:

1. **Standards continuum**. Achievement for a learning outcome or concept is conveyed as a continuous arrow, from the description of the lowest achievement level up to (and beyond) the highest-defined achievement level, rather than in discrete boxes.
2. **Variable standards descriptors**. The rubric contains only as many descriptors as are necessary to indicate the progression from the lowest standard to the highest.
3. **Nested standards descriptors**. Each descriptor higher on the continuum implicitly contains all of the previous descriptors, so instructors need only specify the key differences between descriptors.
4. **Standards zones**. Standards are depicted as ranges along the continuum for each learning outcome or concept being assessed, rather than as discrete sets of descriptors. As a result, students do not have to meet every descriptor for a standard in order to meet that standard.
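The structure described above can be sketched as a small data model: one continuum per learning outcome, descriptor points marking key differences, and standards as zones covering ranges of the continuum. This is a minimal illustrative sketch assuming a 0–100 scale; the outcome name, descriptors, positions, and zone boundaries are invented for illustration and do not come from the source paper.

```python
# A minimal sketch of a GTMJ-style continuum, assuming a 0-100 scale.
# All outcome names, descriptors, positions, and zone boundaries are
# illustrative assumptions, not taken from the source paper.

from dataclasses import dataclass, field


@dataclass
class Continuum:
    """One learning outcome assessed along a 0-100 continuum."""
    outcome: str
    # Descriptor points mark the *key differences* along the continuum;
    # each higher descriptor implicitly includes all lower ones (nesting).
    descriptors: list[tuple[float, str]] = field(default_factory=list)
    # Standards are zones: (lower bound, standard label), in ascending order.
    zones: list[tuple[float, str]] = field(default_factory=list)

    def standard_for(self, position: float) -> str:
        """Return the standard whose zone contains the given position."""
        label = self.zones[0][1]
        for lower_bound, zone_label in self.zones:
            if position >= lower_bound:
                label = zone_label
        return label


loops = Continuum(
    outcome="Writes correct loops",
    descriptors=[
        (10, "Loop syntax is correct"),
        (45, "Loop terminates on all inputs"),
        (80, "Chooses the clearest loop construct for the task"),
    ],
    zones=[(0, "Not yet"), (40, "Developing"), (75, "Proficient")],
)

# Work landing at position 60 falls in the "Developing" zone, even though
# it does not meet every descriptor below the "Proficient" boundary.
print(loops.standard_for(60))  # Developing
```

Because standards are zones rather than checklists, the lookup only asks which range the work falls in; nothing requires every descriptor below a boundary to be satisfied.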

**NOTE FROM THE AUTHOR OF THIS PLAY: I really think this could use an illustrated example. There is one in the paper, but I could also make up my own.**

## Applicability

**Caveat: the source does not describe the use of the GTMJ in a computer science or STEM classroom, so I am extrapolating from what was presented in the paper.**

The GTMJ is best suited to a _specifications grading_ context, where assignments typically touch on multiple learning outcomes or concepts, or to a _standards-based grading_ context. It provides a more visual depiction of student achievement on each objective: students can clearly see how far up the continuum their work lands, and how much further they need to go to reach a particular standard. It may also work well in medium-sized classes, because it streamlines some of the judgment calls that instructors typically make with matrix-style rubrics.

## How to Implement

Rather than trying to capture every element of a particular standards level, implementors should articulate the _key distinguishing differences_ in behavior as a student's work approaches the higher points of the continuum. These differences become the descriptor points on the GTMJ, and often indicate where the standard zone boundaries should lie.
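One way to act on that advice is to let each key-difference descriptor open the next standard's zone. This is an illustrative sketch under assumed values, not a procedure prescribed by the source: the positions, descriptor texts, and standard names are all made up.

```python
# Illustrative sketch: deriving standard zone boundaries from the
# key-difference descriptor points. All positions, descriptor texts,
# and standard names are assumptions, not from the source paper.

descriptors = [
    (10, "Loop syntax is correct"),
    (45, "Loop terminates on all inputs"),
    (80, "Chooses the clearest loop construct for the task"),
]
standards = ["Not yet", "Developing", "Proficient", "Exemplary"]

# The lowest zone starts at 0; each descriptor point opens the next zone.
boundaries = [0.0] + [position for position, _ in descriptors]
zones = list(zip(boundaries, standards))
print(zones)
# [(0.0, 'Not yet'), (10, 'Developing'), (45, 'Proficient'), (80, 'Exemplary')]
```

Note that writing the descriptors first and reading the boundaries off them keeps the rubric focused on differences between standards rather than exhaustive descriptions of each one.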

## See Also

_List any other related plays here as a bullet list of chapter links.
Then remove this text._

## Source

Source: Peter Grainger & Katie Weir (2016). An alternative grading tool for enhancing assessment practice and quality assurance in higher education. _Innovations in Education and Teaching International_, 53(1), 73-83. DOI: 10.1080/14703297.2015.1022200

Described by: Amy Csizmar Dalal, [email protected]

## References

_Insert references to publications or web pages describing, evaluating, or
sharing experiences with this technique. Then remove this text._
## Community Discussion | ||
|
||
Community members are free to comment on, ask questions about, share | ||
experiences, or otherwise contribute to knowledge about this play by | ||
posting comments below. | ||
See {% include chapter-link.html slug="join-discussions" %} for details. | ||
|
||
* Insert a comment here. |