For my third poll in a large Facebook group of 12,600 language teachers in this mini-series on inequity and grading, I asked about averaging. A FRACTION of teachers responded this time: just 80 in total, compared to 585 participants for the late work penalties poll and 625 for homework. I wonder if that's because averaging is something teachers let the gradebook handle without giving it much thought. Most teachers don't question homework, but they still play an active role in creating and assigning it, right? Even setting late policies is something teachers…do. Averaging, though? Looks like we might be in a "set it and forget it" situation. The thing is, the gradebook only does what we tell it to (or whatever its default setting is), so if we're not thinking about that, well…
Poll results had the majority (60 of the 80) doing some kind of averaging. Let's unpack all that.
Averaging
In almost all teaching contexts, something gets averaged into the grade, whether that's boiling everything that happens during a grading term down to a single letter/number, or averaging the grading terms themselves to get a single letter/number for the course. In rare cases, the course grade can be represented without any averaging, such as with a manual override for the entire year, but the biggest problem remains: summarizing an entire year with a single letter/number. That's not gonna get solved.
In general, though, the solvable problem with averaging is one of the easier inequities to see. In short, a student's low scores from September shouldn't drag down their grade once they're earning high scores in November. Current grading practices, however, almost always disadvantage that student, averaging all those lows (which could be quite low for those still using zeros) with recent scores that might be at the same high level as classmates who never struggled. That's inequity.
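To make that inequity concrete, here's a minimal sketch in Python (with hypothetical scores, not data from the poll) comparing a straight average against what the recent evidence says:

```python
# Hypothetical scores for a student who struggled in September
# but reached the same level as classmates by November.
september = [40, 50]   # early, low scores
november = [90, 95]    # recent, high scores

all_scores = september + november

# A straight average drags the grade down with the old lows.
print(sum(all_scores) / len(all_scores))   # 68.75

# The student's current level, going by recent evidence alone.
print(sum(november) / len(november))       # 92.5
```

Same student, same current ability, and the straight average knocks them down more than 20 points.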
Rubric
My favorite is creating a rubric that allows for trends (i.e., mode), or a more holistic look at assignments throughout the grading term. For example, let's say there were a bunch of Exit Tickets in the gradebook with the following scores: 85, 85, 75, 85, 85, 75, 85, 55. A pure average is a 79. With a trend-based rubric, though, the scores clearly show mostly 85s, a couple of 75s, and a rough day as the most-recent score. The trend would be an 85, which matches what you know from in-class interactions with this student. We're good.
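If you want to see that difference computed, here's a minimal sketch using Python's built-in statistics module (the scores are the ones from the example above):

```python
from statistics import mean, mode

# The Exit Ticket scores, oldest to most recent.
scores = [85, 85, 75, 85, 85, 75, 85, 55]

print(mean(scores))  # 78.75 -> the "pure average" of 79
print(mode(scores))  # 85    -> the trend: the score that shows up most often
```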
Most-Recent
If, however, you took an average of the most-recent three scores (which includes the 55 fluke), that'd come out to a 72. And for those still recording zeros on a 100-point scale (where 0-59 takes up the bottom 60 levels), a zero in place of that last score would bring the average down to a 53. The teacher using a most-recent-scores system *must* be on the lookout for very low scores that don't seem to reflect what a student knows, understands, and can do. To do that, you've gotta know the student and look at the trend. For this reason, I recommend the rubric-and-trend option alone, since trend comes into play anyway when averaging most-recent scores.
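Extending the same sketch, here's the most-recent-three math, both with the fluke 55 and with a zero recorded in its place:

```python
from statistics import mean

scores = [85, 85, 75, 85, 85, 75, 85, 55]

# Average of the most-recent three scores, fluke included.
print(mean(scores[-3:]))           # [75, 85, 55] -> 71.67, the 72 above

# The same three slots if the teacher had recorded a zero instead.
print(mean(scores[-3:-1] + [0]))   # [75, 85, 0] -> 53.33, the 53 above
```

One rough day (or one zero) swings the grade by a letter or two, which is exactly why the trend check matters.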
Dropping Scores
This practice attempts to correct rogue assignment scores. In reality, though, it tends to give us an inaccurate reflection of what a student knows, understands, or can do. What if the lowest scores *are* that reflection?! If so, we as educators just set up our gradebook to drop the most accurate picture of performance. Oops, right? The gradebook only does what we tell it to do, and it can't distinguish a fluke from real evidence. The most equitable way to drop scores is on a case-by-case basis. This requires teachers to know each student and evaluate whether the evidence they're getting is accurate, just like grading most-recent work in standards-based grading (SBG). If it looks like something's up, don't count it. That's using content-area expertise, whereas the "set it and forget it" approach of dropping a certain number of lowest scores every grading term can be inequitable.
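As a quick illustration of why automatic dropping can backfire, here's a minimal sketch (hypothetical scores) where the dropped score was the accurate one:

```python
from statistics import mean

# Hypothetical: strong scores on early, heavily scaffolded tasks,
# then a low score on the one task requiring independent use.
scores = [95, 95, 95, 60]

# "Set it and forget it": the gradebook drops the lowest score,
# whatever it happens to represent.
print(mean(sorted(scores)[1:]))   # [95, 95, 95] -> 95.0, struggle erased

# Case-by-case: the teacher judges the 60 to be accurate evidence
# (it matches what they see in class) and keeps it.
print(mean(scores))               # 86.25
```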