Averaging scores benefits only two kinds of students: those who show understanding consistently, and those who come into the classroom already understanding the content. If by chance the inequity of that is unclear, let me explain…
Let’s start with every other kind of student, like the one who comes into class with less understanding—for any reason outside of the teacher’s control—broadly described as being less-privileged. A less-privileged student with lower understanding will have lower scores than a more-privileged student who already has more understanding. This is a fact. As the year goes on, the student with lower understanding certainly has the potential to learn content and get higher scores. However, when all the scores are averaged, the less-privileged student will have a lower grade even if making large gains over time.
Now, consider the kind of student that averaging benefits: one who comes into the classroom already understanding content and who starts off with high scores, not low ones. As the year goes on, this already-successful student will have their high scores averaged, and end up with a higher grade than a less-privileged student even after making zero gains over time. This last point is a research interest of mine, and one that isn’t given enough attention when we talk about grading for equity. The common thinking with a standards-based approach is that it doesn’t matter how a student learns the content and meets the standard, only that a student learns the content and meets the standard. Yet such thinking doesn’t account for massive gains that still fall short because of conditions outside of school. Nor does it address the already-successful student who can meet the standard with no effort at all. Granted, grading effort/participation is generally a no-no, but what message is being sent if a student can meet standards without learning anything? If they’re privileged enough to come in with that knowledge and understanding, where does individual growth come in when you think of the lifelong learner that so many schools claim to produce?
Here’s the third post this week with thoughts on assessment in addition to Friday’s on self-grading & batch assessments, and Thursday’s on averaging & delayed assessments.
If teachers were to just stop grading grammar, Latin (and other languages) would instantly become more accessible to students, as well as afford more planning time for teachers.
This is no joke.
Some teachers are excited about grammar and want to share that with students. Go ahead! I’m not saying they shouldn’t, but I’ve observed many (all?) of the negative effects of doing so, especially in K-12 public education, and those effects mostly begin with grading. If you want to teach grammar, just don’t grade it. Here’s why…
My interest in assessment & grading began shortly after the first few months of teaching right out of grad school. I noticed that some students did well with the content from the first few textbook chapters, but others didn’t do well at all, and thus began the year with low self-efficacy that was hard to turn around. By November, I realized that students were comfortable with the vocabulary and grammar from the first few chapters of the textbook. Then it hit me: if I had just delayed those first assessments by a month or so, ALL STUDENTS would have aced them! What’s more, the students who actually improved had that lower 1st quarter grade (e.g. C) averaged with the new, higher grade (e.g. A), producing a skewed reflection of their ability (e.g. B). None of this made sense; I was playing gods & goddesses with my students’ GPA.
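As a quick sketch of that averaging arithmetic (using hypothetical point values of 75 for the C and 95 for the A):

```python
# Hypothetical point values for the letter grades in the example above.
q1_score = 75  # C: early in the year, before the student learned the content
q2_score = 95  # A: after real gains

# Averaging blends the old score into the grade forever.
averaged = (q1_score + q2_score) / 2
print(averaged)  # 85 — a B, even though the student now performs at an A

# A grade reflecting current ability would simply report where the student is now.
current_ability = q2_score
print(current_ability)  # 95 — an A
```

The averaged grade punishes the student for *when* they learned, not *whether* they learned.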
I began researching how to arrive at a course grade that actually reflected ability—not just the averaging I was familiar with and somehow never questioned (or was even taught about in grad school). I spent months reading up on grading from experts like Marzano, O’Connor, and even some stuff from Alfie Kohn. I moved towards a system that showed where students were at the very moment of the grading term’s end without penalizing them for understanding the content slowly at first, or even having those bad days that students inevitably have. This was how I came to use Proficiency-Based Grading (PBG), and subsequently the kind of no-prep quizzes that haven’t added anything to my planning time in years.
If you’re ready for that, hooray! If not, at least consider 1) NOT averaging grades, as well as 2) delaying your assessments until students have already shown you that they understand the content!
*See this post for all other grading schemes.*
Here’s a new idea inspired by advice I was giving on various DEA and Proficiency grading weights. In other posts, I’ve written how my DEA weight has been anywhere from 0% to 50% of the grade. You could also try this sliding scale throughout the year…
Quarter 1: DEA = 100%, Proficiency = 0%
Quarter 2: DEA = 50%, Proficiency = 50%
Quarter 3: DEA = 10%, Proficiency = 90%
Quarter 4: DEA = 0%, Proficiency = 100%
A grading scheme like this would establish very clear expectations of how important it is to exhibit behaviors and routines that lead to language acquisition in class (e.g. Look, Listen, Ask). This would work best if you have the admin support to manually override the final grade with just one Proficiency grade from Quarter 4, as suggested in other iterations of my grading systems. Why? We don’t reaaaaally want the 4 quarters to be averaged, but if they are it’s not the end of the world. This kind of grade is far more forgiving so the focus can be on input and not assessments.
N.B. Proficiency is given 0% weight at the start of the year. This doesn’t mean that students see a “0” in the gradebook. It means that their 95, which they do see in the gradebook, holds 0% weight, because the sliding scale places all 100% of the weight on DEA for first quarter in order to set expectations and establish routines.
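The sliding scale above can be sketched as a small calculation. The quarter-by-quarter weights come straight from the scheme; the sample scores are hypothetical:

```python
# Weights per quarter from the sliding-scale scheme: (DEA, Proficiency).
WEIGHTS = {
    1: (1.00, 0.00),
    2: (0.50, 0.50),
    3: (0.10, 0.90),
    4: (0.00, 1.00),
}

def quarter_grade(quarter, dea_score, proficiency_score):
    """Combine the two gradebook categories using that quarter's weights."""
    dea_w, prof_w = WEIGHTS[quarter]
    return dea_w * dea_score + prof_w * proficiency_score

# A Proficiency score of 95 appears in the gradebook but "holds 0% weight" in Quarter 1...
print(quarter_grade(1, dea_score=100, proficiency_score=95))  # 100.0
# ...while by Quarter 4 it is the entire grade.
print(quarter_grade(4, dea_score=100, proficiency_score=95))  # 95.0
```

This also shows why a manual override with the Quarter 4 Proficiency grade matters: averaging the four quarterly results would pull the DEA-heavy early quarters back into the final grade.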