Considering how impersonal the year felt, the responses from this end-of-year survey support an early prediction many of us had: learning and growth would take place this year after all, though in forms certainly different from what we've seen in the past. To be clear, "learning loss" is a myth, and you should stop anyone trying to talk about it dead in their tracks. You simply cannot lose what you never had in the first place. It was a talking point used to get kids back into schools ASAP, and nothing more. If students, or even just their learning, were truly the priority, the conversation would be about improving living conditions for families at the societal level, as well as fully funding our public schools.
Anyway, let's start with the first question on my mind: grading. I've settled on my system after experience with a LOT of different ones, but what about students? The open-ended responses explaining what kind of grading students prefer are quite genuine. Scroll through the slideshow to see:
On my path towards simplifying everything I possibly can about teaching, this next grading idea is quite promising. Don't get me wrong, my expectations-based grading rubric has worked wonders in terms of flexibility, equity, and efficiency. This new idea just complements the rubric by aligning more of what is expected during class with how students arrive at the course grade. It also adds more varied gradebook evidence.
In this most unusual of teaching years, one problem we ran into was how to get evidence of learning, especially when students weren't in class. The best solution I used was My Time, the form students filled out to get equal credit by reading on their own and showing their understanding. Otherwise, the typical evidence I collected was fairly simple: upload/share a picture of the day's "work" done in the notebook. At some point, though, I noticed that students weren't reading daily from the digital class library (a major course expectation), so I replaced that weekly notebook pic with checking the digital library (a Google Doc) and reporting how many days students accessed it. To my disappointment, though not to my surprise, very few students were spending any time at all in the Google Doc. Admittedly, there's no way to know if the students who did access it WERE reading, and we gotta take that on faith, but the majority weren't even opening the document!

So, effective immediately, I'm removing all expectations of students reading at home. This is BIG! However, I'm still maintaining the expectation of reading something old and something new every day, which means the adjustment is to build this into class time for about 5-10 minutes. This is different from FVR (Free Voluntary Reading), which lasts 15-20 minutes on one to two days a week. I like "Free Reading Fridays" and then "Read Whatever Wednesdays" once it really gets rolling. Also, it no longer matters if a kid goes home to a peaceful room and naps, then spends hours reading for school, if they go directly to a part-time job, or if they take care of family members. This update is more equitable, and maintains a focus on reading. A simple Google Form follow-up ("What Did You Read?") provides evidence for the gradebook.
The concept is simple: you establish criteria students must meet in order to get an A in the class, but keep traditional assessments out of it completely.
Sure, you can still give quizzes if you want. You can even score them and provide feedback, too. Truth is, none of that is necessary to set expectations for class, and for students to meet those expectations. Here’s the process…
**Any mention of Google Docs means them being used as screen share during Zoom—what was projected in class—NOT for any student editing.**
This year, I'm pushing the boundaries of streamlining teaching. For years, my students have used one rubric to self-assess one grade at the end of a term. Google Docs have always been my in-class go-to for organization and providing input, but a few updates have resulted in magic…
I've had great success reporting scores for any homework, assignments, and quizzes in a 0%-weighted "portfolio" grading category, then using those scores as evidence to double-check and confirm each student's self-assessed course grade based on Proficiency Rubrics. However, I'm constantly open to streamlining any teaching practice, so I've just updated my rubrics, distilling them into a single one. Students still self-assess their own estimated ACTFL Proficiency Level, but that level is now independent of the grade they also self-assess. So, what's the grade based on? Instead of proficiency, it's based on course expectations of receiving input! After all, input causes proficiency, so why not go right to the source?
Move over Proficiency-Based Grading (PBG)! Hello…Expectations…Based…Grading (EBG)? It's not as wacky as it sounds, trust me. In fact, it's probably the least restrictive grading practice next to Pass/Fail, yet it still holds students accountable and provides all the flexibility I've enjoyed thus far. Here's the rubric: