I’ve had great success reporting scores for homework, assignments, and quizzes in a 0% grading category portfolio, and then using those scores as evidence to double-check and confirm each student’s self-assessed course grade based on Proficiency Rubrics. However, I’m always open to streamlining any teaching practice, so I’ve just updated my rubrics, distilling them into a single one. Students still self-assess their estimated ACTFL Proficiency Level, but that level is now independent from the grade they also self-assess. So, what’s the grade based on? Instead of proficiency, it’s based on course expectations of receiving input! After all, input causes proficiency, so why not go right to the source?
Move over, Proficiency-Based Grading (PBG)! Hello…Expectations…Based…Grading (EBG)? It’s not as wacky as it sounds, trust me. In fact, it’s probably the least restrictive grading practice next to Pass/Fail, yet it still holds students accountable and provides all the flexibility I’ve enjoyed thus far. Here’s the rubric:
I was inspired to update my rubrics when I realized I’ve been constantly reinforcing rules because students have Latin just 1x/week, saying “you really only need to do 3 things in this class in order to be successful: Look, Listen, and Ask (when language isn’t clear)!” These are my Daily Engagement Agreements (DEA), my class rules. It hit me that they’re my rules for a reason, so I might as well focus on that reason to the point of making it the whole grade. In fact, DEA has been worth 100% of a student’s grade in the past. This update isn’t all that different, but the focus on receiving input supports teaching practices as well.
Students need input, and they might be doing (or not doing) something to receive it. The result of input is proficiency, though it varies from student to student. This individual variance hasn’t been an issue when grading with the Proficiency Rubrics because they’re based on realistic goals. As a result, most students have been meeting those goals and earning As and Bs. Still, even with this reasonable grading system, there have been times when students have made it hard for me to provide CI on a daily basis, and even harder for themselves to receive it. The updated rubric addresses this. Of course, I blame an educational system in which students often confuse rigor and challenge with class being difficult and painful, and assume that if class isn’t difficult they don’t need to listen, etc. These damaged students—and they ARE damaged because they no longer feel any joy from learning—are often held hostage by grading. As one teacher recently posted to Facebook, their students “don’t do anything unless they know something counts towards their grade.” Instead of holding them hostage, though, we can combine a reasonable grading system with slightly more accountability, perhaps the final touch that’s been missing from the Proficiency Rubrics.
What accountability? Training students to receive CI in a classroom requires following routines and meeting other expectations. Some might call these behaviors. I don’t see it that way. And no, if a neurodivergent kid has outbursts, it will NOT affect their grade. Accommodations are made, and those accommodations apply to grading rubrics, etc. Instead of grading behaviors, habits, or any rules used to create this ideal environment for acquisition, the updated rubric grades the result of potentially not doing those things. This is similar to when I used to grade the Daily Engagement Agreements (DEA), except that there’s now ZERO work and record keeping to be done. Still, some see this as behavior and/or compliance. That’s OK. It might be a fine line, but again, there’s no denying that 1) students need input, and 2) they might be doing (or not doing) something to receive it. It makes sense, then, to grade what students do according to how we provide CI in our classroom. N.B. grading systems in danger of measuring behavior usually involve students meeting expectations while acting a different way (i.e. constantly chatting with neighbors, yet acing every quiz). I agree that this kind of grading is unethical. However, there is no scenario in which a student can actually receive a sufficient amount of input while not following the criteria on the rubric.
Rather than divide things into different grading categories, the updated rubric is so global that it takes into account anything students do (or don’t do) in your classroom. For example, if you give input-based homework, yet students don’t do it (because the grading category is only 10% under your current system), those students aren’t receiving as much input, plain and simple. This has a direct effect on how well they will understand the target language. Sure, individual acquisition rates vary, but it’s all contingent upon input (e.g. a fast processor who doesn’t listen or read won’t increase their proficiency, while a slow processor who does listen and read might surpass them from all that input!). So, with the rubric, not doing homework could result in a grade anywhere from 85 down to 55, depending on how important that input outside of class is to you, and how often the student doesn’t do it. Why such a broad range? The “and/or” detail in the rubric allows you to focus on all of the criteria, or just one. Think of it as detailed feedback, almost along the lines of the single-point rubric, celebrating strengths and identifying weaknesses.

How does it work? For example, a student who always does homework, but sometimes doesn’t listen to you in class, would have an 85 because they’re missing out on input. Show that to the student so they can begin meeting expectations. Another student might listen well in class, but sometimes not do homework. If that homework is important to you/your department, and it really is input-based, the student would also have an 85 for missing out on input, but for slightly different reasons than that other classmate. It’s important to recognize that these rubrics are NOT designed to fail students! They do, however, provide a bit of leverage when dealing with those who think your class is easy-peasy-no-need-to-pay-attention, etc., and they come in handy during meetings with DAPS (department leaders, admin, parents, students).
Remember, since you’ve established expectations in the course syllabus, and you have the 0% grading category portfolio, you have evidence to support the grade. Here’s another example of this rubric at work…
Consider a student who constantly disrupts class (yet not enough for disciplinary action), blows off class work, and as a result understands a bit less than other students. When she self-assesses a grade of 95, you can point to that 85 criteria and remind her that she meets expectations sometimes, and that if she wants an A, she’ll have to start meeting them more often. This isn’t a student who processes language slowly due to internal constraints. This is a student who’s doing (and not doing) something that will keep her proficiency low unless something changes, as reflected in the criteria. I chose the criteria I did based on my experience with students 1) not listening, 2) missing class, and 3) not reading texts. I encourage you to identify what prevents your students from receiving input the most, and then establish your own criteria for meeting expectations.
You’ll also notice that I include only comprehension descriptions for the estimated ACTFL Proficiency Levels. For years, my rubrics have included “how well students are understood in the target language.” However, since spoken and written proficiency are a result of listening to and reading the target language, testing and grading how well students speak/write isn’t necessary. Of course, we COULD, but we don’t HAVE TO, at least while students are still acquiring the language. Thus, I choose not to.