Grading: A Zero-Autonomy Quick Fix

After reviewing my NTPRS 2018 presentation with someone earlier today, I stumbled upon a way to demystify the concept while also providing an option for immediate implementation without ANY changes to those pesky school-mandated, unchangeable grading categories (if you’re in that unlucky situation). In each grading category (see the sketch after this list):

  1. Create assignments that do NOT count towards the final grade (usually a check box)
  2. Create ONLY ONE assignment that DOES count towards the final grade
  3. Use a holistic rubric (ANY holistic rubric) to arrive at that grading category grade
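
If it helps to see that bookkeeping spelled out, here’s a minimal sketch in Python. The category contents, scores, and the four-level rubric mapping are all hypothetical placeholders, not pulled from any real gradebook:

```python
# Hypothetical sketch of the "zero-autonomy" setup for ONE grading category.
# Ungraded assignments are recorded as evidence but excluded from the grade;
# a single holistic rubric score determines the entire category grade.

# Evidence only: these do NOT count toward the final grade (the "check box").
evidence = {
    "quiz 1": 88,
    "quiz 2": 95,
    "homework 3": 90,
}

# The ONE assignment that counts: a holistic rubric level mapped to a grade.
# This 4-level rubric is a made-up example; any holistic rubric works.
rubric_to_grade = {4: "A", 3: "B", 2: "C", 1: "D"}

holistic_level = 3  # chosen using the evidence above
category_grade = rubric_to_grade[holistic_level]

print(f"Evidence on file: {evidence}")
print(f"Category grade: {category_grade}")
```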


Input Expectations: The Updated ONE Rubric

I’ve had great success reporting scores for homework, assignments, and quizzes in a 0%-weighted portfolio grading category, then using those scores as evidence to confirm each student’s self-assessed course grade based on Proficiency Rubrics. However, I’m constantly open to streamlining any teaching practice, so I’ve just updated my rubrics, distilling them into a single one. Students still self-assess their own estimated ACTFL Proficiency Level, but that level is independent of the grade they also self-assess. So, what’s the grade based on? Instead of proficiency, it’s based on course expectations of receiving input! After all, input causes proficiency, so why not go right to the source?
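
To illustrate just the double-check step (not the rubrics themselves), here’s a hypothetical Python sketch; the scores, cutoffs, and letter bands are all invented for the example:

```python
# Hypothetical check of a student's self-assessed grade against the scores
# sitting in a 0%-weight portfolio category. Cutoffs are illustrative only.

portfolio_scores = [82, 90, 88, 94]   # homework/quiz scores, 0% weight
self_assessed_grade = "A"

def letter(avg: float) -> str:
    """Map an average score to a letter band (made-up cutoffs)."""
    if avg >= 90: return "A"
    if avg >= 80: return "B"
    if avg >= 70: return "C"
    return "D"

evidence_avg = sum(portfolio_scores) / len(portfolio_scores)
suggested = letter(evidence_avg)

if suggested == self_assessed_grade:
    print(f"Evidence ({evidence_avg:.1f}) supports the self-assessed {self_assessed_grade}.")
else:
    print(f"Evidence suggests {suggested}; talk it over before confirming {self_assessed_grade}.")
```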

Move over Proficiency-Based Grading (PBG)! Hello…Expectations…Based…Grading (EBG)? It’s not as wacky as it sounds, trust me. In fact, it’s probably the least-restrictive grading practice next to Pass/Fail, yet still holds students accountable and provides all the flexibility I’ve enjoyed thus far. Here’s the rubric:

[Image: the updated Input Expectations rubric]


Averaging & Delayed Assessments

My interest in assessment & grading began in the first few months of teaching right out of grad school. I noticed that some students did well with the content from the first few textbook chapters, but others didn’t do well at all, and so began the year with low self-efficacy that was hard to turn around. By November, I realized that students were comfortable with the vocabulary and grammar from those first few chapters. Then it hit me: if I had just delayed those first assessments by a month or so, ALL STUDENTS would have aced them! What is more, the students who actually improved had that lower 1st quarter grade (e.g. C) averaged with the new, higher grade (e.g. A), producing a skewed reflection of their ability (e.g. B). None of this made sense; I was playing gods & goddesses with my students’ GPAs.
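
The arithmetic behind that skew is simple enough to sketch. Here’s a tiny worked example on a 4-point scale (the scale and values are just for illustration):

```python
# Why averaging misrepresents growth: a C in quarter 1 averaged with an A in
# quarter 2 reports a B, even though the student now performs at an A.
grade_points = {"A": 4.0, "B": 3.0, "C": 2.0}

q1, q2 = grade_points["C"], grade_points["A"]
averaged = (q1 + q2) / 2   # 3.0 -> reported as a B
most_recent = q2           # 4.0 -> reflects current ability

print(f"Averaged: {averaged} (B)  vs.  most recent: {most_recent} (A)")
```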

I began researching how to arrive at a course grade that actually reflected ability, not just the averaging I was familiar with and somehow never questioned (nor was even taught about in grad school). I spent months reading up on grading from experts like Marzano, O’Connor, and even some Alfie Kohn. I moved towards a system that showed where students were at the very moment of the grading term’s end without penalizing them for understanding the content slowly at first, or for having those bad days that students inevitably have. This was how I came to use Proficiency-Based Grading (PBG), and subsequently the kind of no-prep quizzes that haven’t added anything to my planning time in years.

If you’re ready for that, hooray! If not, at least consider 1) NOT averaging grades, and 2) delaying your assessments until students have already shown you that they understand the content!

NTPRS 2017: 10 Workshops On Assessment & Grading!

Assessment & Grading is, by far, the most frequent topic I’m asked about, and this year’s National TPRS Conference features 10 workshops on it between Thursday and Friday! Based on the descriptions, there’s a mix of proficiency people, skill people, tech-tool people, speaking people, rubric people, and more! I’ll be presenting one of those workshops, and have noticed that my thinking is a little different. I do recommend getting to as many of the 10 as you can, so in case you miss out on mine, here’s a brief look at what I’m about…

RLMTL
I have a very simple approach to assessment because the answer is always RLMTL (i.e. Reading and Listening to More Target Language). That is, there is NO assessment I could give that WOULD NOT result in me providing more input. Therefore, my assessments are input-based and very brief. In fact, what many consider assessments—for me—are actually just simple quizzes used to report scores (see below).

I prefer to assess students authentically.


Assessment & Grading: Game Changers

When teachers complain about certain practices that create more work for themselves and take time away from students acquiring the target language, my response is usually “well then, don’t use them.” Follow the logic below to see why you need to wrap your head around changing Assessment & Grading practices so that you can use your prep/planning time, and your personal life, for more useful and enjoyable endeavors…


2016-17 DEA

*See this post for all other grading schemes*

In its current form, there are only 3 agreements as part of the Daily Engagement Agreements (DEA): Look, Listen, and Ask. Older versions of DEA had many more, but the 0% Portfolio grading category I now include in PowerSchool takes care of assignments previously covered under “Be Prepared,” and anything else I need to keep track of. There’s no need for “No English” because “Listen” covers that. There’s no need for posture agreements because “Look” covers that. Last week a student was lying down between two chairs yet could read the board and was responding with the entire class. This kid understood Latin and was participating…he was just tired. An older system would have made that an issue when there wasn’t one. For me, DEA is super streamlined at this point, which means super clear for DAPS (department heads, admin, parents, students).

In terms of weighting, I ended up using last year’s sliding scale idea. Previously, I’ve written how my DEA weight had been anywhere from 0% to 50% of the grade. Colleagues at my new school liked the new sliding scale, but were a little uncomfortable with the 100/0 and 0/100 percentages at the start and end of the year. No problem. After a simple edit, the scale still slides, but from a 90/10 split to a 10/90 split so that at least a little bit of both DEA and Proficiency always counts. I like this one because DEA now holds most of the weight for half the year, and is equal to Proficiency in 3rd quarter (see the sketch after the weights below for the math). After all, if students are Looking, Listening, and Asking when they don’t understand, they’ll acquire enough language to “understand most of what they hear and read,” which is honestly the most realistic expectation we could have, and is reflected in that 90% Proficiency weight in June.

N.B. If, somehow, students don’t Look, Listen, or Ask and STILL understand, just don’t take off DEA points!

Quarter 1: DEA = 90%, Proficiency = 10%
Quarter 2: DEA = 75%, Proficiency = 25%
Quarter 3: DEA = 50%, Proficiency = 50%
Quarter 4: DEA = 10%, Proficiency = 90%
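
To make the sliding-scale arithmetic concrete, here’s a small Python sketch applying each quarter’s weights to one pair of hypothetical category scores:

```python
# Sliding-scale course grade: each quarter weights DEA and Proficiency
# differently, shifting accountability from engagement to proficiency.
weights = {            # quarter: (DEA weight, Proficiency weight)
    1: (0.90, 0.10),
    2: (0.75, 0.25),
    3: (0.50, 0.50),
    4: (0.10, 0.90),
}

dea_score, proficiency_score = 98, 85   # hypothetical category scores

for quarter, (w_dea, w_prof) in weights.items():
    grade = w_dea * dea_score + w_prof * proficiency_score
    print(f"Q{quarter}: {grade:.1f}")
# Q1: 96.7, Q2: 94.8, Q3: 91.5, Q4: 86.3 -- the same two scores count
# differently as the weight slides toward Proficiency by June.
```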


Grading vs. Reporting Scores: Clarification

In the recent sliding scale scheme, Proficiency is given 0% weight at the start of the year. This doesn’t mean that students see “0” in the gradebook. What it means is that their 95, for example (which they do see in the gradebook), holds 0% weight because the sliding scale places all 100% of the weight on DEA in first quarter in order to set expectations and establish routines. By the fourth quarter, 100% of the weight is on Proficiency, and whenever possible, we manually change the entire course grade to that final Proficiency number/letter so nothing else averages throughout the year.
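
In other words, the score is visible but weightless. A tiny sketch of that first-quarter case, using invented numbers:

```python
# A visible-but-weightless score: the 95 shows in the gradebook, but with a
# 0% Proficiency weight in Q1 it contributes nothing to the course grade.
dea, proficiency = 92, 95
w_dea, w_prof = 1.00, 0.00   # Q1 in the original 100/0 sliding scale

course_grade = w_dea * dea + w_prof * proficiency
print(course_grade)  # 92.0 -- entirely the DEA score
```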