We Should Grade Performance & Competency…Shouldn’t We…?!

Someone in my grad program recently mentioned that grading should be based completely on what students can do. This idea was challenged by another, who said that it certainly makes sense if you’re “the last step” before a career (e.g., administering licensing tests, or proving you can do an actual job via some performance), but what about when students are still in the learning phase? That was a good point. How long does a typical learning phase last before you’d expect, or even need, to grade performance & competency? What if you, the person ultimately responsible for that grade, are not “the last step?”

What if you’re a college instructor for a 100-level survey course? What if you’re a 10th grade math teacher? What if you’re a middle-school science teacher? What if you’re an elementary school reading specialist? Surely, a high-functioning society doesn’t rely on any of these people giving summative grades based on performance & competency as if they were “the last step.” Placing these kinds of obstacles during the learning process, long before the rubber meets the road, isn’t something we should be doing.

This deserves some thought…


Year In Review: Updated Grading w/ Standards

My one-standard, self-assessed grading system of receiving input (re: the Input Expectations rubric) has been working out just fine for several years now. “Fine” is…well…fine…but we as educators should be open to refining practices whenever we get new data, especially when “fine” has the opportunity to become something awesome. This year I was able to do something better, getting ever so close to that awesome. If what I’ve been doing could be considered 85% of the way toward equitable, time-saving grading that shifts focus to learning, I’m now probably at 90%.

These updates are the result of some research I’ve been doing using primary sources from Grading For Equity (Feldman, 2018), Fair Isn’t Always Equal (Wormeli, 2018), Assessment 3.0 (Barnes, 2015), Hacking Assessment (Sackstein, 2015), and Ungrading (Blum, 2020), along with 20 or so additional research reports on related topics. Updates included introducing new standards one by one, with their weights changing throughout the year. The system also moved from 100% self-graded to 100% teacher-graded. I’m keeping some of these updates for next year, but more on that later. Let’s take a look at those standards first…

Process
Process refers to the things students *must* do to acquire language. It’s essentially what the Input Expectations rubric was all along. Unlike the bias-ridden, controlling rules that have circulated in the language teaching profession for some time (e.g., “eyes up front, nothing in laps/on desks, intent to understand,” etc.), there is no dispute that students need input, and there are only three modes of receiving it: reading, viewing, and listening. That’s it, and there’s no way to distill it further. Since a focus on providing input requires plenty of time and energy, there’s not much convincing reason to do, or grade, much else. Therefore, my grading system aligns 1:1 with the instructional design.

The processes found in the older Input Expectations rubric have been 100% of my students’ grade for years. Students self-assess how well they’ve been receiving input about four times per year, and that’s it. How do they receive input? Well, I’ve been working under the Look, Listen, Ask framework, but have now separated out that last step into a Respond/Show/Ask workflow. If students Respond (the target language takes priority, but English is fine), we’re good to go. If they can’t, students Show their understanding (e.g., gestures, expressions). If students can’t do either, then it’s time to Ask. This update has the benefit of getting more engagement from students without requiring some kind of “choral response” rule. Also, the students who can respond, but choose not to, start to realize it’s easier to respond than to have me check their comprehension individually (because otherwise I get no data, and no data is bad).


CI, Equity, User-Error & Inequitable Practices

I don’t agree that the statement “CI is equitable” is harmful. Yet, I also don’t think the message behind “CI isn’t inherently equitable” is wrong, either. John Bracey said one can still “do racist stuff” while teaching with CI principles. Of course, we both know that’s an issue with content, not CI. Still, I get the idea behind that word “inherent.” In case you missed the Twitter hubbub, let me fill you in: people disagree with a claim that CI is “inherently equitable,” worried that such a message would lead teachers to say “well, I’m providing CI, so I guess I’m done.” I don’t think anyone’s actually saying that, but still, I understand taking that position.

Specifically, the word “inherent” seems to be the main issue. I can see how it could be read as taking responsibility away from the teacher, who should be actively working against inequity and dismantling systemic racism. However, teachers haven’t been as disengaged from that equity work as the worry suggests. I’ve heard “CI levels the playing field” many times over the years from teachers reporting positive changes to their programs’ demographics. What else could that mean if not equity? But OK, I get it. If “inherent” is the issue, maybe “CI is more equitable” will do. If so, though, at what point does a teacher go from having a “more equitable” classroom to an “equitable” one? And is there ever a “fully equitable” classroom? I’m thinking no. So, if CI is central to equity (you cannot do the work of bringing equity into the classroom if students aren’t understanding, i.e., step zero), and nothing has been shown to be more equitable than CI, well then…

For fun, though, I’ll throw in a third perspective. Alongside “CI is equitable” and “nothing makes CI equitable per se,” how about “CI is the only equitable factor?” I’m sure that sounds nuts, but here goes: since CI is independent of all the content, methods, strategies, etc. that teachers choose, as a necessary ingredient for language acquisition, CI might be the only non-biased factor in the classroom. Trippy.

I don’t think that third perspective is really worth pursuing, though, so let’s get back to the main points. Again, I understand the message behind “CI isn’t inherently equitable” as a response to “CI is equitable.” However, I suspect the latter is said by a lot of people who aren’t actually referring to CI. Don’t get me wrong; some get it, and are definitely referring to how CI principles reshaped their language program to mirror the demographics of the school. However, others are merely referring to practices they think are “CI teaching.” This will be addressed later with the Dunning-Kruger effect. Otherwise, let’s talk equity…


Grading: A Zero-Autonomy Quick Fix

After reviewing my NTPRS 2018 presentation with someone earlier today, I stumbled upon a way to demystify the concept while also providing an option for immediate implementation without ANY changes to those pesky school-mandated, unchangeable grading categories (if you’re in that unlucky situation). In each grading category:

  1. Create assignments that do NOT count towards the final grade (usually a check box)
  2. Create ONLY ONE assignment that DOES count towards the final grade
  3. Use a holistic rubric (ANY holistic rubric) to arrive at that grading category grade; a minimal sketch of the resulting math follows this list
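To make the mechanics concrete, here’s a minimal sketch, assuming a gradebook category where practice scores are recorded at 0% weight and one holistic score determines the category grade (the names and scores are hypothetical, not from PowerSchool or any real gradebook):

```python
# Hypothetical sketch: practice work is recorded but weightless,
# and ONE holistic score decides the entire category grade.

practice_scores = {"quiz_1": 88, "dictation": 92, "reading_log": 95}  # 0% weight

def category_grade(holistic_score: float) -> float:
    """The category grade IS the single holistic rubric score."""
    return holistic_score

# Practice scores inform the holistic judgment but never enter the math.
print(category_grade(95))  # 95, regardless of what's in practice_scores
```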


Input Expectations: The Updated ONE Rubric

I’ve had great success reporting scores from any homework, assignments, and quizzes in a 0%-weighted portfolio grading category, and then using those scores as evidence to double-check and confirm each student’s self-assessed course grade based on Proficiency Rubrics. However, I’m constantly open to streamlining any teaching practice, so I’ve just updated my rubrics, distilling them into a single one. Students still self-assess their own estimated ACTFL Proficiency Level, but that level is independent from the grade they also self-assess. So, what’s the grade based on? Instead of proficiency, it’s based on course expectations of receiving input! After all, input causes proficiency, so why not go right to the source?

Move over Proficiency-Based Grading (PBG)! Hello…Expectations…Based…Grading (EBG)? It’s not as wacky as it sounds, trust me. In fact, it’s probably the least-restrictive grading practice next to Pass/Fail, yet still holds students accountable and provides all the flexibility I’ve enjoyed thus far. Here’s the rubric:

[Image: the updated Input Expectations rubric]


Averaging & Delayed Assessments

My interest in assessment & grading began within the first few months of teaching, right out of grad school. I noticed that some students did well with the content from the first few textbook chapters, but others didn’t do well at all, beginning the year with low self-efficacy that was hard to turn around. By November, though, I realized that those students were now comfortable with the vocabulary and grammar from the first few chapters of the textbook. Then it hit me: if I had just delayed those first assessments by a month or so, ALL STUDENTS would have aced them! What’s more, the students who actually improved had that lower 1st quarter grade (e.g., C) averaged with the new, higher grade (e.g., A), producing a skewed reflection of their ability (e.g., B). None of this made sense; I was playing gods & goddesses with my students’ GPA.
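Here’s that skew in miniature, assuming a hypothetical scale where a C is 75, an A is 95, and a B is 85 (the mapping is mine, for illustration only):

```python
# The averaging problem: Q1 = C (75) while still learning, Q2 = A (95) after.
quarter_grades = [75, 95]

average = sum(quarter_grades) / len(quarter_grades)  # 85.0 -> reported as a B
most_recent = quarter_grades[-1]                     # 95   -> the ability shown NOW

print(average, most_recent)
# Averaging reports a B for a student currently performing at an A.
```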

I began researching how to arrive at a course grade that actually reflected ability, not just the averaging I was familiar with and somehow never questioned (nor was ever taught about in grad school). I spent months reading up on grading from experts like Marzano, O’Connor, and even some stuff from Alfie Kohn. I moved towards a system that showed where students were at the very moment the grading term ended, without penalizing them for understanding the content slowly at first, or for having those bad days that students inevitably have. This was how I came to use Proficiency-Based Grading (PBG), and subsequently the kind of no-prep quizzes that haven’t added anything to my planning time in years.

If you’re ready for that, hooray! If not, at least consider 1) NOT averaging grades, as well as 2) delaying your assessments until students have already shown you that they understand the content!

NTPRS 2017: 10 Workshops On Assessment & Grading!

Assessment & Grading is, by far, the most frequent topic I’m asked about, and this year’s National TPRS Conference features 10 workshops on it between Thursday and Friday! Based on the descriptions, there’s a mix of proficiency people, skill people, tech-tool people, speaking people, rubric people, and more! I’ll be presenting one of those workshops, and have noticed that my thinking is a little different. I do recommend getting to as many of the 10 as you can, so in case you miss out on mine, here’s a brief look at what I’m about…

RLMTL
I have a very simple approach to assessment because the answer is always RLMTL (i.e. Reading and Listening to More Target Language). That is, there is NO assessment I could give that WOULD NOT result in me providing more input. Therefore, my assessments are input-based, and very brief. In fact, what many consider assessments—for me—are actually just simple quizzes used to report scores (see below).

I prefer to assess students authentically.


Assessment & Grading: Game Changers

When teachers complain about certain practices of theirs that create more work for themselves and take time away from students acquiring the target language, my response is usually “well then, don’t use them.” Follow the logic below to arrive at why you need to wrap your head around changing assessment & grading practices, so that you can spend your prep/planning time, and your personal life, on more useful and enjoyable endeavors…


2016-17 DEA

*See this post for all other grading schemes*

In its current form, the Daily Engagement Agreements (DEA) include only 3 agreements: Look, Listen, and Ask. Older versions of the DEA had many more, but the 0% Portfolio grading category I now include in PowerSchool takes care of assignments previously covered under “Be Prepared,” and anything else I need to keep track of. There’s no need for “No English” because “Listen” covers that. There’s no need for posture agreements because “Look” covers that. Last week a student was lying down between two chairs yet could read the board and was responding with the entire class. This kid understood Latin and was participating…he was just tired. An older system would have made that an issue when there wasn’t one. For me, DEA is super streamlined at this point, which means super clear for DAPS (department heads, admin, parents, students).

In terms of weighting, I ended up using last year’s sliding scale idea. I’ve previously written about how my DEA weight has been anywhere from 0% to 50% of the grade. Colleagues at my new school liked the new sliding scale, but were a little uncomfortable with the 100/0 and 0/100 percentages at the start and end of the year. No problem. After a simple edit, the scale still slides, but from a 90/10 split down to a 10/90 split, so at least a little bit of both DEA and Proficiency counts all year. I like this one because DEA now holds most of the weight for half the year, and is equal to Proficiency in 3rd quarter. After all, if students are Looking, Listening, and Asking when they don’t understand, they’ll acquire enough language to “understand most of what they hear and read,” which is honestly the most realistic expectation we could have, and is reflected in that 90% Proficiency weight in June. (A quick sketch of the weighting math follows the quarter breakdown below.)

N.B. if, somehow, students don’t Look, Listen, or Ask and STILL understand, just don’t take off DEA points!

Quarter 1
DEA = 90%
Proficiency = 10%

Quarter 2
DEA = 75%
Proficiency = 25%

Quarter 3
DEA = 50%
Proficiency = 50%

Quarter 4
DEA = 10%
Proficiency = 90%
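
Here’s a minimal sketch of that weighting math, using the quarter splits above (the function and variable names are mine, not PowerSchool’s):

```python
# Quarter weights from the sliding scale above: (DEA, Proficiency).
WEIGHTS = {
    1: (0.90, 0.10),
    2: (0.75, 0.25),
    3: (0.50, 0.50),
    4: (0.10, 0.90),
}

def quarter_grade(quarter: int, dea: float, proficiency: float) -> float:
    """Weight a DEA score and a Proficiency score by the quarter's split."""
    w_dea, w_prof = WEIGHTS[quarter]
    return w_dea * dea + w_prof * proficiency

# Early in the year, expectations dominate; by June, proficiency does.
print(quarter_grade(1, dea=95, proficiency=70))  # 92.5
print(quarter_grade(4, dea=95, proficiency=70))  # 72.5
```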


Grading vs. Reporting Scores: Clarification

In the sliding scale scheme (here, the original 100/0 version), Proficiency is given 0% weight at the start of the year. This doesn’t mean that students see “0” in the gradebook. It means that their 95, for example (which they do see in the gradebook), holds 0% weight, because the scheme places 100% of the weight on DEA for first quarter in order to set expectations and establish routines. By the fourth quarter, 100% of the weight is on Proficiency, and whenever possible, we manually change the entire course grade to that final Proficiency number/letter so that nothing else from earlier in the year gets averaged in.
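
To see the reporting-vs.-counting distinction in code, here’s a minimal sketch using that original 100/0 first-quarter split (the scores and names are hypothetical):

```python
# A score can be visible in the gradebook yet carry no weight.
reported = {"proficiency": 95, "dea": 88}      # both scores appear in the gradebook
q1_weights = {"proficiency": 0.0, "dea": 1.0}  # the original 100/0 Q1 split

course_grade = sum(reported[k] * q1_weights[k] for k in reported)
print(course_grade)  # 88.0 -> the 95 is reported, but counts for 0%
```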