JDGI: A New Equitable Grading Acronym

I just spent 12 hours in the Grading for Equity Virtual Institute, and my main takeaway comes down to what I’m calling JDGI. After all, no one wants to be judgy, right? So JDGI is a handy acronym for how to grade equitably. I’m not going to spoil the book, the institute, or any of the work that Joe Feldman and Dr. Shantha Smith have been doing, but the basic idea is:

Just Don’t Grade It

Do you give homework? Fine, just don’t grade it. Do you expect students to participate? What if your idea of participation is biased? But OK, fine, just don’t grade it. Are you under the impression that effort is observable and measurable? Might wanna check yourself on that one, but alright, just don’t grade it. Do you give tests that result in scores of X/Y points (e.g., 7/10, or 89/100)? Yeah, just don’t grade it (i.e., ditch those points in favor of something like a checklist of concepts the student has shown they understand).

So yeah, you gotta grade something else…

Overall, the institute confirmed a lot of what I’ve known and shared on this blog for years. Of course, I didn’t just get lucky ending up with more streamlined (and equitable) grading practices. I based most of my practices on the research of Marzano, O’Connor, and Dueck, whose work was foundational for Joe et al. So, if you’re looking for any more convincing evidence to stop averaging grades, giving group grades, offering extra credit, and assigning zeros, and to start using 50/55 as the lowest grade, a 0-4 scale, accepting late work, and getting into grading standards, Grading for Equity has it all right there. Check it out. They have the prologue and Chapter 1 for you to read, gratis.

Perhaps the most notable equitable grading practice we share is the concept of a 0% portfolio + a single 100% grading category. I’ve known for a long time that the fewer categories, the better (e.g., two Formative/Summative categories vs. five Homework/Participation/Quizzes/Tests/Projects), but I didn’t know that others went as far as I did with just a single category. Joe calls this the “standards” category, and all the other evidence in the gradebook is used to show that students meet those standards. N.B. This is a much clearer way to conceptualize standards-based grading (SBG), probably because most gradebooks I’ve seen have 10+ standards each term. Anyway, as the teacher gets new evidence, they update the standards to reflect where the student currently is. The benefit of this portfolio is that there’s no averaging of individual assignments, so all those low scores from September don’t get mixed with high scores from October to produce a misleading, arbitrary number. Just use the most recent performance, and you’re golden! N.B. The only averaging that happens in the gradebook is across the standards. That’s how we arrive at one term grade to represent progress. So, if you’ve got 3 standards, scores of A, B, and C would result in a term grade of B. If you’ve got 8 standards with scores of A, A, A, A, F, F, F, F, the grade would be a C.
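If it helps to see that arithmetic spelled out, here’s a minimal sketch. The letter-to-points mapping (A=4 … F=0), the rounding, and the function name are my assumptions for illustration, not anything prescribed by the book or the institute:

```python
# Minimal sketch of the term-grade arithmetic described above.
# Assumed (mine, not the book's): A=4, B=3, C=2, D=1, F=0, the term
# grade is the plain average of the standards, rounded to a letter.

LETTER_TO_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
POINTS_TO_LETTER = {4: "A", 3: "B", 2: "C", 1: "D", 0: "F"}

def term_grade(standard_letters):
    """Average the current letter for each standard into one term letter."""
    points = [LETTER_TO_POINTS[letter] for letter in standard_letters]
    average = sum(points) / len(points)
    # Rounding policy is an assumption; the examples above land on whole numbers.
    return POINTS_TO_LETTER[round(average)]

print(term_grade(["A", "B", "C"]))                           # B
print(term_grade(["A", "A", "A", "A", "F", "F", "F", "F"]))  # C
```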

That portfolio evidence really is key to making this work. It could be anything, too, making grading super flexible, which is what teachers (and students) need. For example, the teacher could determine that on any assignment, classwork, test, etc., a student has met Standard 1.Whatever, and update that grade in the 100% standards column. Don’t worry about any previous low scores in the portfolio. For any given standard you’re assessing, you’re looking for the most recent performance, which, as a summative, is likely to be higher. If it’s not, don’t count it! That student is still learning, which is formative. In other words, the formative/summative distinction is arbitrary. In reality, it’s all about where the individual is in their learning!

What standards? How many?
I cannot say that any world language standards have been helpful when it comes to teaching content and grading anything about the language learner. Sorry, not sorry, ACTFL and MA Frameworks. In spite of all those fancy documents, quite simply, kids are learning words to communicate in the classroom while being entertained, learning something about themselves and the world, and creating something together. That’s pretty much it. No need to overcomplicate things. Other content areas certainly have it better. Seek out those standards, especially if you’ve already worked with them for curricular documents. Once you have them, narrow down which ones to grade. Remember, JDGI: not every standard has to be graded! Does your content area have 32 standards like science did? K, identify maybe 3-5 each quarter to grade. Does your content area have only a handful that are the entire year’s focus? Include them in the standards column each term, and report the ones you get evidence for as you go throughout the year. They don’t have to change, but they don’t have to stay the same, either. It depends on your content and standards. The system is actually quite easy.

There’s an example below that doesn’t require using a fancy standards-based grading (SBG) version of your gradebook. You can make it happen with what you’re already familiar with. Just create your Standards category and set it to 100%, then your portfolio category (this teacher calls it “assignments”) and set it to 0%. Then, create an assignment for each of your standards dated to the start of each term, and add assignments for whatever it is students do that shows they’re meeting the standards throughout the term, updating the standards column as you get the evidence. This setup also pushes you toward better, more focused assessments. For example, if you’re testing seven different standards on one quiz, and everything’s grouped by question type (e.g., multiple choice, open-ended), it’s going to be tricky to pinpoint what a student needs help with. Assess just one thing, or at least one thing per section.
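Here’s a rough sketch of that two-category setup, if a plain data-structure version helps. Nothing here models a real gradebook or SIS API; the category names, the 0-4 scores, and the “keep the higher score if the newer evidence is lower” rule (from the “if it’s not, don’t count it” idea above) are illustrative:

```python
# Sketch of the two-category gradebook described above. Illustrative only;
# no real gradebook/SIS software is being modeled.

CATEGORIES = {
    "Standards": 1.00,    # 100% of the term grade
    "Assignments": 0.00,  # the 0% "portfolio" of evidence
}

# One column per standard, created at the start of the term.
standards = {"Standard 1": None, "Standard 2": None, "Standard 3": None}

# Evidence lives in the 0% portfolio; it never averages into the grade.
portfolio = []

def log_evidence(assignment, standard, score):
    """Record the assignment, then update the standard with the new evidence.

    Per the post: use the most recent performance, but if the newer score
    is lower, treat it as formative and don't count it against the student.
    """
    portfolio.append((assignment, standard, score))
    current = standards[standard]
    if current is None or score >= current:
        standards[standard] = score

log_evidence("Sept quiz", "Standard 1", 2)    # early, low score
log_evidence("Oct project", "Standard 1", 4)  # newer evidence replaces it
print(standards["Standard 1"])                # 4 — no averaging with September
```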

Now, the only calculation going on in the gradebook is the term grade (shown as S2 Final Grade below), which is an average of all the standards in the 100% Standards category. Yes, averaging is bad, but most schools need a single number/letter to reflect the grading term, so the compromise is averaging just the standards. Therefore, keep the standards to a minimum. For example, a teacher who averages eight different standards together gets a less valid reflection of a student’s performance than a teacher with three or fewer. I like how this teacher below has things set up. It’s clear. It’s simple. It’s easy to work with, and the focus is more on learning and content, less on grades and individual assignments. The only other piece you might need to really make this work is retesting to allow ALL students to show mastery. If you make it optional, that’s inequitable (i.e., the higher achievers will likely be the only ones who opt to retest). So, add sections of previous standards to current assessments, and/or build in class time for retests. Require retests for students who don’t show mastery.
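And the retest piece, sketched the same way: a hypothetical mastery cutoff (I’m assuming 3 on the 0-4 scale, which the post doesn’t specify) flags who must retest, rather than leaving it to the students who would opt in anyway.

```python
# Sketch of required (not optional) retesting. The cutoff of 3 on the
# 0-4 scale and the sample data are assumptions for illustration.

MASTERY_CUTOFF = 3

scores_by_student = {
    "Student A": {"Standard 1": 4, "Standard 2": 3},
    "Student B": {"Standard 1": 2, "Standard 2": 4},
}

def required_retests(scores):
    """Return {student: [standards below the cutoff]}; everyone listed
    retests, so it isn't only the high achievers who choose to."""
    return {
        student: [std for std, score in standards.items() if score < MASTERY_CUTOFF]
        for student, standards in scores.items()
        if any(score < MASTERY_CUTOFF for score in standards.values())
    }

print(required_retests(scores_by_student))  # {'Student B': ['Standard 1']}
```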

But…
I still don’t think this is best for second languages. Why? The most recent performance used in standards-based grading (SBG) is a summative evaluation, which by definition comes “at the end of learning.” The buzzword here is “mastery.” That doesn’t work for a second language. When is the end of learning?! When do our students master the target language?! Let’s compare that to a science unit on the chemical elements of life. After weeks of learning experiences, a test could be given. Learning occurred, and it’s time for students to show their understanding of CHNOPS, right? That’s easy to conceptualize: learners engage with the material and develop understanding. For languages, though, there isn’t an end unless we artificially create a unit of content to test, just like science. In a communicative classroom, learners are acquiring words in order to be entertained, learn, and create. Furthermore, learners can engage with language yet develop at vastly different rates. Therefore, I recognize two inequitable practices when attempting to frame the learning process as students having “mastery” of language:

  1. If we test content, such as an understanding of target culture via the target language, we ignore acquisition of language and the learner’s internal syllabus.
  2. If we test anything about acquired language, such as understanding specific words/phrases, we also ignore the internal syllabus.

In other words, students could be acquiring the language just fine, yet a content assessment could suggest otherwise, just as students could be acquiring the language just fine, but a test of what’s acquired could suggest otherwise. We cannot punish slower processors any more than we can reward the faster ones. That’s the very definition of inequity. As far as I can tell, the only reason this doesn’t impact other content areas is that the learning happens via a known language. Aside from specialized vocabulary, the nuts and bolts of meaning aren’t an obstacle. However, when the language itself is the thing being learned, that’s a whole new ball game. Of course, some teachers play that same game by different rules…

Same game, different rules (literally!)
The way language teachers often assess “mastery” is by focusing on knowledge and skills: either testing grammar, or speaking/writing. On the surface, this makes sense: either a kid can identify pluperfect verbs, or they can’t. They can ask someone for the bus schedule, or they can’t. The former privileges anyone with a good memory, offers no guarantee of resulting in any proficiency, and is basically as inequitable as you can get. The latter kind of skill—as we’ve been warned—is usually just language-like behavior, especially when announced ahead of time as some formal test-taking situation. That is, teachers grading those kinds of standards rarely do so unannounced, and rarely get spontaneous production. All those “can dos” are more like “maybe dos.”

Therefore, what are equitable language standards we can actually test?

I stick with the processes all humans need to acquire language. They’re included in what I’ve been calling “expectations-based grading.” No one says kids don’t need input, and all humans are ready to receive it, right? Well, the things students do to receive input *are* my standards. They’re actually universals! Now, I could just as easily rename that as “input standards” or something, then list them at the start of my gradebook each term. However, I’m still not sold that I have to go through any of those charades whatsoever. Let’s think about it: the result of any assessment—other than real-time interaction in which the adjustment is making the language more comprehensible for the learner—is that the learner needs to read and listen to more target language. That’s *never* not the answer, and all I can do is provide as much of that as possible. Solved, no testing needed here. I know—for a fact—that some kids will understand more of the target language, and others will understand less. I don’t need a test to tell me who’s who, because it doesn’t matter. Students could be doing everything we…expect…a human to do to acquire language, and yet test results could vary. Attaching a grade to any of those tests is inequitable.

In sum, I’ll be sticking with my self-assessment grading system with input front and center, but it’s great to know that I can explain why most grading practices are inequitable, and help teachers of other content areas out of that rut.

5 thoughts on “JDGI: A New Equitable Grading Acronym”

  1. I love this book! I just did a whole presentation for my Ed.S. with it! LOL I love learning from others who want to use it in the language classroom. Thanks!

  2. I love the simplicity of this for ourselves and students. How will you be setting up categories or naming assignments that go in your gradebook?

  3. I’m so confused by this. You write about all the things we shouldn’t assess, but nothing about what we should assess (I think…?)

    • Oh…it’s standards. See the second half of the blog post for that. Choose them wisely, though. I share how the ACTFL, etc. hasn’t helped me at all, and so my standards are look, listen, and ask…you know…the things humans *must* do to acquire (i.e. get input from somewhere, process it, and ensure it’s comprehensible).

      Here’s the closing of the blog post:

      “I stick with the processes all humans need to acquire language. They’re included in what I’ve been calling “expectations-based grading.” No one says kids don’t need input, and all humans are ready to receive it, right? Well, the things students do to receive input *are* my standards.”
