ALIRA: “All Your Datum Are Belong To Us…Plz?”

In the original draft of this post, I compared two data sets of students taking the ALIRA. However, I’m not really comfortable publishing that. I really don’t need anyone playing the victim when it’s been me, going on a decade now, defending my teaching practices and the kind of Latin that I read (and write) with students. It’s too bad, too, because the data are quite compelling. Some day, I’ll share the charts. Until then, you’ll have to take my word on it. You probably already know that I don’t fuck around, either, so my word is solid.

In short, the charts will contradict the claim that reading non-Classical Latin leaves students unprepared for reading Classical Latin. They will suggest that reading non-Classical Latin texts, such as those rife with Cognates & Latinglish via class texts and novellas, puts students at no disadvantage. They will also suggest that reading Classical texts confers no particular advantage. That’s all I’m prepared to share, for now.

Once a lot more data like these are presented, though, the jury will start to come in on the matter of what kind of Latin prepares students for any other Latin. From what I’ve seen so far, it looks like A LOT of any Latin can prepare students to read other Latin, and that’s a good thing. These emerging data show that concerns and claims about certain kinds of Latin don’t play out in reality. Still, it’d be good to have more scores, not just the 532 currently submitted to that ALIRA form. If this all seems mysterious, it kind of has been. I haven’t shared the spreadsheet yet for viewing. That changes today!


A Year Of Grading Research: 30 Articles, 8 Books, 1 Pilot Study

You’re looking at my school desk. There’s some wormwood lotion for our desert-like winter classroom conditions here in New England, some peacock feathers (why not?), one of the deck prisms my great grandfather made in his line of work, the growing collection of my ancient wisdom series obsession, and what remains of this year’s unread novella order. What’s not there is the stack of articles and research reports that had been piling up since last spring. I’ve finally read them all during my planning periods. Of course, each report itself produced at least another to read, and often two or three more, making the review process more like attacking a hydra, but those are now tucked away in a “To Read/Review” folder in Drive. My desk is clear, and that’s enough of an accomplishment for me while teaching full-time. Aside from the reports, I’ve read 8 books, too:

  • Hacking Assessment 1.0 & 2.0 (Sackstein, 2015 & 2022)
  • Ungrading (Blum, 2020)
  • Point-less: An English Teacher’s Guide to More Meaningful Grading (Zerwin, 2020)
  • Proficiency-Based Instruction: Rethinking Lesson Design and Delivery (Twadell et al., 2019)
  • Embedded Formative Assessment (Wiliam, 2018)
  • Assessment 3.0 (Barnes, 2015)
  • Grading and Reporting Student Progress in an Age of Standards (Trumbull & Farr, 2000)
  • Punished By Rewards (Kohn, 1993)

In case you’re wondering about my current top five: alongside Grading for Equity (Feldman, 2018), which I read a couple years ago, it’d have to be Ungrading, Point-less, Punished by Rewards, and Hacking Assessment. Beyond the books, this year I also completed a small-scale pilot study, which I’ll be presenting at the CANE Annual Meeting. While not specific to Latin teaching, a case could easily be made that *any* grading research can apply to *every* content area. In fact, it’s somewhat remarkable what researchers have found, yet the profession just doesn’t seem to know. And there’s consensus. I’m not prepared to make sweeping claims or cite anything specific, but my impression of the consensus so far is:

  • Grading does more harm than most people think. It’s one of the few relics of antiquated education still practiced en masse today, in pretty much the same way, too. Consider everything that’s changed for educators in the past two, five, 10, 20, even 50 years; the current dominant grading paradigm predates all of it. The fact that most grading systems are still based on the 0–100 scale, with a “hodgepodge” of assessment products averaged together to arrive at a course grade, is nothing short of astonishing.
  • Schools with a more contemporary (i.e., 30-year-old) approach that claim to have standards-based learning (SBL) and grading (SBG) systems are actually still in their infancy, with some not really implementing the systems with much fidelity at all, thus giving a lot of SBG-derived or SBG-adjacent practices a bad name. It’s mostly teacher/school misinterpretation and poor rollouts of these practices that render the efforts ineffective, not the practices themselves.
  • Gradelessly ungrading is probably the only sure bet for fixing the mess that grades have gotten us into. If you’re putting all your time and effort into SBG, I recommend that the second you understand the basics, you see if you can skip right on over to a) using portfolios, b) getting rid of all those points, and c) having students self-assess & self-grade just once at the end of the term. You’re gonna need to provide a bit of feedback with this kind of system, too, so maybe try Barnes’ SE2R model.

Translating Isn’t The Problem

When the updated Standards for Classical Languages were shared, one key difference was the near-omission of the word “translating” as an active task: it’s mentioned just once in a description of advanced learners at the postsecondary level (i.e., “Learners conduct research in the target language or assist in the translation of resources for the benefit of others.”), and then appears in just one example learning scenario, submitted by a university professor. Granted, these standards have been in draft form—somehow—since 2017, but Latin teachers have been lauding that lack of “translation,” preferring nowadays that students focus more on reading Latin than doing translation exercises. However, it turns out that translating, per se, isn’t the problem…


Quizzing For Learning vs. Quizzing To Get A Grade

I was talking to a colleague about an assessment idea I had. The scenario began “if I were a math teacher…,” but really, this idea applies to anyone who gives quizzes. Many teachers I observe who assess like this usually hang out at their desks while students take the quiz. Sometimes it’s timed. Sometimes there are “after the quiz…” instructions on the board. In the literature, this is called an obtrusive assessment, with class on pause, sometimes the entire time.

So, if I were to ever assess like that, instead of hanging out at my desk, I’d circulate the room, stopping at each student to point out a quiz item they should review (e.g., “Ja’den, spend more time on #3”). And I’d do this the entire time, just walking around, essentially doing all the correcting I would’ve done during my planning period, and even providing some feedback. It’s kind of like a more involved individualized Monitor Assessment. My colleague was wondering how this “real-time rolling assessment” would really show what students know and can do. We talked a bit. Questions came up, like “with so much scaffolding, how do we know the student can do anything on their own?” The truth is, they might not, but how is that any different? In fact, during that whole discussion I forgot to consider what the “real-time rolling assessment” was being compared to. That is, how is a give quiz/collect/correct/hand back procedure any different, really, for finding out what a student knows or can do?

It comes down to process.


Punished By Rewards & Advantages To The Single-Point Rubric

As if researching how to eliminate grading and reduce assessment couldn’t get much better, I’ve now got something else. Alfie Kohn’s 1993 masterpiece really ought to be required reading for every educator. Now coming up on its 30-year anniversary, the book reviewed studies dating as far back as the 1960s. This post is gonna focus on self-assessments. When it comes to students self-assessing, evidence suggests that the more students think about HOW WELL they’re doing (vs. WHAT they’re doing), the worse they do it.

That’s crazy-unintuitive, right?!


Grades: Going, Going, Gone!

Here’s a quick report on going nearly 100% gradeless. I say nearly because at my school, the halfway point of the quarter (i.e., progress reports) requires a grade. So, as of right now, there’s a course grade that shows up. This practice isn’t quite in line with a true ungrading approach, which would have a grade only at the very end of the grading period. I’m nearly there, and I have a feeling this is as far as I’ll go, too. But that’s not a problem. There’s already been a big difference in the most important areas, and I expect things to get even better.


Averaging & Delayed Assessments

My interest in assessment & grading began shortly after the first few months of teaching right out of grad school. I noticed that some students did well with the content from the first few textbook chapters, but others didn’t do so well at all, and thus began the year with low self-efficacy that was hard to turn around. By November, I realized that students were comfortable with the vocabulary and grammar from the first few chapters of the textbook. Then it hit me: if I had just delayed those first assessments by a month or so, ALL STUDENTS would have aced them! What’s more, the students who actually improved had that lower 1st quarter grade (e.g., C) averaged with the new, higher grade (e.g., A), producing a skewed reflection of their ability (e.g., B). None of this made sense; I was playing gods & goddesses with my students’ GPA.
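To make that arithmetic concrete, here’s a quick sketch (the student, grades, and 4-point scale are invented for illustration) comparing an averaged course grade with one based on the most recent evidence of ability:

```python
# Hypothetical grade points on a common simplified scale: A=4.0, B=3.0, C=2.0.
# A student who started slow (C in Q1) but reached mastery (A in Q2).
quarter_grades = [2.0, 4.0]

# Traditional averaging: the early C permanently drags the course grade down.
averaged = sum(quarter_grades) / len(quarter_grades)
print(averaged)  # 3.0 -> reported as a B, despite current A-level ability

# Proficiency-style reporting: use the most recent evidence instead.
most_recent = quarter_grades[-1]
print(most_recent)  # 4.0 -> reported as an A
```

The averaged B reflects neither where the student started nor where they ended up, which is exactly the skew described above.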

I began researching how to arrive at a course grade that actually reflected ability—not just the averaging I was familiar with and somehow never questioned (nor was ever taught about in grad school). I spent months reading up on grading from experts like Marzano, O’Connor, and even some stuff from Alfie Kohn. I moved towards a system that showed where students were at the very moment of the grading term’s end, without penalizing them for understanding the content slowly at first, or for having those bad days that students inevitably have. This was how I came to use Proficiency-Based Grading (PBG), and subsequently the kind of no-prep quizzes that haven’t added anything to my planning time in years.

If you’re ready for that, hooray! If not, at least consider 1) NOT averaging grades, as well as 2) delaying your assessments until students have already shown you that they understand the content!