Flex Time & Google Days

“You teach the kids you have.” I like this nugget of wisdom. It doesn’t matter if previous classes of students did this or that. Everyone must teach the students in the room, not anticipated students or former students. Sometimes what the students in the room don’t know can be surprising, but the only thing that matters is what we do about it. For example, I’ve been perplexed by the lack of digital literacy I’ve been seeing in incoming 9th grade students. Rather than shake my head and pretend that lack of skill isn’t my problem, I’m going to do something about it. I’m going to do something even if it has little to nothing to do with Latin. Why? Because I teach the kids I have, and these kids need to be able to navigate Google Classroom, and I’m tired of pretending it’s fine. The plan? Each week, students will have 20 minutes to organize their learning after another 20-minute independent learning session. The latter part isn’t really new, so let’s start with that:

Flex Time
This independent learning time worked out really well last year. I checked my planning doc and saw that between December and June we had Flex Time a total of eight times. I’ve curated the options, most recently removing Quizlet since I find it less useful when not immediately followed by a whole-class Live session before reading the text. New for this year will be encouraging an ongoing project. Is the goal to read as many novellas as possible? Is the goal to work through an entire textbook? Is the goal to learn about a specific Latin-related topic? Instead of bouncing around the Flex Time options every few weeks or so, students will now choose an ongoing option for this new weekly routine every Wed/Thurs. Yes, they can switch if they really want to, just as long as they reflect on why (e.g., “I liked the idea of having textbook structure, but I think Caecilius is boring.”).

Google Days
The second half of Wed/Thurs each week gives students time to check feedback and submit learning evidence (Google Classroom) for Latin class. Once done, or if already caught up, the remaining time is for checking school email (Gmail) and responding to other needs, such as correspondence with teachers and/or completing other classes’ Google Classroom assignments. No, it does not bother me if a student ends up doing 8 minutes of math at the end of Latin. I’m teaching the students I have, and it’s clear that they need something like this. What I will do is make sure this rolls out smoothly. What I won’t do is hang out at my desk and overestimate my students’ independent learning capability. This kind of work with 9th grade requires heavy monitoring, not unlike the first minutes of independent reading. That is, if I think students are going to magically grab a book and be quiet on their own within 10 seconds, I’m fooling myself. Yet every time I take those first moments to ensure the majority of students—yes, majority, because we can’t have it all, all the time, everywhere, all at once—settle into a task, I’m rewarded with my own quiet time to read, with the occasional pause to look up, make eye contact, and stare down the kid who’s goofing off until they get back to the book. It works. You just have to commit to both: monitoring the room to get kids on task at the start of an activity, and being unwavering with a teacher look at the ready.

So, the second 20 minutes of Wed/Thurs is also for students to add learning evidence, submitting work from the previous week in addition to what they did during Flex Time. For example, they could attach a notebook pic from the Mon/Tues annotation task, as well as a statement about something they learned from their Flex Time findings, how much they read of a book, what they were working on, etc.

Pedagogical Immunity

Certain learners exist who possess what seems like complete immunity to whatever pedagogy they’re subjected to. College students are a good example. Professors rarely have pedagogical training, which is perhaps the most ironic thing about those in charge of training pre-service primary and secondary teachers, but most college students are able to persist through a lack of solid pedagogy. How? Using their interests, some independent learning skills, and a bit of determination. Polyglots are another good example. They’ll learn many languages under all sorts of conditions that don’t transfer to others, claiming they found “the secret,” yet relatively few who adopt their “methods” report success (except for other…polyglots!). Upon thinking this over, I realized that many high school students—and not just those studying a second language—are often pedagogically immune, too. These students manage to pass courses even when teachers have wacky pedagogy with unhelpful practices. Consider the teacher using some pre-fab curriculum with loads of busywork. Students will put up with all that busywork. They might not learn much, but they’ll earn credit, then graduate. In that sense, then, these students made it through. They were immune (though not to learning…which we’ll get to). They just made it past the next level. They…”succeeded.”


How To Ungrade Gradelessly In Two Steps

I’ve been told that going gradeless and ungrading are different. While that’s certainly possible, I haven’t seen a clear difference so far. That is, between blogs, Facebook groups, books, and the rare research report under either term (plus more), the similarities stand out way more than any notable differences. There’s quite a bit of consensus about reducing or eliminating grades, even among the most discerning of grading systems. Even a few systems that fall under a generic “standards-based” approach have basically the same features as those that fall under the “gradeless/ungraded” label. Whatever you want to call these approaches, this post will show you how to get rid of all the points, scores, and assignment grades while keeping the focus on learning. There are two basic steps:

  1. Have students put all their classwork, assignments, and assessments into a portfolio.
  2. Have students self-grade, citing evidence from the portfolio.

Documentation Of Participation vs. Evidence Of Learning

I came across a 1993 article on student self-reporting (Darrow et al.), and spent some time thinking about the idea that became the title of this blog post. As I’ve begun diving deeper into the “ungrading/gradeless” sphere of self-assessment, self-grading, and portfolios, I can say that at first I was pretty much getting the former (documentation of participation), not the latter (evidence of learning). Earlier this year, my student teacher and I spotted students uploading some questionable “learning evidence” into their portfolio, like notebook pictures with the day’s greeting copied from the board during the first five minutes of class.

This is not evidence of learning.

I’d go as far as to say it’s a stretch to even call this something like participating. Copying is the absolute lowest writing skill for first year high school language learners, and this 5-minute routine merely sets up actual participation once class really begins. So, that was obviously documentation of some kind (vs. evidence of learning), and we then steered students in a more productive direction: getting us evidence of learning. However, not everything students uploaded was as obvious. Take, for example, a Read & Summarize statement. Yes, the student was doing something in class, but was that necessarily doing anything for learning? It’s certainly possible, but just as likely not. The point here is that the difference between documentation of participation and evidence of learning really depends on the quality of what students add to their portfolio. If we just treat it as completion, that’s basically what we’ll continue to get: documentation of participation, which can actually lead to disengagement and lack of participation. As much as school can be school, kids really do find meaningless work worthless, and tend to find meaningful learning valuable. Even the cool kids. It’s important in a portfolio system to provide feedback on what students add so that you ensure meaningful learning occurs.

Easier said than done, but it’s time well spent.

As far as I can tell, there are only two ways to determine if what students add to their portfolio is, indeed, evidence of learning (and not documentation of participation). The first is an objective comparison to previous work, whether that falls on the teacher or the student, and the second is an honest rationale from the student’s end (explaining why what was added shows learning). I find the former tricky in a language class. For example, if you were to use the same text and have students keep submitting assignments based on it throughout the grading term, how sure are you that students are even processing the language anymore (vs. relying on a memorized English understanding of the text)? One cumbersome way could be to use a core set of vocabulary at the start of the term, and then write different texts with that same core set throughout the grading term that students interact with and complete assignments for. That might do the trick, but even then you’ve got to look at the students who ace the assignments in the beginning. How could they possibly show learning if they’ve already…learned…all that from the start? Also, a picture of a Quick Quiz result or something might just be participation, even if the student is showing you they understood all the Latin. Understanding Latin for 10 minutes during one class isn’t necessarily evidence of learning. Again, you’d need to compare those results over time to make the claim.

So, the comparison to previous work is tricky if not just time-consuming. That’s why I prefer getting students to write some honest rationales explaining why what was added shows learning. It’s all going to be individual anyway. Might as well embrace that.

Quizzing For Learning vs. Quizzing To Get A Grade

I was talking to a colleague about an assessment idea I had. The scenario began “if I were a math teacher…,” but really, this idea applies to anyone who gives quizzes. Many teachers I observe who assess like this usually hang out at their desk while students take the quiz. Sometimes it’s timed. Sometimes there are “after the quiz…” instructions on the board. In the literature, this is called an obtrusive assessment, with class on pause, sometimes the entire time.

So, if I were to ever assess like that, instead of hanging out at my desk, I’d start circulating the room, stopping at each student to point out a quiz item they should review (e.g., “Ja’den, spend more time on #3”). And I’d do this the entire time, just walking around, essentially doing all the correcting I would’ve done during my planning period, and even providing some feedback. It’s kind of like a more involved individualized Monitor Assessment. My colleague was wondering how this “real-time rolling assessment” would really show what students know and can do. We talked a bit. Questions came up like “with so much scaffolding, how do we know the student can do anything on their own?” The truth is, they might not, but how is that any different? In fact, during that whole discussion I forgot to consider what the “real-time rolling assessment” was being compared to. That is, how is a give-quiz/collect/correct/hand-back procedure any different, really, for finding out what a student knows or can do?

It comes down to process.


Punished By Rewards & Advantages To The Single-Point Rubric

As if researching how to eliminate grading and reduce assessment couldn’t get much better, I’ve now got something else. Alfie Kohn’s 1993 masterpiece really ought to be required reading for every educator. Coming up on its 30-year anniversary, the book reviewed studies dating as far back as the 1960s. This post is gonna focus on self-assessments. When it comes to students self-assessing, evidence suggests that the more students think about HOW WELL they’re doing (vs. WHAT they’re doing), the worse they do it.

That’s crazy-unintuitive, right?!


Comprehension Establishers & Question Types/Possibilities

I end up learning at least one thing each year from my student teachers, whether it’s some insight while observing, some reflection when we’re planning, or some new activity or strategy they suggest. Here’s a revelation worth looking into…

When scripting out some questions back in October, one example I gave was asking “class, which word means ‘again?’ Is it aliquid or iterum?” After a few more like this, my student teacher said “oh, it’s kind of like a comprehension check acting as a comprehension…establisher.” I paused for a moment, then realized yes, that’s exactly what that is. She put a name to what I’ve been doing for years, going way back to the 2016 sneaky quizzes when I’d use the T/F statements to establish meaning of words.

Comprehension Establishers establish meaning in the form of a question.

The difference in purpose between comprehension checks and establishers is subtle. Establishers aren’t intended to evaluate student understanding. They’re asked in a way that all but guarantees students make a form-meaning connection (e.g., “What word means ‘obscure,’ nocte or obscūra?”). A comprehension check, however, is often exactly that: a way to check whether a student understands, and if they don’t, then we establish meaning right away. In that sense, can an establisher bypass the check and go straight to establishing meaning? Absolutely, but then there’s variety to consider. Might as well get some experience with both.

Question Types/Possibilities
It also became clear to me, when scripting out some questions, that there are often too many possibilities. Instead of brainstorming every possible one, it’s probably more beneficial to settle on a couple of question types and cycle through them while reading. For example, using one sentence, Mārcus ōrdinārius esse nōn vult, we could ask each of the following:

Contrary-To-Fact Personalized Q: vellēsne esse ōrdinārius?
Comprehension Establisher Q: Which word means “to be,” esse or vult?
Comprehension Check Q: What does esse mean?
Content Q: What does Marcus not want?

But should we ask that many questions for one sentence? If so, should we ask all four questions for EVERY sentence in the chapter? I’m thinking “no,” and “no.” While on the one hand it would appear to provide the student with a great deal of support, on the other hand this process would drag out quite a bit. My recommendation would be to ask just ONE of those question types PER sentence and see how it feels. You might find that even one of those questions per sentence ends up being too many while reading. If so, scale it back to a question per section of two-three sentences, and then just cycle through the four question types. For example, if a short chapter has eight sections of sentences, you’ll ask a comprehension establisher q, a comprehension check, a contrary-to-fact personalized q, a content q, and then repeat. My advice is to identify the contrary-to-fact personalized q’s first, since it doesn’t always make sense to ask those. Then, fill in the rest. Print these out, and stick them in the book you’re reading. Remember, unused scripts already served a purpose: to get you thinking of how and what to ask students.
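For anyone who likes to prep these question scripts digitally, the cycling routine above can be sketched in a few lines of code. This is just an illustrative sketch (the function and type names are my own, not from the post): one question type per section of sentences, rotating through the four types in order.

```python
from itertools import cycle

# The four question types from the post, in the order they cycle.
QUESTION_TYPES = [
    "comprehension establisher",
    "comprehension check",
    "contrary-to-fact personalized",
    "content",
]

def plan_questions(sections):
    """Pair each section of a chapter with the next question type in the cycle."""
    return list(zip(sections, cycle(QUESTION_TYPES)))

# e.g., a short chapter with eight sections of sentences
sections = [f"section {n}" for n in range(1, 9)]
for section, q_type in plan_questions(sections):
    print(f"{section}: ask a {q_type} question")
```

With eight sections, the cycle repeats exactly twice, which matches the example in the paragraph above; you’d still want to hand-pick where the contrary-to-fact personalized questions actually make sense before filling in the rest.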

Start Here

The most useful professional development (PD) I’ve had over these past 10 years in education has been from presentations, workshops, and blogs that have given me a “start here.” It’s usually in the form of someone figuring out a really effective way to do something, then putting it into some kind of ready-to-go format, whether that’s a packaged method or a list of steps. The “start here” works because it’s the culmination of trial, error, and revision. The “start here” works because it represents the essential. When I’ve used someone else’s “start here,” it’s been really effective. Naturally, there’s adaptation and I’ve been able to put my own spin on things, but only after I’ve implemented whatever was presented to me. So what’s the problem?

Some teachers begin to change the “start here” right away.

For example, if I share a cocktail recipe with you called “The Lance Drink,” and upon seeing .25oz Sfumato in the ingredients you decide to just leave it out, you haven’t actually made The Lance Drink. You’ve certainly made a cocktail. It’s close, but something else. You’ve mixed together ingredients whose outcome is unknown…and there’s a good chance it might not turn out very well. Let’s say you love vodka. It’s in every cocktail you make, no matter what. When I give you my recipe, you sub vodka for The Lance Drink’s rye base. Why? That’s what you’ve always used. It’s what you’ve always done. So you mix…you sip…but you immediately spit it out because vodka is a horrible combination with the other ingredients. You might even say “gee, this Lance Drink isn’t so great.”

Teaching is a bit like that.

Instead of going with something tried and true, teachers tend to hold onto stuff that just doesn’t mix, not giving the “start here” a real chance. Sometimes, they might go as far as to claim that the “start here” doesn’t work (or whatever), mischaracterizing whatever was presented to them. In the worst of cases, other teachers who never got the original “start here” just listen to the ones who changed something right away, and shun the changed version before they can try the original, effective one.

The next steps—for anyone who works with these teachers—become searching for how to reconcile old principles in the changed version with the new ones that the original “start here” was based on. Sometimes there’s no solution. The principles are too conflicting. Sad. Yet it all could’ve been avoided by just taking the “start here” and rolling with it. I’ve actually heard back from teachers who’ve experienced both, mostly when it comes to grading practices. Instead of rolling with the “start here,” they tried some weird combo, thought things didn’t work, then gave up only to revert to old ways. Then, sometime later, they gave things another try—exactly how it was presented—and, come to find out, they were all of a sudden embracing the change. Again, it all could’ve been avoided.

So, in sticking with the metaphor, what’s your vodka? Let go of that, and why not give rye a try next time?

Grades: Going, Going, Gone!

Here’s a quick report on having gone nearly 100% gradeless. I say nearly because at my school, the halfway point of the quarter (i.e., progress reports) requires a grade. So, as of right now there’s a course grade that shows up. This practice isn’t quite in line with a true ungrading approach that would have a grade only at the very end of the grading period. I’m nearly there, and have a feeling this is as far as I’ll go, too. But that’s not a problem. There’s already been a big difference in the most important areas, and I expect things to get even better.


SBG: On Point With Assessment, Behind The Times With Grading

But first…earlier this week, I shared a recent post on using portfolios to grade equitably, and some dude characterized me as a cowardly, idealistic, privileged, and overeducated white savior who claims to have some solution to problems that minorities face. That’s a lot to unpack, and I’ll leave most of it alone. It’s true that I’m a college-educated white man, placing me in one of the highest privileged boxes possible. No one, though, is claiming to solve society’s inequity with a handful of grading practices in school. Perhaps more importantly, it’s downright naïve to think that teachers have no influence and to suggest that they can’t do something about a broken system. Grading is a systemic problem, it’s broken, and we’ve known about that for over 100 years (Rugg, 1918). Teachers should feel empowered to do something about it in the space they have control over: their classrooms (and possibly school).

I now just feel sad for that dude of so many words who wrote such uncalled-for ad hominems. I hope he finds a way to deal with whatever pain he’s going through. I’m gonna stick to using this admittedly privileged platform to share what I’ve been reading and learning about with a just-as-admittedly privileged background in education and a current Ph.D. pursuit. Hope you get as much out of it all as I have, and can use it to enact change wherever possible…

Standards, Assessment, Grading
We’ve been hearing about standards-based grading (SBG) for decades. It’s a massive improvement over whatever was going on in most classrooms prior to the 90s. Thing is, though, some educators have already moved beyond SBG in terms of grading. Ironically, standards-based grading is no longer the best option for grading! But that doesn’t mean it’s useless. We’ll get to that.

What’s been replacing SBG, though? It’s known as “ungrading.” But even in an ungraded system, teachers are still assessing. Assessments might not look like what you’d typically expect. Or, they’re pretty much the same, just with no points. Regardless, they’re certainly part of instruction as teachers and students focus more on learning content (and not points, scores, or grades). And a big part of that is standards.

Focusing on content probably involves standards in every case, even if a teacher doesn’t formally have a system of standards. That is, whatever the teacher expects of students, and whatever it takes to learn the content, could be and probably already is expressed as a standard, somewhere. Standards are a good way to organize learning. Within this framework, then, standards have a big role to play, just not in grading…
