Comprehension Establishers & Question Types/Possibilities

I end up learning at least one thing each year from my student teachers, whether it’s some insight while observing, some reflection when we’re planning, or some new activity or strategy they suggest. Here’s a revelation worth looking into…

When scripting out some questions back in October, one example I gave was asking "class, which word means 'again': aliquid or iterum?" After a few more like this, my student teacher said "oh, it's kind of like a comprehension check acting as a comprehension…establisher." I paused for a moment, then realized yes, that's exactly what that is. She put a name to what I've been doing for years, going way back to the 2016 sneaky quizzes when I'd use the T/F statements to establish the meaning of words.

Comprehension Establishers establish meaning in the form of a question.

The difference in purpose between comprehension checks and establishers is subtle. Establishers aren't intended to evaluate student understanding. They're asked in a way that all but guarantees students make a form-meaning connection (e.g., "What word means 'obscure,' nocte or obscūra?"). A comprehension check, however, is often exactly that: a check of whether a student understands, and if they don't, then we establish meaning right away. In that sense, can an establisher bypass the check and establish meaning directly? Absolutely, but then there's variety to consider. Might as well get some experience with both.

Question Types/Possibilities
While scripting out some questions, it also became clear to me that there are often too many possibilities. Instead of brainstorming every possible question, it's probably more beneficial to settle on a couple of question types and cycle through them while reading. For example, using one sentence, Mārcus ōrdinārius esse nōn vult ("Marcus doesn't want to be ordinary"), we could ask each of the following:

Contrary-To-Fact Personalized Q: vellēsne esse ōrdinārius?
Comprehension Establisher Q: Which word means “to be,” esse or vult?
Comprehension Check Q: What does esse mean?
Content Q: What does Marcus not want?

But should we ask that many questions for one sentence? If so, should we ask all four questions for EVERY sentence in the chapter? I'm thinking "no," and "no." While on the one hand it would appear to provide the student with a great deal of support, on the other hand this process would drag out quite a bit. My recommendation is to ask just ONE of those question types PER sentence and see how it feels. You might find that even one question per sentence ends up being too many while reading. If so, scale it back to one question per section of two to three sentences, and then just cycle through the four question types. For example, if a short chapter has eight sections of sentences, you'll ask a Comprehension Establisher Q, a Comprehension Check, a Contrary-To-Fact Personalized Q, a Content Q, and then repeat. My advice is to identify the Contrary-To-Fact Personalized Qs first, since it doesn't always make sense to ask those. Then, fill in the rest. Print these out, and stick them in the book you're reading. Remember, even unused scripts have already served a purpose: getting you thinking about how and what to ask students.
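If it helps to see the cycling laid out while planning, here's a minimal sketch of the round-robin idea in Python. The function name, the question-type order, and the eight-section chapter are just illustrations of the scheme described above, not anything prescribed:

```python
from itertools import cycle

# The four question types, in the order they're cycled above (illustrative).
QUESTION_TYPES = [
    "Comprehension Establisher Q",
    "Comprehension Check",
    "Contrary-To-Fact Personalized Q",
    "Content Q",
]

def assign_question_types(num_sections):
    """Round-robin one question type per section of two to three sentences."""
    types = cycle(QUESTION_TYPES)
    return [next(types) for _ in range(num_sections)]

# A short chapter with eight sections cycles through the list twice.
for section, q_type in enumerate(assign_question_types(8), start=1):
    print(f"Section {section}: {q_type}")
```

With eight sections, each type comes up exactly twice, so a printed plan like this makes it easy to pencil one question into the margin of each section before class.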

Abbi Holt’s Wisdom On Sustainable & Equitable Teaching

Abbi Holt had a great thread sharing a 6-year progression of practices that have made even teaching through a pandemic tolerable! Here it is with some commentary on why you should look into doing similar things, if not the very same…

Why This Works #1
Not only is moving slowly and steadily a more responsive approach to teaching learners in the room, but reducing grading is crucial in carving out space for everything that positively impacts teaching and learning.

Continue reading

Survey Says…Kids Like Self-Assessment! (et cētera)

Considering how impersonal the year felt, the responses from this end-of-year survey support an early prediction many of us had: that learning and growth/development would take place this year after all, though certainly in a different form than we've come to expect. To be clear, "learning loss" is a myth, and you should stop anyone trying to talk about it dead in their tracks. You simply cannot lose what you never had in the first place. It was a talking point used to get kids into schools ASAP, and nothing more. If students, or even just their learning, were truly the priority, the conversation would be about improving living conditions for families at the societal level, as well as fully funding our public schools.

Anyway, let's start with the first question on my mind: grading. I've settled on my system after experience with a LOT of different ones, but what about students? The open-ended responses explaining what kind of grading students preferred are quite genuine. Scroll through the slideshow to see:

Continue reading

TPRS, etc. & Interaction: Required

Here’s a quick note about TPRS (Teaching Proficiency through Reading and Storytelling) and other collaborative storytelling methods and strategies…

They require interaction.

This has become painfully obvious to me after teaching on Zoom for over a year in a public high school, where responses to polls are few, participation is low, and circling is next to impossible in most contexts (unless you happen to have surprisingly high levels of participation). I mean, we can certainly fake circling by doing something similar via those polls and the chat, but it moves a LOT slower than that in-person, question-after-question pace, complete with reading the room (i.e., "teach to eyes," etc.). On Zoom, the process gets bogged down. That's not circling. The point of circling is to give students a massive amount of exposure to a small set of words by asking many different questions that students can answer without hesitation. It's actually the answering of questions that's so key, not only to keep an eye on who might be getting lost (and then ask a "what does X mean?" comprehension check), but also to get the details you ask for, as well as the surprise responses that can take the story in an unexpected yet highly compelling direction. Hence, interaction.

Yet "interaction" can be woefully misunderstood and misinterpreted to mean full-on conversations. That's not what we need with collaborative storytelling at all. We need to provide students messages in the target language via a process that might feel ad nauseam to us, but is probably just enough (or maybe not quite enough!) for the beginner. That happens through questions, statements, and then restating everything that happens.

THAT kind of interaction is crucial. Other types of interaction might occur, or even prove to be beneficial in certain cases, especially in other activities, but without student responses during collaborative storytelling—not just the ones that get details—we got nothing.

Core Practices

I got thinking about what I’d say my core practices were if anyone wanted to learn more about CI and get an overview of what comprehension-based and communicative language teaching (CCLT) looks like. Would it be a list of 10? Could I get that down to five? Might it be better to prioritize some practices like the top 5, 8, and 16 verbs (i.e. quaint quīnque, awesome octō, and sweet sēdecim)? Would I go specific, with concrete activities? Or, would I go broad and global, starting with principles and ideas?

I highly recommend doing this as an exercise during a planning period this week: make a quick list of your core practices. Doing so required me to sort out a few things in the process, and helped organize and align my practices to certain principles. Of course, terms and definitions can get tricky here. I just saw that Reed Riggs and Diane Neubauer refer to "instructional activities (IA)," which covers a lot of what goes on in the classroom. It's a good term. I'm using "practices" in a similar way to refer to many different methods, strategies, techniques, and activities that all fall under a CCLT approach, as well as the general "teacher stuff" I find to be core.

Another reason for this post is that I’ve seen the “CI umbrella” graphic shared before, but that doesn’t quite fit with my understanding of things. Rather than practices falling under a CI umbrella, I envision CI instead as the result of practices under the umbrella of CCLT. I also consider such an approach a defense against incomprehensibility—the first obstacle that needs to be removed—and I thought a more aggressive graphic of a “CI shield” might best represent that.

Here’s the first line of core practice defense:

Continue reading

Comprehension Checks as MGMT

Classroom Management is paramount. Without it, none of the strategies to provide students with CI stand a chance. They don’t stand a chance because students who aren’t paying attention aren’t receiving any input (I) at all, let alone input that’s comprehensible (C)! Of aaaaaaaall the systems in place to manage the classroom, though, comprehension checks are probably the most effective, yet most overlooked…

Continue reading

NTPRS 2017 Takeaways

Before having the opportunity to present a couple workshops, my mind was blown quite sufficiently during the week. Overall, the Advanced Track with Alina Filipescu and Jason Fritze got me thinking about aaaaaaaall the things I’ve forgotten to do, or stopped doing (for no good reason) over the years. Thankfully, most of them are going to be soooooo easy to [re]implement. As for the others, I’ll pick 2 at a time to add—not replace—until they become automatic. This will probably take the entire year; there’s no rush!

Jason referred to high-leverage strategies—those yielding amazing results with minimal effort (i.e., juice vs. squeeze), and I'm grateful that he called our attention to everything Alina was doing while teaching us Romanian. Ce excelent (how excellent)! I'll indicate some high-leverage strategies, and will go as far as to classify them as "non-negotiable" for my own teaching, using the letters "NN." I'll also indicate strategies to update or re-implement with the word "Update!" and those I'd like to try for the first time with the word "New!" I encourage you to give them all a try. Here are the takeaways organized by presenter:

Continue reading

Stultus: Crowd Control

“Stultus” is not a word I want thrown around class. Sure, it’s National Bullying Prevention Month, but the fundamental reason why I can’t have students yelling at me when I make a mistake or error as part of the comprehension activity is because mistakes and errors are welcome in my class. I would be sending the wrong message, however gratifying and novel it might seem to call the teacher “stupid,” if I allowed that in my room. N.B. I must emphasize MY room, because I know that Stultus works out just fine elsewhere.

So, my adaptation has been to rename the activity "Magister Mendāx!" The process is the same, but the results are a bit more suited to me and my students. When I say something that's simply not true (e.g., "Trump reads a lot" when the Latin reads Trump nōn legit), the students yell out "liar!" I like that the adaptation is not a judgment of my ability: I don't have to pretend to not understand, and the students still get a fun word we can use in class and in stories.

Why “do you understand?” is pointless to ask…

Language teachers usually ask this when something indicates that a student didn't understand (e.g., a verbal "huh?" or a non-verbal deer-in-headlights expression). If that has already happened, asking the question serves no purpose. In fact, it might even make matters worse by putting the student on the spot. The student will likely answer "yes, I understand" just to get the teacher to move on to someone else. Here are some comprehension check alternatives:

1) Did I just say/ask X?
2) Did I just say/ask X or Y?
3) I just said/asked ____.
4) What did I just say/ask? -or- Who can tell me what I just said/asked?

The alternatives above are arranged by questioning level from low to high (i.e., yes/no, either/or, fill-in-the-blank, open-ended). The questions could certainly be asked in the target language, but one popular strategy is to ask, in English/the native language, "what did I just say/ask?" of a so-called barometer student, i.e., the one with the slowest processing speed. This popular strategy is interesting because that kind of question is technically harder to answer than "did I just say/ask X?" It's probably a non-issue because we're dealing with the native language, but for the sake of variety, or if you find that your barometer students are struggling, you could start asking those lower-level comprehension checks in English/the native language as well as the classic "what did I just say/ask?"