Survey Says…Kids Like Self-Assessment! (et cētera)

Considering how impersonal the year felt, the responses from this end-of-year survey support an early prediction many of us had: learning and growth/development would take place this year after all, though certainly in a different form than what we’ve seen in the past. To be clear, “learning loss” is a myth, and you should stop anyone who tries to talk about it dead in their tracks. You simply cannot lose what you never had in the first place. It was a talking point used to get kids into schools ASAP, and nothing more. If students, or even just their learning, were truly the priority, the conversation would be about improving living conditions for families at the societal level, as well as fully funding our public schools.

Anyway, let’s start with the first question on my mind: grading. I’ve settled on my system after experience with a LOT of different ones, but what about students? The open-ended responses explaining what kind of grading students prefer are quite genuine. Scroll through the slideshow to see:

Continue reading

Core Practices

I got thinking about what I’d say my core practices were if anyone wanted to learn more about CI and get an overview of what comprehension-based and communicative language teaching (CCLT) looks like. Would it be a list of 10? Could I get that down to five? Might it be better to prioritize some practices like the top 5, 8, and 16 verbs (i.e. quaint quīnque, awesome octō, and sweet sēdecim)? Would I go specific, with concrete activities? Or, would I go broad and global, starting with principles and ideas?

I highly recommend doing this as an exercise during a planning period this week: make a quick list of your core practices. Doing so required me to sort out a few things, and helped organize and align my practices to certain principles. Of course, terms and definitions can get tricky here. I just saw that Reed Riggs and Diane Neubauer refer to “instructional activities (IA),” which covers a lot of what goes on in the classroom. It’s a good term. I’m using “practices” in a similar way to refer to many different methods, strategies, techniques, and activities that all fall under a CCLT approach, as well as general “teacher stuff” I also find to be core.

Another reason for this post is that I’ve seen the “CI umbrella” graphic shared before, but that doesn’t quite fit with my understanding of things. Rather than practices falling under a CI umbrella, I envision CI instead as the result of practices under the umbrella of CCLT. I also consider such an approach a defense against incomprehensibility—the first obstacle that needs to be removed—and I thought a more aggressive graphic of a “CI shield” might best represent that.

Here’s the first line of core practice defense:

Continue reading

100% Coverage ≠ 100% Comprehension

A question by a member of the Latin Best Practices FB group prompted me to look into text coverage, which ultimately led me to comprehension. These are two ideas that a lot of people have misinterpreted, much like the “4%er” figure, and even “90% target language use.” I’m thinking people have a hard time with mathematical concepts, and maybe we should avoid percentages moving forward. But first, let’s undo some of the damage already done by looking at simple examples right away:

Text Coverage
Text coverage is measured by tokens. There are five tokens in the sentence “the bird sees the cat.” Two of the tokens in that sentence happen to be the same word. Therefore, “the” represents 40% text coverage. If the reader doesn’t know “the,” they have a text coverage of 60%. The reader who knows everything except “cat” would have a text coverage of 80%.

Comprehension
Comprehension is a different idea entirely. If the reader who doesn’t know “cat” were asked “what does the bird see?” and it were scored, they’d have a comprehension score of zero. If they were asked two questions about the bird, and two questions about the cat, their score would be 50% comprehension with their 80% coverage of the text. Not the same thing.
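The toy numbers above are easy to check directly. Here’s a minimal sketch of that check; the sentence, the known-word sets, and the four-question quiz are the hypothetical examples from above, not a real assessment tool:

```python
# Text coverage is measured over tokens, not unique words.
text = "the bird sees the cat"
tokens = text.split()  # 5 tokens; "the" appears twice

def coverage(tokens, known):
    """Share of tokens the reader knows."""
    return sum(t in known for t in tokens) / len(tokens)

# A reader who knows only "the" covers 2 of 5 tokens:
print(coverage(tokens, {"the"}))  # 0.4 -> 40% text coverage

# A reader who knows everything except "cat":
known = {"the", "bird", "sees"}
print(coverage(tokens, known))  # 0.8 -> 80% text coverage

# Comprehension is scored separately: 2 questions about the bird
# (answered correctly) and 2 about the cat (missed entirely).
correct, asked = 2, 4
print(correct / asked)  # 0.5 -> 50% comprehension despite 80% coverage
```

The last two lines are the whole point: the same reader scores 80% on coverage and 50% on comprehension, because the two measure different things.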

Reading
Laufer et al.’s research shows that learners need a text coverage—not comprehension—of 98% ideally to read with ease (and 99-100% whenever possible), but that’s just getting through the reading. That 98% figure is just the start of comprehension.

Hold up.

Yeah, that’s right. Knowing 98% of a text—STOP!!—remember the first section on tokens: it’s not 98 out of 100 different words, but 98 out of 100 tokens (i.e. some words probably repeat). So, knowing 98% of a text doesn’t even guarantee comprehension of what is read. That’s quite the trip, isn’t it? It gets worse when we look at some findings from one of Eric Herman’s Acquisition Classroom Memos on exactly how [in]comprehensible reading can get even with what seems like decent text coverage.

There’s a lot in that chart, but compare the text coverage to the comprehension scores. Even 95% text coverage can yield woefully low comprehension (55%). Keep in mind that the higher scores are still in the “most” range, as in learners are understanding most of what they read when they know 95%+ of a text. Also, those vocabulary sizes are incredibly high for what the majority of K-12 teachers should expect from their students. Eric also adds some context to the research:

“For the most part, the above reading studies were done with high proficiency students, ungraded and academic texts, and count word families. A reasonable prediction is that even higher text coverage and vocabulary size numbers are required to enable adequate comprehension of graded texts by lower level proficiency students. And this is not considering levels necessary for a confident and pleasurable reading experience, which would undoubtedly be even higher!”

Higher would be 100%. Let’s make sure we set the record straight:

  • Students need to know 98% of a text to read it with ease.
  • Reading with ease from knowing 98% of a text can still result in much lower comprehension scores, like 70%.
  • Coverage ≠ comprehension

Providing students with texts of 98%, even 100%, coverage of known words is step zero: the bare minimum required for students to read with ease at high levels of comprehension. It also turns out that text coverage alone isn’t the number to fixate on, because even knowing 100% of the words doesn’t guarantee 100% comprehension. It all goes back to vocab as top priority: shelter whenever possible so gradual exposure to new words increases vocabulary without the burden of incomprehension. What does this mean for class? Probably using even fewer words than you think! Students can’t magically learn thousands of words, so if we expect them to comprehend high levels of what they read—especially during any kind of independent reading—we must use and create texts with a very limited number of words.

The Problem Is Vocab, Not Grammar

This post is not about teaching grammar. This post is about its role in comprehension. Grammar can tell you a word’s function, but what impact does that have if you’re struggling to understand what words mean?! It’s still all about words. In fact, all words contain grammar. If you know what a word means, you’re a little bit closer to acquiring its grammar each time you encounter it. In this post, I use a language I’ve made up for other demonstrations, aptly dubbed Piantagginish, to show how vocab—not grammar—is the real problem regarding comprehension. The pedagogical takeaway is to avoid vocab overload, and shelter vocab whenever possible…

Continue reading

Vocab Lists: Sheltering, Grammar Audit, and Creativity

**Updated 8.19.20 – The DCC core list of top 1000 Latin words has just 100 cognates.**

sīgna zōdiaca Vol. 1 was published at the end of July, bringing the total vocabulary found throughout the entire Pisoverse novellas to 737 unique words, 316 of which are found on the DCC core list, and 319 of which are cognates (see my last post on cognates), including 52 found on the DCC core list (i.e. Pisoverse cognates account for over 50% of the total DCC cognates). That vocabulary size is quite low for what is now almost 50,000 total words of Latin for the beginner found in 19 books. This is what is meant by sheltering (i.e. limiting) vocabulary. Of course, that sheltering didn’t just happen by chance. There have been many decisions about what to keep and what to let go; the process has been deliberate, and at times methodical. In this post, I share ways to shelter vocab in novellas, and how those same practical steps apply to more informal writing done in the classroom with students…
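Stats like those fall out of plain set operations on word lists. A minimal sketch, assuming you have each book’s vocabulary and a core list as sets of lemmas (the tiny lists below are made-up stand-ins, not the real Pisoverse or DCC data):

```python
# Count unique words across novellas and their overlap with a core list.
# These word lists are toy stand-ins, not the real data.
novella_vocab = [
    {"puella", "canis", "videt", "et"},   # book 1
    {"puella", "equus", "currit", "et"},  # book 2
]
core_list = {"puella", "et", "videt", "sed"}

all_vocab = set().union(*novella_vocab)  # unique words overall
on_core = all_vocab & core_list          # overlap with the core list

print(len(all_vocab))  # 6 unique words across both books
print(len(on_core))    # 3 of them also on the core list
```

With real lists (one lemma per line in a text file per book), the same two lines of set math produce the “737 unique words, 316 on the DCC core list” kind of figure.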

Continue reading

Vocab Overload

This is the time of year when it becomes obvious how much students have not acquired. That is, words not even remotely close to the most frequent of the most frequent are almost completely incomprehensible when they appear in a new text.

That’s OK.

Perhaps you’ve already experienced this earlier in the year. Perhaps it’s coming. Either way, it’s important to recognize that falling back to the old mindset of “but we covered this?!” is *not* going to fly in a comprehension-based and communicative language teaching (CCLT) approach. To clarify: understanding in the moment is CI, and exposure to CI over time results in acquisition. For example, a text so comprehensible that all students can chorally translate it with ease one class might have a handful of topic-specific vocab. Even though there could be an entire class, maybe even an entire week of exposure, topic-specific vocab that isn’t recycled throughout the year has a very low chance of being acquired and comprehended in new texts. **Therefore, students can experience vocab overload even in classes with high levels of CI.** That applies to “big content words,” like all the vocab needed to talk about Roman kings. Now consider function words, like adverbs, conjunctions, particles, etc. that hold very little meaning on their own. Those have almost no chance of being understood unless they keep appearing in texts.

Of course, we cannot recycle all previous words in every new text, which is why acquisition takes so long. Naturally, the least frequent words fall off and out of bounds, and only students with the spongiest of memories have a shot at acquiring those. However, we cannot expect from most students what only a few can do. Instead, we must anticipate what will happen when vocab spirals out beyond the possibility of being recycled, and address that before it happens. Here are ways to address vocab overload when providing texts:

  • Dial things back as much as you can, focusing on the top most frequent & useful words.
  • Write a tiered version, or embedded reading for every new text, even if that new text is very short.
  • When possible, use a word more than once, and in different forms. Fewer meanings across more forms (e.g. ran, runs, will run, running) have a greater chance of being understood than many meanings in one form focused on a grammar feature (e.g. ran, ate, laughed, said, carried, was able, were).
  • If a function word is important, use it a lot (e.g. the more recent “autem” has no chance of being understood if you keep using “sed”).
  • If a message can be expressed in one very long sentence, break it into two or more shorter ones, restating subjects, etc. for clarity. Then, repeat the full message with a function word (e.g. “therefore,…so…”).
  • When expanding vocabulary with synonyms, especially when beginning with cognates, consider glossing with the previous word (e.g. if you began the year with “studēns,” each text that now has “discipula” could have “( = studēns)” after the first instance in that text; continue using “discipula,” but use “studēns” to clarify meaning when needed).

Input Analysis & Textbook Comparison

One universal thing we can discuss with any language teacher is awareness of how much target language we’re giving students (I, Input), how well they understand (C, Comprehensibility), and the reason for doing an activity (P, Purpose). In fact, this focus is central to our school’s Latin department, and keeping track of input is part of my teacher eval goal.

I covered an ELA teacher’s class last Friday, which meant the most productive thing to do was complete some kind of menial task. It just so happened that counting up words is exactly that. So, I compared the input my Albāta class students have received to the Latin found in the first four stages of Cambridge. N.B. I chose the Albāta class section because they’ve read the most total words of all class sections (i.e. totals ranged from 1616 to 1755).

Indeed, Albāta students received about 57% more input than Cambridge provides (1755 vs. 1117 total words). Surprisingly, though, the unique word count was also higher, by about 31% (221 vs. 169). I wouldn’t have expected that, given my intent to shelter (i.e. limit) vocabulary unlike what is found in textbooks, so let’s take a look…
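For the record, here’s the “percent more” arithmetic, since it’s easy to compute it backwards (dividing the difference by the larger number gives the smaller “percent less” figure instead):

```python
def percent_more(a, b):
    """How much bigger a is than b, as a percentage of b."""
    return (a - b) / b * 100

print(round(percent_more(1755, 1117)))  # 57 -> ~57% more total input
print(round(percent_more(221, 169)))    # 31 -> ~31% more unique words

# Dividing by the larger number instead gives the "percent less" view:
print(round((1755 - 1117) / 1755 * 100))  # 36 -> Cambridge has ~36% less
```

Both views are true of the same numbers; just be clear about which base you’re dividing by.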

Continue reading

First Text: A Year To Year Comparison

After the first orientation day of just 12-minute “classes,” I typed up statements using the drawings students did while responding to “what do you like/like to do?” Even though I followed the same plan for the first day as last year, the far better execution of it this year has been…well…crazy.

Last year, each class section read just 50 total words of Latin (10 unique words). This year? There are 520 total words using 54 unique words (17 of which are cognates)!!!! Yeah. That’s how much Latin I’ll be able to provide this week after just one very brief meeting, and a decent number of hours writing/typing. Oh, and I’m not keeping track of that kind of work at this point in the school year; I’m doing what I need to do to start off in a calm and confident manner, putting in whatever extra time beyond the school day I need.

So, how does this year end up including SOOOOO much more input?! First of all, I made sure every 9th grade student was included in the text, clearing the time needed to write about them. Otherwise, I updated a few things. This post looks at those changes…

sample of 2018-19 first text
sample of this year’s first text

The differences you can probably see between the two comparison pics are the following…

Continue reading

Sheltering Vocab & Unsheltering Grammar: 2018-19 Stats

I’ve had a lot of prep time for a couple years now. How?! Not because of my teaching schedules, but because I constantly streamline practices to ensure I can actually complete my work during the workday. Most of this time is spent typing up class texts for students, as well as researching teaching practices online. Last week, however, I spent waaaaaay too much of that prep time crunching numbers with voyant-tools.org. Here are some insights into the vocab my students were exposed to this year throughout all class texts, and 8 of my novellas (over 45,000 total words read!). N.B. this includes all words read in class except for those appearing in the first 6 capitula of Lingua Latīna Per Sē Illustrāta, which we read at the very end of the year. The stats:

  • 550 unique words recycled throughout the year (there were 960 total, but 410 appeared just a handful of times!)
    • 30% came from the first 8 Pisoverse novellas (Rūfus lutulentus through Quīntus et nox horrifica), and were not found in class texts.
    • 290 appeared in at least a few forms (i.e. not only 3rd person singular present for verbs, or nominative/accusative for nouns).
  • 2470 different forms of words (grammar!)
    • 45% came from the 8 Pisoverse novellas, not class texts.
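The distinction behind those two counts (unique words vs. different forms of words) is what a frequency tool like Voyant surfaces, and the idea can be sketched with a Counter. The lemma lookup below is a made-up toy; real Latin lemmatization needs a proper tool:

```python
from collections import Counter

# Distinct forms vs. unique words: several forms map to one lemma.
lemma_of = {  # toy lookup table, not real lemmatization
    "videt": "videre", "vident": "videre", "vidit": "videre",
    "puella": "puella", "puellam": "puella",
}
tokens = "puella videt puellam vident videt vidit".split()

forms = Counter(tokens)                        # distinct forms (grammar!)
lemmas = Counter(lemma_of[t] for t in tokens)  # unique words

print(len(forms))   # 5 distinct forms
print(len(lemmas))  # 2 unique words
```

Scaled up, this is how 550 recycled unique words can correspond to 2470 different forms: each word a student knows shows up dressed in many grammatical forms.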
Continue reading