I got to thinking about what I’d say my core practices were if anyone wanted to learn more about CI and get an overview of what comprehension-based and communicative language teaching (CCLT) looks like. Would it be a list of 10? Could I get that down to five? Might it be better to prioritize some practices, like the top 5, 8, and 16 verbs (i.e. quaint quīnque, awesome octō, and sweet sēdecim)? Would I go specific, with concrete activities? Or would I go broad and global, starting with principles and ideas?
I highly recommend doing this yourself as an exercise during a planning period this week: make a quick list of your core practices. Doing so required me to sort out a few things, and helped organize and align my practices to certain principles. Of course, terms and definitions can get tricky here. I just saw that Reed Riggs and Diane Neubauer refer to “instructional activities (IA),” which covers a lot of what goes on in the classroom. It’s a good term. I’m using “practices” in a similar way to refer to the many different methods, strategies, techniques, and activities that fall under a CCLT approach, as well as the general “teacher stuff” I find to be core.
Another reason for this post is that I’ve seen the “CI umbrella” graphic shared before, but that doesn’t quite fit with my understanding of things. Rather than practices falling under a CI umbrella, I envision CI instead as the result of practices under the umbrella of CCLT. I also consider such an approach a defense against incomprehensibility—the first obstacle that needs to be removed—and I thought a more aggressive graphic of a “CI shield” might best represent that.
This post is not about teaching grammar; it’s about grammar’s role in comprehension. Grammar can tell you a word’s function, but what impact does that have if you’re struggling to understand what words mean?! It’s still all about words. In fact, all words contain grammar. If you know what a word means, you’re a little bit closer to acquiring its grammar each time you encounter it. In this post, I use a language I’ve made up for other demonstrations, aptly dubbed Piantagginish, to show how vocab—not grammar—is the real problem regarding comprehension. The pedagogical takeaway is to avoid vocab overload, and shelter vocab whenever possible…
This September marks the fifth anniversary of the first two Latin novellas written with sheltered (i.e. limited) vocabulary for the language learner, by co-authors Rachel Ash & Miriam Patrick, and by Bob Patrick. There are now 70. That’s 0 to 70 in five years, and a whopping total of over 228,000 words of new Latin! What has the impact been? Let’s take a look…
sīgna zōdiaca Vol. 1 was published at the end of July, bringing the total vocabulary found throughout the entire Pisoverse novellas to 737 unique words, of which 316 are found on the DCC core list, and 319 of which are cognates (see my last post on cognates), including 52 found on the DCC core list (i.e. Pisoverse cognates account for over 50% of the total DCC cognates). That vocabulary size is quite low for what is now almost 50,000 total words of Latin for the beginner found in 19 books. This is what is meant by sheltering (i.e. limiting) vocabulary. Of course, that sheltering didn’t just happen by chance. There have been many decisions about what to keep and what to let go; the process was deliberate, and at times methodical. In this post, I share ways to shelter vocab in novellas, and how those same practical steps apply to more informal writing done in the classroom with students…
No, this does not describe a juniper- and coriander-based evening. Ginput is Grammar-based Input. Surprise! Yeah, I played this one pretty close to the vest this year. In fact, I began writing this post on June 13th, 2019, knowing it would be months until actually implementing and seeing any results from what was last year’s springtime idea.
What’s Ginput? The idea for Ginput came shortly after one of those frequent grammar debates online fizzled out. I still know that teaching grammar isn’t necessary, and I certainly won’t test grammar knowledge, but I also know that even really compelling things get boring throughout the year! I started wondering if grammar had a role to play, if only as a break from all the compelling stuff, especially since I had no plans to test or grade it. However, a question remained: “could grammar somehow be input-heavy?”
The Search for Grammar-based Input

Providing CI while teaching grammar is rare, so I began to think… “But what if teaching grammar weren’t the entire syllabus?” and “Could I explore Latin grammar with students knowing that our curriculum is based on their interests (i.e. NOT grammar) under a comprehension-based and communicative language teaching (CCLT) approach?” I was certainly onto something, but needed a resource for guidance. Oh wait, I wrote one…
This is the time of year when it becomes obvious how much students have not acquired. That is, words not even remotely close to the most frequent of the most frequent are almost completely incomprehensible when they appear in a new text.
Perhaps you’ve already experienced this earlier in the year. Perhaps it’s coming. Either way, it’s important to recognize that falling back to the old mindset of “but we covered this?!” is *not* going to fly in a comprehension-based and communicative language teaching (CCLT) approach. To clarify: understanding in the moment is CI, and exposure to CI over time results in acquisition. For example, a text so comprehensible that all students can chorally translate it with ease one class might have a handful of topic-specific vocab. Even though there could be an entire class, maybe even an entire week of exposure, topic-specific vocab that isn’t recycled throughout the year has a very low chance of being acquired and comprehended in new texts. **Therefore, students can experience vocab overload even in classes with high levels of CI.** That applies to “big content words,” like all the vocab needed to talk about Roman kings. Now consider function words, like adverbs, conjunctions, particles, etc. that hold very little meaning on their own. Those have almost no chance of being understood unless they keep appearing in texts.
Of course, we cannot recycle all previous words in every new text, which is why acquisition takes so long. Naturally, the least frequent words fall off and out of bounds, and only the spongiest-of-memory students have a shot at acquiring those. However, we cannot expect from most students what only a few can do. Instead, we must anticipate what will happen when vocab spirals out beyond the possibility of being recycled, and address that before it happens. Here are ways to address vocab overload when providing texts:
Dial things back as much as you can, focusing on the most frequent and useful words.
Write a tiered version, or embedded reading, for every new text, even if that new text is very short.
When possible, use a word more than once, and in different forms. Fewer meanings in more forms (e.g. ran, runs, will run, running) have a greater chance of being understood than many meanings sharing one grammar feature (e.g. ran, ate, laughed, said, carried, was able, were).
If a function word is important, use it a lot (e.g. the more recent “autem” has no chance of being understood if you keep using “sed”).
If a message can be expressed in one very long sentence, break it into two or more shorter ones, restating subjects, etc. for clarity. Then, repeat the full message with a function word (e.g. “therefore,…so…”).
When expanding vocabulary with synonyms, especially when beginning with cognates, consider glossing with the previous word (e.g. if you began the year with “studēns,” each text that now has “discipula” could have “( = studēns)” after the first instance in that text. Continue using “discipula,” but use “studēns” to clarify meaning when needed).
One universal thing we can discuss with any language teacher is awareness of how much target language we’re giving students (I, Input), how well they understand (C, Comprehensibility), and the reason for doing an activity (P, Purpose). In fact, this focus is central to our school’s Latin department, and keeping track of input is part of my teacher eval goal.
I covered an ELA teacher’s class last Friday, which meant the most productive thing to do was complete some kind of menial task. It just so happened that counting up words is exactly that. So, I compared the input my Albāta class students have received to the Latin found in the first four stages of Cambridge. N.B. I chose the Albāta class section because they’ve read the most total words of any class section (section totals range from 1616 to 1755).
Indeed, Albāta students received about 36% more input than Cambridge (1755 to 1117). Surprisingly, though, the unique word count was also higher, by about 24% (221 to 169). I wouldn’t have expected that, given my deliberate intent to shelter (i.e. limit) vocabulary unlike what is found in textbooks, so let’s take a look…
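For anyone who wants to check those figures: the percentages above work out if the difference is taken as a share of the larger count. A minimal sketch (the numbers come straight from the comparison above; the helper name is just for illustration):

```python
def pct_more(larger: int, smaller: int) -> float:
    """Difference between two counts, as a percent of the larger one."""
    return (larger - smaller) / larger * 100

# Total words of input: Albāta class vs. the first four stages of Cambridge
print(round(pct_more(1755, 1117)))  # → 36
# Unique words
print(round(pct_more(221, 169)))    # → 24
```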
After the first orientation day of just 12-minute “classes,” I typed up statements using the drawings students made while responding to “what do you like/like to do?” Even though I followed the same plan for the first day as last year, the execution this year has been…well…crazy.
Last year, each class section read just 50 total words of Latin (10 unique words). This year? There are 520 total words using 54 unique words (17 of which are cognates)!!!! Yeah. That’s how much Latin I’ll be able to provide this week after just one very brief meeting and a decent number of hours writing/typing. Oh, and I’m not keeping track of that kind of work at this point in the school year; I’m doing what I need to do to start off in a calm and confident manner, putting in whatever extra time beyond the school day I need.
So, how does this year end up including SOOOOO much more input?! First of all, I made sure every 9th grade student was included in the text, clearing the time needed to write about them. Otherwise, I updated a few things. This post looks at those changes…
The differences you can probably see between the two comparison pics are the following…
I’ve had a lot of prep time for a couple of years now. How?! Not because of my teaching schedules, but because I constantly streamline practices to ensure I can actually complete my work during the workday. Most of this time is spent typing up class texts for students, as well as researching teaching practices online. Last week, however, I spent waaaaaay too much of that prep time crunching numbers with voyant-tools.org. Here are some insights into the vocab my students were exposed to this year throughout all class texts and 8 of my novellas (over 45,000 total words read!). N.B. this includes all words read in class except for those appearing in the first 6 capitula of Lingua Latīna Per Sē Illustrāta, which we read at the very end of the year. The stats:
550 unique words recycled throughout the year (there were 960 unique words in all, but 410 appeared just a handful of times!)
30% came from the first 8 Pisoverse novellas (Rūfus lutulentus through Quīntus et nox horrifica), and were not found in class texts.
290 appeared in at least a few forms (i.e. not only 3rd person singular present for verbs, or nominative/accusative for nouns).
2470 different forms of words (grammar!)
45% came from the 8 Pisoverse novellas, not class texts.
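Those stats came out of voyant-tools.org, but the raw counts behind them (total words, distinct forms, and the long tail of rarely seen words) are easy to sketch in a few lines of Python. This is just a toy illustration: the corpus string and the cutoff for “a handful of times” are placeholders, and grouping forms under one dictionary word would need a Latin lemmatizer, which this sketch skips.

```python
import re
from collections import Counter

# Hypothetical corpus: imagine all class texts pasted into one string.
corpus = "discipula studēns rīdet rīdet rīdēbat discipula"

# Crude tokenizer: runs of lowercase letters, including macron vowels.
tokens = re.findall(r"[a-zāēīōūȳ]+", corpus.lower())

forms = Counter(tokens)                 # each distinct form (grammar!)
print("total words:", sum(forms.values()))
print("distinct forms:", len(forms))

# Forms that appeared "just a handful of times" (here: fewer than 5).
rare = [w for w, n in forms.items() if n < 5]
print("rare forms:", len(rare))
```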
I first adopted more realistic expectations of students after understanding how languages are acquired. This was within the first few months of teaching in my first job, so I was lucky; some have never had that opportunity. However, I was still trying to apply what I’d learned to a textbook program focused on grammar, so it was a rocky start to any comprehension-based and communicative approach, to say the least. Despite what some might claim, CI and grammar just don’t mix. That is, whenever we decide to teach grammar, even for legit reasons, students are likely not receiving CI.