Weekly Work & Automatic Grades

Anyone who’s looked at a cluttered gradebook at the end of the term knows the feeling of “gee, I guess we didn’t need to do all that.” The gradebook should contain evidence of learning that shows growth and results in a course grade. We really only need 10-15 pieces of evidence per quarter to do that. That is, 40-60 for the whole year is plenty. Here’s how to get evidence of what students have been doing, as well as a weekly score for each student, using a process that’s completely managed by students themselves!

Continue reading

Vocab Lists: Sheltering, Grammar Audit, and Creativity

**Updated 8.19.20 – The DCC core list of top 1000 Latin words has just 100 cognates.**

sīgna zōdiaca Vol. 1 was published at the end of July, bringing the total vocabulary found throughout the entire Pisoverse novellas to 737 unique words, of which 316 are found on the DCC core list, and 319 of which are cognates (see my last post on cognates), including 52 found on the DCC core list (i.e. Pisoverse cognates account for over 50% of the total DCC cognates). That vocabulary size is quite low for what is now almost 50,000 total words of Latin for the beginner found in 19 books. This is what is meant by sheltering (i.e. limiting) vocabulary. Of course, that sheltering didn’t just happen by chance. There have been many decisions about what to keep and what to let go, and the process has been deliberate, at times methodical. In this post, I share ways to shelter vocab in novellas, and how those same practical steps apply to more informal writing done in the classroom with students…
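
To get a rough sense of how sheltered a text is, a quick type/token count does the trick. Here’s a minimal sketch of that kind of audit (the file name and naive tokenizer are my assumptions for illustration; a real Latin audit would lemmatize forms like amat/amant under one headword rather than count raw spellings):

```python
# vocab_audit.py — rough sheltering check: unique words (types) vs. total words (tokens)
import re
from collections import Counter

def audit(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    # naive tokenizer: alphabetic runs, macrons included; no lemmatization
    tokens = re.findall(r"[a-zāēīōūȳ]+", text)
    types = Counter(tokens)
    print(f"total words (tokens): {len(tokens)}")
    print(f"unique words (types): {len(types)}")
    print("most recycled:", types.most_common(10))

if __name__ == "__main__":
    audit("novella.txt")  # hypothetical file name
```

The lower the type count relative to the token count, the more each word gets recycled (e.g. roughly 737 unique words across almost 50,000 total words means the average word shows up dozens of times).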

Continue reading

A Glossary Isn’t Enough & Replacing Comprehension Qs with Reflection Qs

After looking at various first day/week/month materials for the beginning language learner, I was reminded that most resources include texts with way too many words, way too soon. A full glossary certainly helps, but isn’t enough. Words need to be recycled often—especially in the beginning—to have a chance of being acquired by all learners (not just the ones with an excellent memory). If your text doesn’t recycle its vocab, you should adapt it. Remember, for a text to be truly readable (i.e. without starting to become laborious), students must understand 98 words in a text with 100 different ones, 49 words in a text with 50, and pretty much every word in a text of 25 (Hsueh-Chao & Nation, 2000).
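
As a quick sanity check on those numbers, here’s a tiny sketch (my own illustration of the 98% coverage threshold, not something from Hsueh-Chao & Nation) showing how few unknown words a text can afford at different lengths:

```python
# 98% comprehension threshold applied to a few text lengths
from fractions import Fraction
from math import ceil

THRESHOLD = Fraction(98, 100)  # share of running words a reader needs to understand

for total_words in (100, 50, 25):
    must_know = ceil(total_words * THRESHOLD)   # exact arithmetic, no float surprises
    unknown_budget = total_words - must_know
    print(f"{total_words}-word text: understand {must_know}, i.e. at most {unknown_budget} unknown")
```

At 25 words the unknown-word budget rounds down to zero, which is why a short text effectively has to be understood word for word.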

A full glossary is as close to cueing as we can get asynchronously, but we won’t know how students are using it. As part of evidence of engagement when reading a text, these Google Form reflection questions could shed light on that:

ex.

  1. How often did you look up the meaning of words?
    – Hella
    – A lot
    – Sometimes
    – Not very much
  2. What was your experience of looking up words?
    – No problem at all
    (i.e. it helped you read, or you didn’t mind looking up words)
    – It was OK
    (i.e. a little annoying looking up words, but not too bad)
    – It started getting hard to read
    (i.e. looking up words started feeling like “work”)
    – I kept looking at almost every word, so “reading” was really hard to do
    (i.e. this was a bad reading experience)
  3. Would you like Mr P to give you easier & shorter texts to read?
    – Yes
    – No

The first two tell us a student’s threshold for “noise” (i.e. how much incomprehension in the input they can handle), but the last question is going to be extra important. If a lot of students opt for “yes,” we can put effort into making easier texts for all (e.g. an additional simplified tier). Alternatively, we could reach out to a few individuals with support.
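
If those reflection Qs live in a Google Form, the responses export to a CSV that takes seconds to tally. Here’s a rough sketch, assuming hypothetical column headers that match the questions above (a real export would name columns after the exact question text):

```python
# tally_reflections.py — use reflection Q responses to decide whether an easier tier is needed
import csv
from collections import Counter

# assumed column headers; adjust to match the actual Google Forms export
LOOKUP_Q = "How often did you look up the meaning of words?"
EASIER_Q = "Would you like Mr P to give you easier & shorter texts to read?"

def tally(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    lookups = Counter(row[LOOKUP_Q] for row in rows)
    wants_easier = sum(1 for row in rows if row[EASIER_Q].strip().lower() == "yes")
    share = wants_easier / len(rows) if rows else 0
    print("Lookup frequency:", dict(lookups))
    print(f"{wants_easier}/{len(rows)} ({share:.0%}) want easier/shorter texts")
    if share > 0.5:        # arbitrary cutoff: most of the class is asking for easier texts
        print("-> write an additional simplified tier for everyone")
    elif wants_easier:
        print("-> reach out to those few students with individual support")

if __name__ == "__main__":
    tally("reflection_responses.csv")  # hypothetical file name
```

The point isn’t the script; it’s that the lookup-frequency tally and the yes/no share give us the comprehension picture without writing (or grading) a single comprehension Q.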

Support vs. Individualized Feedback

I wrote about how individualized feedback, especially when required, is largely a waste of time. Don’t confuse providing additional input to a student with giving individualized feedback for its own sake, about something that student completed (but that doesn’t need any reply), or worse, on correct/incorrect responses. That kind of individualized feedback isn’t worth our time, and it isn’t even effective, pedagogically. When reflection Q responses indicate comprehension, we don’t need comprehension Qs.

In fact, rather than spending any time at all writing comprehension Qs, use data from the reflection Qs and spend that time writing more comprehensible texts! That is, inasmuch as comprehension Qs rely on a student’s word when done as homework (i.e. remote learning), so do the reflections. It’s much more valuable to get a sense of how often a student is referring to the glossary (i.e. signs of incomprehension) than a percentage of X correct out of Y. Students are also more likely to report accurately how often they used the glossary, which itself is all the comprehension data we need.

If An Hour Doesn’t Get Us One to Two Classes…

…we’re doing something wrong.

If we spend an hour preparing to teach, that hour should at least result in an entire class’s worth of content, activities, etc., and it’s a bonus if it gets us a couple more. In other words, the fruit of an hour’s labor should not be a single activity lasting just 10-15 minutes, or a quiz that lasts the same time but adds another hour for us to check, enter in the gradebook, and follow up on. Even spending an hour on something that lasts half as much time in the classroom—physical, virtual, live, or asynchronous—isn’t enough juice for the squeeze, and we got alotta lemons this year…

Continue reading

Meaning: Establishing & Cueing

I’ve written about establishing meaning not once, not twice, but thrice before today. It is perhaps the most fundamental equitable practice a language teacher can use to provide input. There really is no discussion here—a student must understand the input (CI). That’s step zero. So, the teacher must tell students what words mean! The only discussion lies in how teachers establish meaning. This discussion doesn’t have to be complicated, either, yet it has turned into a debate that keeps cycling ’round and ’round. At the heart of the debate you’ll find two perspectives on how to establish meaning…

Continue reading

Meaning-Based & Form-Based Latin Teaching: Survey Results

In a report on the 2018 National Latin Exam Survey, the number of teachers primarily using grammar-translation (478) was over twice that of the next most-used “Reading Method” option (202), and over 16 times that of the least-used “Active Latin” option (27). The other options given were CI and TPRS. You might already see the problem there. That is, all those options were labeled as “methodologies/techniques/philosophies” on the survey, likely in an attempt to account for all the differences between terms. However, such a comparison is like asking “what do you primarily do in class?” That is, there’s almost no coherence between the five options. For example, a teacher could use the TPRS method to provide CI, and in doing so be characterized as using Active Latin. The only clearly distinct option is grammar-translation, yet even that doesn’t show the extent to which grammar is present in one’s teaching (i.e. grammar is also included in “Reading Method” and possibly “Active Latin”). Therefore, I wanted to send out a new survey that focused on practices rather than on terms prone to misunderstanding. I did just that in June of 2020. In this post, I share those results…

Continue reading

CI, Equity, User-Error & Inequitable Practices

I don’t agree that the statement “CI is equitable” is harmful. Yet, I also don’t think the message behind “CI isn’t inherently equitable” is wrong, either. John Bracey said one can still “do racist stuff” while teaching with CI principles. Of course, we both know that’s an issue with content, not CI. Still, I get the idea behind that word “inherent.” In case you missed the Twitter hubbub, let me fill you in: people disagree with the claim that CI is “inherently equitable,” worried that such a message would lead teachers to say “well, I’m providing CI, so I guess I’m done.” I don’t think anyone’s actually saying that, but still, I understand taking that position.

Specifically, the word “inherent” seems to be the main issue. I can see how it could be read as taking responsibility away from the teacher, who should be actively counteracting inequity and dismantling systemic racism. However, teachers haven’t been as disengaged from that equity work as the worry suggests. I’ve been hearing “CI levels the playing field” many times over the years from teachers reporting positive changes to their program’s demographics. What else could that mean if not equity? But OK, I get it. If “inherent” is the issue, maybe “CI is more-equitable” will do. If so, though, at what point does a teacher go from having a “more-equitable” classroom to an “equitable” one? And is there ever a “fully-equitable” classroom? I’m thinking no. So, if CI is central to equity—because you cannot do the work of bringing equity into the classroom if students aren’t understanding (i.e. step zero), and nothing has been shown to be more equitable than CI—well then…

For fun, though, I’ll throw in a third perspective. Whereas you have “CI is equitable” and “nothing makes CI equitable per se,” how about “CI is the only equitable factor?” I’m sure that sounds nuts, but here goes: since CI, as a necessary ingredient for language acquisition, is independent of all the content, methods, strategies, etc. that teachers choose, it might be the only non-biased factor in the classroom. Trippy.

I don’t think that third perspective is really worth pursuing, though, so let’s get back to the main points. Again, I understand the message behind “CI isn’t inherently equitable” as a response to “CI is equitable.” However, I suspect the latter is said by a lot of people who aren’t actually referring to CI. Don’t get me wrong; some get it, and are definitely referring to how CI principles reshaped their language program to mirror the demographics of the school. However, others are merely referring to practices they think of as “CI teaching.” This will be addressed later with the Dunning-Kruger Effect. Otherwise, let’s talk equity…

Continue reading