Vocab Lists: Sheltering, Grammar Audit, and Creativity

**Updated 8.19.20 – The DCC core list of top 1000 Latin words has just 100 cognates.**

sīgna zōdiaca Vol. 1 was published at the end of July, bringing the total vocabulary found throughout the entire Pisoverse novellas to 737 unique words. Of those, 316 are found on the DCC core list, and 319 are cognates (see my last post on cognates), including 52 found on the DCC core list (i.e. Pisoverse cognates account for over 50% of the total DCC cognates). That vocabulary size is quite low for what is now almost 50,000 total words of Latin for the beginner found in 19 books. This is what is meant by sheltering (i.e. limiting) vocabulary. Of course, that sheltering didn’t just happen by chance. There have been many decisions about what to keep and what to let go; the process has been deliberate, and at times methodical. In this post, I share ways to shelter vocab in novellas, and how those same practical steps apply to more informal writing done in the classroom with students…
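Those overlap figures reduce to quick arithmetic. A minimal sketch in Python, with all counts taken from the paragraph above (the variable names are my own):

```python
# Vocabulary counts quoted in the post.
pisoverse_unique = 737    # unique words across the Pisoverse novellas
on_dcc_core = 316         # of those, also on the DCC core list
cognates = 319            # of those, cognates
cognates_on_dcc = 52      # Pisoverse cognates also on the DCC core list
dcc_core_cognates = 100   # total cognates on the DCC core list (see the update note)

# "Pisoverse cognates account for over 50% of the total DCC cognates"
share = cognates_on_dcc / dcc_core_cognates
print(f"{share:.0%} of DCC core cognates appear in the Pisoverse")  # 52%
```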

Continue reading

A Glossary Isn’t Enough & Replacing Comprehension Qs with Reflection Qs

After looking at various first day/week/month materials for the beginning language learner, I was reminded that most resources include texts with way too many words, way too soon. A full glossary certainly helps, but isn’t enough. Words need to be recycled often—especially in the beginning—to have a chance of being acquired by all learners (not just the ones with an excellent memory). If your text doesn’t recycle its vocab, you should adapt it. Remember, for a text to be truly readable (i.e. without starting to become laborious), students must understand 98 words in a text with 100 different ones, 49 words in a text with 50, and pretty much every word in a text of 25 (Hsueh-Chao & Nation, 2000).
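The thresholds above all follow from the ~98% lexical coverage figure in Hsueh-Chao & Nation (2000). A quick sketch of that arithmetic (the function name is mine):

```python
import math

def min_known(unique_words, coverage=0.98):
    """Minimum word types a reader must know for a text to stay readable,
    assuming the ~98% lexical coverage threshold (Hsueh-Chao & Nation, 2000)."""
    return math.ceil(unique_words * coverage)

for n in (100, 50, 25):
    print(f"text of {n} different words -> know at least {min_known(n)}")
# text of 100 different words -> know at least 98
# text of 50 different words -> know at least 49
# text of 25 different words -> know at least 25 (i.e. pretty much every word)
```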

A full glossary is as close to cueing as we can get asynchronously, but we won’t know how students are using it. As part of gathering evidence of engagement while reading a text, these Google Form reflection questions could shed light on that:

ex.

  1. How often did you look up the meaning of words?
    – Hella
    – A lot
    – Sometimes
    – Not very much
  2. What was your experience of looking up words?
    – No problem at all
    (i.e. it helped you read, or you didn’t mind looking up words)
    – It was OK
    (i.e. a little annoying looking up words, but not too bad)
    – It started getting hard to read
    (i.e. looking up words started feeling like “work”)
    – I kept looking at almost every word, so “reading” was really hard to do
    (i.e. this was a bad reading experience)
  3. Would you like Mr P to give you easier & shorter texts to read?
    – Yes
    – No

The first two tell us a student’s threshold for “noise” (i.e. how much incomprehension in the input they can handle), but the last question is going to be extra important. If a lot of students opt for “yes,” we can put effort into making easier texts for all (e.g. an additional simplified tier). Alternatively, we could reach out to a few individuals with support.

Support vs. Individualized Feedback

I wrote about how individualized feedback, especially when required, is largely a waste of time. Don’t confuse providing additional input to a student with giving individualized feedback for its own sake, about something that student completed (but that doesn’t need any reply), or worse, on correct/incorrect responses. That kind of individualized feedback isn’t worth our time, and isn’t even effective pedagogically. When reflection Q responses indicate comprehension, we don’t need comprehension Qs.

In fact, rather than spending any time at all writing comprehension Qs, use data from the reflection Qs and spend that time writing more comprehensible texts! That is, inasmuch as comprehension Qs are a student’s word on homework (i.e. remote learning), so are the reflections. It’s much more valuable to get a sense of how often a student is referring to the glossary (i.e. signs of incomprehension) than a percentage of X correct out of Y. Students are also more likely to report their glossary use accurately, which itself is all the comprehension data we need.

First Text: A Year To Year Comparison

After the first orientation day of just 12-minute “classes,” I typed up statements using the drawings students did while responding to “what do you like/like to do?” Even though I followed the same plan for the first day as last year, the far better execution of it this year has been…well…crazy.

Last year, each class section read just 50 total words of Latin (10 unique words). This year? There are 520 total words using 54 unique words (17 of which are cognates)!!!! Yeah. That’s how much Latin I’ll be able to provide this week after just one very brief meeting, and a decent number of hours writing/typing. Oh, and I’m not keeping track of that kind of work at this point in the school year; I’m doing what I need to do to start off in a calm and confident manner, putting in whatever extra time beyond the school day I need.
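For what it’s worth, that comparison reduces to a couple of ratios; a quick sketch (counts from the paragraph above, labels are mine):

```python
# First-text word counts quoted above, per class section.
last_year = {"total": 50, "unique": 10}
this_year = {"total": 520, "unique": 54}

print(this_year["total"] / last_year["total"])    # 10.4x the total words
print(this_year["unique"] / last_year["unique"])  # 5.4x the unique words

# Sheltering in action: total input grew faster than unique vocabulary,
# so each word gets recycled more often on average.
print(last_year["total"] / last_year["unique"])   # 5.0 occurrences per word
print(this_year["total"] / this_year["unique"])   # ~9.6 occurrences per word
```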

So, how does this year end up including SOOOOO much more input?! First of all, I made sure every 9th grade student was included in the text, clearing the time needed to write about them. Otherwise, I updated a few things. This post looks at those changes…

sample of 2018-19 first text
sample of this year’s first text

The differences you can probably see between the two comparison pics are the following…

Continue reading