I spent about 15 minutes entering data from the diēs Mārtis (i.e. Tuesday) Latin class K-F-D (Know, Forgot, Don't Know) Quizzes. N.B. These are “sneaky quizzes” per my NTPRS 2017 presentation, No Prep Grading & Assessment, referring to “assessments” that satisfy most quizzing/testing requirements, yet are actually an opportunity to interact and acquire.
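For the curious, the data entry boils down to something like the following sketch in Python (hypothetical words and responses for illustration, not my actual class data): each student’s chart gets reduced to per-word counts of Know / Forgot / Don’t Know.

```python
from collections import Counter, defaultdict

# Hypothetical K-F-D responses: one dict per student, mapping each word
# the student wrote down to "K" (Know), "F" (Forgot), or "D" (Don't Know).
# Words a student didn't bother to write down simply don't appear.
student_charts = [
    {"minimē": "K", "discipulus": "K", "sunt": "D", "sed": "F"},
    {"minimē": "K", "discipulus": "K", "et": "D", "iam": "D"},
    {"minimē": "K", "discipulus": "F", "magna": "D"},
    # ...one dict per student (28 in this class)
]

# Tally per-word counts across all charts.
tally = defaultdict(Counter)
for chart in student_charts:
    for word, response in chart.items():
        tally[word][response] += 1

# Print each word with its Know / Forgot / Don't Know counts,
# sorted by how many students marked it "Know".
for word, counts in sorted(tally.items(), key=lambda kv: -kv[1]["K"]):
    print(f"{word}: K={counts['K']}, F={counts['F']}, D={counts['D']}")
```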
28 students were in class for the K-F-D Quiz. Here are some observations:
- All students knew minimē!
- 26 students recognized discipulus, one of the top 3 “known” words.
- Only 3 students wrote down et, and none of them knew what it meant.
- Only 2 students knew sunt.
The minimē figure reflects how students have been able to show their preferences in class, as well as how I’m probably providing multiple exposures to certain phrases by asking “no” questions, confirming that the statement doesn’t happen/isn’t true, and then stating what does.
However, sunt was surprisingly low in terms of “known” vocabulary, and et usually accompanies sunt in class (e.g. “Student X and Student Y like football”). Surely more students understand et, right? Well, it turns out that et occurred just 4 times in the text I read aloud, and it was always surrounded by other, more important unknown “big content words.” If I were to require students to put each and every word somewhere on their chart, that would certainly take care of the low number of students who bothered to write down et, as well as a few others, but that’s not the point of a K-F-D Quiz. Another observation:
- Mārtis was the only well-known proper noun.
OK, this is something I can explicitly teach. In my class, I capitalize only proper nouns (an orthographic practice I highly recommend, since it gives context to students who are aware of it). Aside from a general deficiency in conventional capitalization practices amongst adolescents (I blame smartphones and auto-capitalization), I can teach them reading strategies that will help make Latin more comprehensible. One is to not dwell on someone’s name, or a place-name. I don’t blame them for not recognizing other names, because “Nate” appears to be a Latin word if you ignore the capitalization, and in fact, is. Of course, some proper nouns do have their own meaning, but the point is that they didn’t notice the capitalization at all! I had to point out all the names in the text during the last in-class step of the K-F-D Quiz: establishing the meaning of all Forgot and Don’t Know words. From now on, I can remind them by saying “est nōmen.” Another observation:
- litterae, a new word, was not recognized as a cognate.
This is why I include all cognates in the word counts of my novellas, and consistently use Mike Peto’s “béisbol” routine. Let’s be honest and recognize that sometimes what seems obvious to us just isn’t obvious to students. littera is listed as a potential false cognate (“litter”) on the list of Super Clear Cognates, which is exactly how some students in class interpreted it. More observations:
- magna and parva were almost completely unknown.
- sed and iam are not sticking very well.
magna and parva have been mentioned only 1x in this particular class. The “known” and “forgot” figures represent my fastest and faster processors. Those are the sponges that have soaked up all vocabulary used, regardless of frequency, while the slowest processors know just a handful of words really well, here in our 7th Latin class of the year (i.e. Latin 1x/week). sed and iam, despite being used on a daily basis, continue to be forgotten, or are just outright unknown. A former grammar-translation-teaching version of myself would be outraged, and a new-to-CI version of myself would misinterpret this data, go in next week targeting sed and iam, and circle statement after statement in hopes of having them “acquired” by the end of class. It doesn’t work that way, folks! My current self recognizes that “but” and “now” don’t really hold a lot of meaning, at least not for students still processing other words first. Students are looking for those other “big content words” that get them the most meaning. I predict that these two words would only hold more meaning in a contrasting statement with different tenses (e.g. “she used to hate, but now she loves”), yet even then they might not become central to the meaning. All I know for sure is that students will become more familiar with them with each exposure, but they might never be fully acquired.
Instructional Adjustments
I won’t circle forgotten and unknown words ad nauseam.
What I will do, however, is try to remember to write sed and iam on the board when I use them naturally, knowing that some students don’t understand them but that too much cognitive demand is going towards understanding everything else, so asking me to clarify those two words falls through the cracks.
I’ve already written magnus and parvus on our word wall (no, they won’t notice that we used the feminine forms in the text), nested under the “Quālis? What kind of?” question poster. I’ll now start using the two words more often in questions, even as “shadows” (i.e. non-options in either/or questions) when they don’t apply to what I’m asking, and will also refrain from adding other adjectives for a week or two in order to maximize exposure. This is a form of targeting, but if it feels contrived, I know that those two words will come up again and again naturally, and I won’t have to force them. In other words, I’ll go into class with the intent to expose students to more instances of magnus and parvus, but if more appropriate words are needed to express ideas in a genuine way, I have no problem ditching that intent.
I could also make a deliberate effort to compare students more, seeing as sunt was quite low in terms of “known” vocab. To be honest, I’m not sure whether I’ve actually been neglecting this, but now it’s on my radar, and I’ll be able to keep an eye on it.
Really cool work, Lance. Relatedly, I know of one study that looked at transcriptions of immigrants interacting with native speakers in England. The researchers used “concreteness” ratings from another study, in which a long list of English words (40,000 total) was accompanied by ratings between 1 and 5 representing how concrete each word’s meaning was to native speakers. The authors machine-matched these ratings to every word in the immigrant/native-speaker transcriptions and found that the words rated as more concrete were used earlier and more frequently than words with lower concreteness ratings. That’s Crossley, Kyle, and Salsbury (2016). I wonder if your data could build on this.
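For readers curious what that machine-matching step amounts to, here is a minimal sketch in Python. It assumes a small hypothetical concreteness lexicon and transcript (it is not the authors’ actual code or data), pairs each transcript word with its 1–5 concreteness rating, and records how often, and how early, each rated word appears.

```python
from collections import Counter

# Hypothetical concreteness lexicon: word -> rating between 1 (abstract) and 5 (concrete).
# In the study described above, roughly 40,000 English words carried such ratings.
concreteness = {
    "ball": 4.9, "house": 4.9, "eat": 4.4,
    "but": 1.2, "now": 1.8, "freedom": 2.0,
}

# Hypothetical learner transcript, already tokenized into lowercase words.
transcript = "i eat at the house but now i want the ball".split()

freq = Counter()        # how often each rated word occurs
first_position = {}     # index of each rated word's first occurrence

for i, word in enumerate(transcript):
    if word in concreteness:
        freq[word] += 1
        first_position.setdefault(word, i)

# Pair each rated word with (concreteness, frequency, first position);
# the finding described above is that higher-concreteness words tend to
# show up earlier and more often in learner speech.
for word in freq:
    print(word, concreteness[word], freq[word], first_position[word])
```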