For years, my go-to teacher eval goal has been for students to increase their timed write word counts by X% (e.g. 20%, which always happens). The goal includes selecting one or two practices to improve that allow CI to be provided and contribute to the goal (e.g. establishing rules & routines, consistently using brain breaks, writing more embedded readings, etc.). In my experience, it's not necessarily the results that lead to good evaluations; it's how everything is analyzed. That is, a thorough analysis is more important than every student meeting the eval goal. Thus, this post. Hey Principal HD, #shoutout!
Next year, I’m looking forward to a new goal of increasing the input I’m providing, but to wrap up this year’s analysis, here are some stats and insights…
What's the timed write process? Students had about 12 weeks of input before their first 5-minute timed write (no announcement, no story retells, no notes, no assistance) using this paper, wrote several over the next 12 weeks, did a midterm analysis, then wrote for another 12 weeks and had a final analysis (most had 7-8 total). I did NOT plan how evenly spaced this would be, but I like it! First, here's a brief profile of each class section to give some context…
The Blues, a class of 19 fast processors, were a very enjoyable class to teach. There was good mojo in there, with no apathy in sight. They even wanted to make shirts with their class name on them, and passed notes to other class sections; there was THAT kind of buy-in. However, they started slipping after midterms far more than I would've expected considering how strong they began. The amount of input they received was way down from the start of the year.
The Whites, also with 19 students, had a very rocky start to the year, with 5 days of school released early or canceled due to excessive heat and humidity, all within the first 9 days of school. Lacking the rules and routines that ought to be established at that critical time of year, students showed far less confidence in this class, the mojo was a bit off, and they were the most apathetic.
Once a class of just 11, The Reds surged to an ideal 16 by the end of the year. They were the most eager to "play ball," and also had the most spontaneous use of language. At one point, I couldn't distinguish them from Veneta. However, MGMT became an issue, making this class the most difficult to manage after midterms. As a result, the star sections, Veneta and Russāta, started falling behind in terms of input received.
The Greens were my largest section: at one time twice the size of the smallest, though larger by just 2 students by the end of the year, for a total of 21. This particular mix of students produced a very rowdy bunch that was on the lower side of input received due to MGMT needs. I expected this class to have the lowest stats. You can guess the twist coming.
The following stats & insights use Prasina as the main example, with the other sections compared in italics:
- Starting word counts ranged from a low of 10 to a high of 39. This was remarkably close to Veneta, with 10 and 44. Albāta was notably lower, with 4 and 35, and Russāta was right there with them at 4 and 32.
- Ending word counts increased to 27 and 92. Veneta, again, was remarkably close, with 26 and 98. Albāta increased to 15 and 74, and Russāta increased the low end to 25 and the high end to 75.
- Average word count was 20, increasing to 39 at midterms, and 48 by finals. For Veneta, averages were slightly higher at 26, 44, then 56. Albāta trailed with 15, 30, then 41, reaching only the midterm level of the other two sections. Russāta was surprisingly close with 17, 38, 42.
- 4 students reported that they “read a lot.” There were 5 from Veneta, but just 2 from both Albāta and Russāta.
At this point, it appears that Prasina and Veneta were the top-performing sections, and Albāta and Russāta the low. These results weren't expected given the different class profiles.
- The highest individual overall growth was 54 words, with an average of 27. Veneta was remarkably close, once again, with 54 and 30, as was Russāta with 50 and 27. For Albāta, the high was 62, with a similar average of 25.
Despite differences in class profiles, the average growth was very close across all class sections.
- 6 students peaked by midterms, never increasing their maximum word count in the second half of the year. Just one Veneta student peaked. As for Albāta, 3 students peaked. A lot more from Russāta (over a quarter, actually) peaked at this time.
- Most students made large gains before midterms. Just 3 students made the majority of their gains afterwards. Between 2 and 4 students from the other sections did the same.
- Overall gains throughout the year were 162%. For Veneta, it was 129%, and Russāta 194%. Albāta students increased their word counts by 226%!
- 2 of the top 3 writers reported that they “read a lot.” Just 1 of the top Veneta and Albāta writers reported that they “read a lot.” None from Russāta.
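A quick note on how a "162% overall gain" can be figured two different ways. The sketch below uses made-up start/end word counts in the same range as the numbers above (NOT the actual student data), and the variable names are my own. Averaging each student's percent gain usually gives a different number than taking the growth of the class average, because low starters post huge percentages:

```python
# Hypothetical illustration: two ways to summarize class-wide growth.
# The (start, end) pairs are invented, not the real student data.
students = [(10, 27), (39, 92), (20, 48)]

# Method 1: compute each student's percent gain, then average them.
per_student = [(end - start) / start for start, end in students]
mean_of_gains = sum(per_student) / len(per_student) * 100

# Method 2: growth of the section total (equivalently, of the average).
start_total = sum(s for s, _ in students)
end_total = sum(e for _, e in students)
gain_of_mean = (end_total - start_total) / start_total * 100

print(f"mean of individual gains: {mean_of_gains:.0f}%")   # ≈ 149%
print(f"gain of the class average: {gain_of_mean:.0f}%")   # ≈ 142%
```

The two figures diverge whenever students who start low grow the most, which is exactly the "CI levels the playing field" pattern below, so it's worth knowing which calculation a section-level percentage came from.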
These stats reflect a bit of how Veneta and Russāta lost momentum. In fact, Veneta had the lowest overall gains the entire year (almost half as much as Albāta)! This suggests that Veneta’s success at the start of the year was riding on innate fast-processing, but as reading habits slipped, and less input was received after midterms, their strengths began to disappear amongst others developing proficiency. Russāta was second in overall gains, but a decent amount of that took place before midterms.
**If anything, a major takeaway from all this could be that CI certainly levels the playing field; those receiving it could write as much as those who always had it from the start.**
It's possible that reading was the main equalizer, with students in Prasina making up for lost class input by reading, eventually catching up in timed write totals with Veneta. Albāta and Russāta each had only 2 students report that they "read a lot." This might explain their similarity in highest words written (74 and 75).
Also, I’m almost tempted to claim that students who don’t read (from any class) will still end up being able to produce ~25 words after a year (120 hours) of input, and those who do read will produce 3-4x that much. I’m tempted, but won’t do that. Definitely need more data to make any claims.
Now, if these timed writes are any indication of acquisition and subsequent proficiency—which they might not be—it appears that most of it occurs before holiday break, and then students begin to slow down in what they can produce. No surprises, right? Acquisition isn’t linear, and there’s no reason to expect proficiency is, either. We already know that students jump up through ACTFL’s Novice levels, but then hang out for a bit at Intermediate, for example, and a lot of this data confirms that. But writing is independent from comprehension…
Reading is important.
Reading was the primary factor affecting comprehension levels. Students who were on-task and reading got better at reading. Those who weren’t, remained slow readers and slow processors of language. We certainly read a LOT of Latin throughout the year during class, but I could only hope to influence what was read at home. Students who read at home more received more CI, and understood more in class. That’s plain and simple.
Still, I wouldn't claim that even those reading a lot were able to produce the most Latin. Some of them did write the most, and others reading at home made the most gains. Then again, many students who read at home wrote an average number of words, or made average gains. Why? There are individual differences governed by the internal syllabus. Teachers and students have very little control over that. Some of the students writing the most words were the fastest processors of language during class interactions, yet others writing the least were fast processors who hadn't reached a point of producing much language.