Preparation for a road trip down to New Orleans only meant one thing: 5 Pimsleur language learning courses checked out from our local library! To be clear, Pimsleur courses are not effective in the long run, but there’s input nonetheless. Still, how much input is there…really?
Here’s some clarification on related ideas that are often confused:
“Can-Do Statements describe what learners can do consistently over time.” (p. 4)
Don’t use these as your daily objectives. Students can’t meet them after a class hour. If they can, you’ve written them wrong.
“Students will be able to X.”
Don’t spend time on these. They’re particular goals for the day, but they’re largely a fake school thing with almost no effect on learning, and zero on acquisition (especially if the point is to create a more implicit environment free of metacognicide). Post them if you have to, but use a Google Doc or something rather than spending any amount of time writing them on the board. Better yet, use one that could apply to any class (e.g. “Students will understand new words used to discuss [target culture idea].” If someone gives you the Wiggins & McTighe line that “understand” is not a good, measurable objective, just say something in the target language they don’t understand, and draw attention to that). The only people who care about objectives are teachers who buy into skill-building, teachers who prefer to teach language itself as subject matter, and administrators who have been told that their teachers need objectives. Students don’t care (see below). If you’re in a real bind, use Terry Waltz’ random objective generator.
“MovieTalk, Team Game, Survey, Quiz, etc.”
The day’s agenda is pretty much all that matters to students. It answers the question “what are we doing today?” and not “what skills will I develop as a result of your planning today’s lesson, o teacher mine?”
Teachers spend far too much time writing Can-Dos and Objectives when a solid Agenda is all that’s needed. The agenda allows maximum flexibility, and frees up time to develop strategies for providing CI and to write or adapt texts for the novice: the real high-leverage classroom practices. I’ve been implementing this in the daily & weekly schedule used in the Universal Language Curriculum (ULC).
post scriptum – Objective Traps
Cavē! (Beware!) The tendency to be satisfied, proud even, with “students being able to X” on any given day has disastrous effects. If the skill or content is isolated, the day’s “mastery” means almost nothing in the long run. Take, for example, the K-12+ Spanish student in highly interactive yet student-student focused classes (i.e. forced-speech paired activities). Despite any success, or meeting of those daily objectives, she might later study abroad in Spain only to find that she has limited communicative ability and must undergo a silent period. How did all this go unaddressed in an A+ student? It’s simple: all those activities designed to meet objectives gave teachers the wrong impression from the wrong data! Worse, teachers tend to USE data like this as evidence when discussing best practices. Don’t fall into that trap!
Unit Test “Mastery” (UTM) is a symptom many teachers and students suffer from. The teacher:
- presents content (Present)
- provides a learning experience (Practice)
- announces an assessment
- assesses students (Produce)
- chooses remediation based on low performance, or moves on
The consequence of UTM is that students appear to “master” the content either right away or after the remediation, which itself is usually misinterpreted as assisting a “struggling student.” The teacher then moves on, and students seldom run into the same content again, even in courses you’d expect to be cumulative (e.g. one-off math/science concepts, or that perfunctory “transportation unit” in which students are given a vocabulary list of all possible, and likely outdated, ways to get around Madrid).
This symptom seriously misleads the teacher. It’s one way of validating teaching practices that don’t actually produce the results they seem to be producing. For example, most language teachers attribute their understanding of language to how they were taught, yet they’ve probably just been exposed to the language daily over time, teaching similar (same?) content year after year. This looks like proficiency, yet is probably just daily recall of translated and memorized information!
In reality, communication isn’t really something anyone can master, at least not in the subject-matter-learning sense used in other content areas. There’s a lot of pressure to make language courses fit what’s expected in school, but the model fails when we build inclusive classrooms on universal human traits rather than on intellectualizing language. The best teachers resist that pressure, educate their administration, or at least find the wiggle room to provide input and encourage interaction in a second language during the school day: something all humans are hardwired for.
I encourage everyone to find alternatives to traditional units accompanied by lessons with limited flexibility. Instead, meet students where they are, and move forward. One way to think about curriculum is to base it on vocabulary frequency rather than themes (e.g. Greetings, Getting Around, Sports, etc.). Chris Stolz has shared how Mike Peto’s entire department has taken this to an extreme with fantastic results! All of these ideas are supported by what Eric Herman has coined “Forward Procedure”:
“Forward procedure is process-oriented. It focuses on where students are. That doesn’t mean you can’t have tests, but those are not pre-determined. They are created in response to what has happened in class and tailored to where students are. If there had to be an element of “standardization” between sections, this would be to agree to use the same test format, but not the same content (e.g., sections hear a different story and do a timed rewrite). Rather than focus on something to cover, it focuses on giving students what they want and need in that moment to learn. It is the approach that makes a teacher most responsible to the learner. In a second language, communicative classroom, this is a much better fit. To quote Savignon (1976): “Above all, remember that for it to be real, communication must be a personalized, spontaneous event. It cannot be programmed – but you can make it happen” (p. 20).”
Tasks are becoming popular these days, though I’m not a fan.
The way I see it, a task must be so well-constructed that something else, something probably more beneficial, could have been done in the same amount of prep time and class time. Otherwise, simple, short, low-prep tasks tend to lack compelling purposes. After all, there are purposes, and there are compelling purposes, right? For example, most of the Tasks that Bill VanPatten mentions on Tea with BVP are appropriate for self-selecting college students, yet leave the K-12 public school student saying “who cares?” Still, if you’re interested in early input-based Tasks, try using this template…
This task focuses on all conjugated forms of the verb “to be” in the present, and possibly other tenses. It answers the general questions “who am I?” and “who are we?” and could be used to determine a number of qualities shared in the class, which are then compared to some other source, like a target-language-speaking culture. Get creative! In the first few steps, students pair up and rotate briefly to gather some data. Then, in the final steps, the teacher elicits data, compares, and summarizes the findings. Note how students aren’t speaking in the Second Language Acquisition (SLA) Output sense. They’re saying words, sure, but all the words are provided, and any response options are listed. This is not Output since students aren’t generating any of the ideas, which means it could be done very early on given the appropriate level of scaffolding. The first few steps of input-based Tasks are designed to get information, NOT to “practice speaking,” as some might call it, though it will look similar to observers if you’re being asked to have students speak or interact more. Most of the input will be provided by the teacher in steps 4 and beyond.
1) Student A asks:
Quis es? Who are you?
Quālis es? What are you like?
2) Student B replies:
sum [ ] I’m [ ]. **chosen from a provided list of words—the only prep**
3) Students record responses, switch partners party-style, and get more data:
Student B est [ ] Student B is [ ].
4) Teacher asks students to share details:
Student A, esne [ ]? Student A, are you [ ]?
Student A, estne Student B [ ]? Student A, is Student B [ ]?
5) Teacher tallies/graphs results, and makes statements:
discipulī trēs, estis [ ] You 3 students, you are [ ].
discipulī quīnque, estis [ ] You 5 students, you are [ ].
quattuor discipulī sunt [ ] 4 students are [ ].
duo discipulī sunt [ ] 2 students are [ ].
ūna discipula est [ ] 1 student is [ ].
6) Teacher compares class to something:
multī Rōmānī erant [ ] Many Romans were [ ].
7) Teacher summarizes results:
discipulī, sumus [ ] Students, we are [ ].
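If it helps to see the mechanics of step 5, here is a minimal sketch in Python of tallying students’ chosen words so the teacher can read off counts like “trēs discipulī sunt [ ].” The sample responses and adjectives below are invented for illustration, not from any real class:

```python
from collections import Counter

# Hypothetical responses: each student's word chosen from the teacher's
# provided list (laetus = happy, fessus = tired).
responses = ["laetus", "laetus", "fessus", "laetus", "fessus"]

# Tally identical answers so the teacher can state counts aloud.
tally = Counter(responses)
for word, count in tally.most_common():
    print(f"{count} discipulī sunt {word}")  # e.g. "3 discipulī sunt laetus"
```

A real tally would also need to group gendered forms (laetus/laeta) under one heading, which the teacher does effortlessly on the board.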
Again, I don’t necessarily think students care a great deal about these kinds of Tasks, but if you find that something piques their interest, say, who is the closest to turning 18, or who takes the longest naps, break out this task template and see how it goes!
If teachers simply stopped grading grammar, Latin (and other languages) would instantly become more accessible to students, and teachers would gain more planning time.
This is no joke.
Some teachers are excited about grammar and want to share that with students. Go ahead! I’m not saying they shouldn’t, but I’ve observed many (all?) of the negative effects of doing so, especially in K-12 public education, and those effects mostly begin with grading. If you want to teach grammar, just don’t grade it. Here’s why…
If someone says that a particular teaching practice doesn’t work (sharing observations, or research), and your assessments indicate otherwise, there are 2 possibilities:
- The other person didn’t have your data set, making a premature claim.
- Your assessments are invalid.
While the former certainly occurs, the latter is more prevalent. For example, teachers typically announce a test on X ahead of time, teach X, then test X. The tendency is then to conclude that students know X, or do X well. This is almost never true. An assessment like this can only show one thing for certain: who studied X for the test…