NTPRS 2017 Takeaways

Even before I had the opportunity to present a couple of workshops, my mind was blown quite sufficiently during the week. Overall, the Advanced Track with Alina Filipescu and Jason Fritze got me thinking about aaaaaaaall the things I’ve forgotten to do, or stopped doing (for no good reason) over the years. Thankfully, most of them are going to be soooooo easy to [re]implement. As for the others, I’ll pick 2 at a time to add—not replace—until they become automatic. This will probably take the entire year; there’s no rush!

Jason referred to high-leverage strategies—those yielding amazing results with minimal effort (i.e. juice vs. squeeze), and I’m grateful that he called our attention to everything Alina was doing while teaching us Romanian. Ce excelent (how excellent)! I’ll indicate some high-leverage strategies, and will go as far as to classify them as “non-negotiable” for my own teaching, using the letters “NN.” I’ll also indicate strategies to update or re-implement with the word “Update!” and those I’d like to try for the first time with the word “New!” I encourage you to give them all a try. Here are the takeaways organized by presenter:

Continue reading

NTPRS 2017 Resources

Here are links to my Thursday and Friday NTPRS presentations, and related posts for a) those who attended and are interested in reading more, b) those who slept in past 8am (I am slightly envious of that), but wanted to attend, or c) those who weren’t at the conference at all, but find the topics interesting just the same.

Presentations:
NTPRS 2017 – No Prep Grading & Assessment (PPT)
NTPRS 2017 – Same Skills Different Game (PPT)

Related Blog Posts:
No Prep Grading & Assessment

Same Skills Different Game

NTPRS 2017: 10 Workshops On Assessment & Grading!

Assessment & Grading is, by far, the topic I’m asked about most frequently, and this year’s National TPRS Conference features 10 workshops on it across Thursday and Friday! Based on the descriptions, there’s a mix of proficiency people, skill people, tech-tool people, speaking people, rubric people, and more! I’ll be presenting one of those workshops, and have noticed that my thinking is a little different. I do recommend getting to as many of the 10 as you can, so in case you miss out on mine, here’s a brief look at what I’m about…

RLMTL
I have a very simple approach to assessment because the answer is always RLMTL (i.e. Reading and Listening to More Target Language). That is, there is NO assessment I could give that WOULD NOT result in me providing more input. Therefore, my assessments are input-based, and very brief. In fact, what many consider assessments—for me—are actually just simple quizzes used to report scores (see below).

I prefer to assess students authentically.

Continue reading

K-F-D Quizzes

Use these quizzes to satisfy those school requirements that have nothing to do with acquisition, yet everything to do with teaching expectations. K-F-D Quizzes allow you to put a number in the gradebook that builds confidence instead of shattering it, while also providing input. Alternate with something like Quick Quizzes to vary your quiz-types a little bit without any prep.

Continue reading

Assessment & Grading: Game Changers

When teachers complain about certain practices that create more work for themselves and take time away from students acquiring the target language, my response is usually “well then, don’t use them.” Follow the logic below to arrive at why you need to wrap your head around changing Assessment & Grading practices so that you can use your prep/planning time, and personal life, for more useful and enjoyable endeavors…

Continue reading

Drum Circle Brain Break

After listening to Annabelle Allen on episode 4 of Teachers That Teach, I’m interested in using more Brain Breaks that are shorter.

Despite how awesome some Brain Breaks can be, like Evolution (i.e. the rock/paper/scissors variation of egg–>fledgling–>dragon, etc.), most of my high school students are “too cool for school” to do a lot of them. Annabelle’s advice of “just do it anyway because the brain of those who don’t participate is still getting a break” only works if there are few who don’t. Even though I warned my students that they’d wear out their favorite ball-tossing Brain Break, “Mumball,” they didn’t listen and now we’ve killed it. At this point, nearly half the class chooses to just sit instead of participating. So, instead of coming back from the Brain Break re-energized for more Latin, energy has dropped to an unacceptable level, at least for the rigor needed to sustain focus in a second language. It’s time for novel, shorter Brain Breaks.

Drum Circle
Stand in a circle and begin stepping side to side in place at a comfy 70 to 80 bpm (beats per minute) to establish a group tempo. This should feel more like a dance and less like a march. Begin a pattern together, call and response, this side/that side, and/or individuals add on to the pattern—the sky’s the limit!

This shouldn’t get old as fast as other Brain Breaks because of so much variation. Remember, you can tap, clap, snap, rub hands together, and use your thighs, arms, etc. to make sounds. You could also count in the target language (e.g. “ūnus” <step, step, step> “ūnus” <step, step, step> etc.).

Grading vs. Reporting Scores: Clarification

In the recent sliding scale scheme, Proficiency is given 0% weight at the start of the year. This doesn’t mean that students see “0” in the gradebook. What this means is that their 95, for example (which they see in the gradebook), holds 0% weight because the sliding scale places all of the weight on DEA for first quarter in order to set expectations and establish routines. By the fourth quarter, 100% of the weight is on Proficiency, and whenever possible, we manually change the entire course grade to that final Proficiency number/letter so nothing else averages throughout the year.

NTPRS 2016: More Changes, More Thoughts

After attending iFLT, I spent another week in Reno at NTPRS. While iFLT offered more opportunities to observe teachers teaching students, NTPRS offered more opportunities to actually BE a student for those of us in the Experienced track. I appreciated the short demos that most presenters gave, even when the workshops were not titled “___ language demo.” There are some game changers here that warrant their own posts (e.g. embedded readings straight from the source, Michele Whaley), but I have much else to report on. Like last week’s iFLT post, this one includes more of what I intend to think about and/or change for 2016-17. They’re organized by presenter:

Continue reading

Why “do you understand?” is pointless to ask…

Language teachers usually ask this when something indicates that a student didn’t understand (e.g. a verbal “huh?” or a non-verbal deer-in-the-headlights expression). If this event has already happened, asking the question serves no purpose. In fact, it might even make the matter worse by putting the student on the spot. The student will likely answer “yes, I understand” just to get their teacher to move on to someone else. Here are some comprehension check alternatives:

1) Did I just say/ask X?
2) Did I just say/ask X or Y?
3) I just said/asked ____.
4) What did I just say/ask? -or- Who can tell me what I just said/asked?

The alternatives above are arranged by questioning level from low to high (i.e. yes/no, either/or, fill-in-blank, open-ended). The questions could certainly be asked in the target language, but one popular strategy is to ask, in English/native language, “what did I just say/ask?” to a so-called barometer student—the one with the slowest processing speed. This popular strategy is interesting because that kind of question is technically harder to answer than “did I just say/ask X?” It’s probably a non-issue because we’re dealing with the native language, but for the sake of variety, or if you find that your barometer students are struggling, you could start asking those lower-level comprehension checks in English/native language as well as the classic “what did I just say/ask?”

Grading Scheme: DEA & Proficiency

**See this post for all other grading schemes**

Here’s a new idea inspired by advice I was giving on various DEA and Proficiency grading weights. In other posts, I’ve written how my DEA weight has been anywhere from 0% to 50% of the grade. You could also try this sliding scale throughout the year…

Quarter 1
DEA = 100%
Proficiency = 0%

Quarter 2
DEA = 50%
Proficiency = 50%

Quarter 3
DEA = 10%
Proficiency = 90%

Quarter 4
DEA = 0%
Proficiency = 100%

A grading scheme like this would establish very clear expectations of how important it is to exhibit behaviors and routines that lead to language acquisition in class (e.g. Look, Listen, Ask). This would work best if you have the admin support to manually override the final grade with just one Proficiency grade from Quarter 4, as suggested in other iterations of my grading systems. Why? We don’t reaaaaally want the 4 quarters to be averaged, but if they are it’s not the end of the world. This kind of grade is far more forgiving so the focus can be on input and not assessments.
N.B. Proficiency is given 0% weight at the start of the year. This doesn’t mean that students see “0” in the gradebook. What this means is that their 95, which they see in the gradebook, holds 0% weight because in the sliding scale scheme we’ve placed all 100% weight on DEA for first quarter in order to set expectations and establish routines.
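The sliding scale boils down to a simple weighted average per quarter. Here’s a minimal sketch of the arithmetic (the function name and example scores are illustrative, not from the post):

```python
# Sliding-scale weights per quarter: (DEA weight, Proficiency weight)
WEIGHTS = {
    1: (1.0, 0.0),   # Quarter 1: all DEA
    2: (0.5, 0.5),   # Quarter 2: even split
    3: (0.1, 0.9),   # Quarter 3: mostly Proficiency
    4: (0.0, 1.0),   # Quarter 4: all Proficiency
}

def quarter_grade(quarter, dea, proficiency):
    """Weighted quarter grade under the sliding scale."""
    dea_w, prof_w = WEIGHTS[quarter]
    return dea_w * dea + prof_w * proficiency

# Example: a student with DEA 98 and Proficiency 95 all year.
# Their gradebook number shifts from reflecting behaviors/routines
# toward reflecting proficiency as the year goes on.
grades = {q: quarter_grade(q, 98, 95) for q in (1, 2, 3, 4)}
# With admin support, the final course grade is then manually
# overridden to the Quarter 4 Proficiency number alone, so the
# four quarters are never averaged.
```

This makes the N.B. concrete: in Quarter 1 the Proficiency score of 95 is still visible, but multiplying it by a weight of 0.0 means only the DEA score counts toward the quarter grade.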