It’s been years since I’ve given a quiz. I know that seems crazy coming from a teacher, but there are just so many other ways to get evidence of learning, like The Monitor Assessment, that I haven’t had to bother with quizzes much at all. When I did give them, they were sneaky ways of reading and rereading. In other words, all my quizzes were input-based. This meant that the learning experience (i.e., of receiving input) took place during the assessment. In the literature, this is known as an UNOBTRUSIVE assessment, whereas an obtrusive one would be when there’s an abrupt stop to input and interaction so testing can occur. This is bad. It literally takes away time from learning, and no one wants (or needs) that. A couple examples of obtrusive assessments would be pulling kids into the hall for some speaking test while who-knows-what is going on in the classroom, or holding a “unit test day” that’s really just 20 minutes of testing, then free time or busywork for those who finish. With unobtrusive input-based assessments, however, the learning (i.e., receiving of input) continues, and it’s not a complete waste of time.
I enjoy not wasting time. Don’t you?
Assessment & Grading is, by far, the most frequent topic I’m asked about, and this year’s National TPRS Conference features 10 workshops on the topic on Thursday and Friday! Based on the descriptions, there’s a mix of proficiency people, skill people, tech-tool people, speaking people, rubric people, and more! I’ll be presenting one of those workshops, and I’ve noticed that my thinking is a little different. I do recommend getting to as many of the 10 as you can, so in case you miss out on mine, here’s a brief look at what I’m about…
I have a very simple approach to assessment because the answer is always RLMTL (i.e., Reading and Listening to More Target Language). That is, there is NO assessment I could give that WOULD NOT result in me providing more input. Therefore, my assessments are input-based, and very brief. In fact, what many consider assessments—for me—are actually just simple quizzes used to report scores (see below).
I prefer to assess students authentically.
Most tricky questions are the misguided product of a teacher thinking they’ve created a valid or rigorous assessment. Validity is when the assessment measures what it’s supposed to measure. This usually means that assessments show that students know what was taught. When it comes to teaching a language, teachers lacking Second Language Acquisition (SLA) training tend to select the wrong thing to be measured (e.g. grammar, cultural facts, etc.). These things usually include tricky details, which lead to tricky questions. Validity then becomes an issue when these teachers use such assessments as evidence that they successfully teach “communicatively” or “for fluency,” when they’re only assessing memory and knowledge about the language system and its speakers. Rigor then muddles things up.
Rigor is not well defined in most school systems, but people (i.e., parents, admin, evaluators, colleagues, etc.) seem confident when they BELIEVE it’s not there. As such, teachers are under pressure to create assessments that seem rigorous, but these assessments just end up being longer (i.e., obtrusive), complex, and downright sneaky. Here’s an example I lifted from a teacher’s assessment. It’s a weak example, but it serves the purpose of discussion: