“He got one out of three!” said Phil.
“Wow! Can you believe that?!” responded Ann.
Did the “He” in this short story experience success or failure? Context makes all the difference in the world, doesn’t it?
I can imagine one context: a teacher on a team is reviewing assessment data, and Phil announces that a student “got one out of three.” The tone could be disappointment and disbelief, indicating that Phil thought the student had more command of what had been assessed. His teammate, Ann, knowing how hard the team has been working on the lesson study and assessment, echoes Phil’s consternation. In fact, I’ve heard just such a conversation.
I can also imagine a second context: a young boy relatively new to baseball is talking to his mom about a player who got a hit in one out of every three at-bats during a season, as the boy figures out what a batting average of .333 means versus an average of .250. In this context, the exclamations indicate wild excitement at reviewing the success of the young boy’s friend who made the All-Star team. The mom reflects the excitement with a big smile on her face, saying, “Can you believe that?!” In fact, I’ve heard just such a conversation.
As schools examine and employ strategies like project-based learning and design thinking, I believe the stories above can be catalysts for talking about quantitative feedback in context. Why is the same fraction, the same decimal, called “failure” in one context and “success” (great success!) in another? Could it be that many of us have a “movie in our mind” playing, one that shapes our beliefs about what it means to get one out of three based on experience with traditional quizzes and formative assessments? Could it be that we have come to assume that the content and skills on such assessments should be evaluated in such a way that only 70% and above would be considered “passing”? Considering an ed-psych concept like Vygotsky’s ZPD (zone of proximal development) might lead us to believe that the scaffolding and instruction are misaligned with the student’s learning. In context one, many might view one out of three as a problem.
But in the context of baseball, 33% means something very different. It involves a mental movie that tells us one out of three is grounds for Hall of Fame induction if the player can sustain it over a career. Why is 33% so different in this context? Could it be that the high-intensity challenge of standing face-to-face with a pitcher throwing serious heat causes us to shift our expectations and see 33% from an entirely new perspective? In context two, many might view one out of three as a celebration.
When we in schools design project-based learning and design-thinking exercises, how might their assessment be informed by the contrasting contexts of taking a quiz versus standing at bat? Are we putting new wine into old wineskins (please forgive the mix of metaphors) when we apply traditional grading practices and certain quantitative measures to richer, more intensive contexts that refuse to be assessed with the mindsets that have historically been applied in the classroom?
How might we be more purposeful and intentional about the interpretation and context of mathematical feedback?
About 14 months ago, I counseled a group of four boys who said to a colleague and me that they had failed.
“Why do you think you’ve failed, guys?”
“Well, Mr. Adams, we only got 2 out of 10 – 20%. In school, 20% is seriously failing!”
“But in your case, through your project, you helped 2 out of 10 unemployed human beings get a job! In your case, your point of view on 20% might need to shift a bit. While 20% on a quiz or a test might have indicated real disappointments and ‘disasters’ to you in the past, a 20% employment-bump statistic in your job-fair project could be seen as a wildly successful outcome. It’s more like a batting average than a vocab quiz. That’s how Ms. G and I see it. You positively changed 2 people’s lives this week. Your ‘20%’ will cause ripples that will send significantly positive waves throughout that community.”
When we in schools apply quantitative measures – 100-point scales, 4-point Guskey scales, whatever kind of scales – I believe we need to do so thoughtfully and carefully. We need to be proactive about our strategic communications surrounding these assessment measures. Students, teachers, parents – we all bring existing mental movies with us into the school setting.
Even if we don’t apply numerical measures – we did not do so in Synergy in the case of the food-desert, job-fair project – we must be aware of the mental movies and previous experiences that students bring with them into these contexts of project-based learning and design thinking. Those four boys did not receive any kind of “final grade” on that project (our course was non-graded, but heavily assessed), yet they applied previous context to a new situation and drew some profound conclusions about their perceived success. It was a powerful learning moment for me, one that has likely taken me the entire 14 months to fully process.
During the past few years, as I’ve consulted with a number of schools, more than a few are applying relatively traditional grading practices to the assessment of skill sets and dispositions. For example, on a report card or progress report, one might find a column or row labeled “Collaboration” and another labeled “Critical Thinking.” Next to the categories one might find an “82” or a “2 on a four-point scale.” One might also see a “B-” in the scoring cell. Or one might see initials like “PG” – “Progressing.”
I realize I am telling a very incomplete story here. I imagine some readers writing to me in the comments or email or Twitter and saying, “Bo, you’re missing the whole point! High-quality PBL shouldn’t even be getting a quantitative measure. It should be performance-task assessed with only narrative, negotiated feedback. No numbers at all! What’s wrong with you?!” With this post, I really mean to provide a catalyst for thinking and doing with those readers and schools who ARE trying to marry quantitative-assessment measures with high-quality PBL and DT. I, too, have serious questions about the “Why?,” and I am also deeply interested in the “How?” if a school simply will not consider non-numerical assessment reporting, even for certain courses, strands, projects, or assignments.
Are the challenges we are curating or creating causing us to think deeply about their nature relative to assessment? Are we orchestrating experiences more like the intensive matchup between a super pitcher and a batter – ones in which the quantitative measures we apply communicate All-Star results at “33%”? Or are we putting new wine into old wineskins – facilitating experiences that challenge kids so slightly that it’s assured most will “pass,” or leading students to view their Herculean efforts as failures because we’ve neglected to help everyone involved reconceptualize what “one out of three” might really mean in our context?