Formative assessment IS design thinking. #DTk12Chat

Teachers are designers. Whether intentionally or unintentionally, teachers are designers.

Some feel that the word, the title, “designer” is being co-opted by too many industries and sectors and professions. But how could one really deny the essence of teacher as designer?

Teachers design with curriculum, learning environment, instructional methodology, and assessment. Together, these elements create pedagogical design.

Because of the heightened attention that design and design thinking are getting, we know more about how great designers design with the needs of the user clearly at the center of the design. Discovery, ethnography, examination, observation, interview – all of these and more are the tools of great design and design thinking.

For the truly intentional, great teachers, formative assessment is an invaluable tool – a system really – to discern the deepest needs of the user… the “student.” Through purposeful use of formative assessment, great teachers – great pedagogical designers – collect critical information by way of discovery (assessment), ethnography (assessment), examination (assessment), observation (assessment), interview (assessment), etc.

But, for these assessments, these tools of discovery and empathy, to be design-employed, the insights gained must be used to inform and transform the pedagogical design for the improvement of the user experience. Better known as “deep learning.”

If an assessment is merely something at the end of instruction to provide a grade for a paper grade book or digital SIS (student information system), then enormous potential is being wasted, underutilized, undervalued. Assessment, used as a design tool, can form better design for curriculum, instruction, learning environment, assessment, etc. To reach this potential, though, we need to be intentional as designers.

If you are pursuing design thinking at your school, perhaps you are using the d.school model:

  • Empathize, Define, Ideate, Prototype, Test.

Or perhaps you are using the model from Design Thinking for Educators:

  • Discovery, Interpretation, Ideation, Experimentation, Evolution.

At Mount Vernon, we’ve developed our own model of design thinking:

  • DEEP – Discover, Empathize, Experiment, Produce.

Or perhaps you are working to nurture and build innovators, tracking with such work as The Innovator’s DNA and purposefully infusing the known traits of innovation:

  • Observing, Questioning, Experimenting, Networking, Associating.

Among all of these models, and among the practices of the most highly respected designers and design thinkers, empathy lives at the core – through intentional and purposeful discovery, observation, and ethnography – in order to enhance and improve design for the needs of the user.

Assessment – formative assessment – is essential for one to be a design-intentional teacher.

How are you using assessment as a systemic tool for exceptional design? For the user experience? For the learners?

#ItsAboutLearning

Numbers Count: Contextual Assessment and Quantitative Measures in #PBL #DTk12

“He got one out of three!” said Phil.

“Wow! Can you believe that?!” responded Ann.

Did the “He” in this short story experience success or failure? Context makes all the difference in the world, doesn’t it?

I can imagine one context: A teacher on a team is reviewing assessment data, and Phil announces to his team that a student “got one out of three.” The tone could be disappointment and disbelief, indicating that Phil thought the student had more command over what had been assessed. The teammate, Ann, knowing how hard the team has been working on the lesson study and assessment, echoes Phil’s consternation. In fact, I’ve heard just such a conversation.

I can also imagine a second context: A young boy relatively new to baseball is talking to his mom about a player hitting one out of three at-bats during a season, as the boy figures out what batting averages of .333 mean versus averages of .250. In this context, the exclamations indicate wild excitement at reviewing the success of the young boy’s friend who made the All-Star team. The mom is reflecting the excitement with a big smile on her face, saying, “Can you believe that?!” In fact, I’ve heard just such a conversation.

As schools examine and employ strategies like project-based learning and design thinking, I believe the stories above can be catalysts for talking about quantitative feedback in context. Why is it that the same fraction and decimal are called “failure” in one context and “success” (great success!) in another? Could it be that many of us have a “movie in our mind” playing – one that shapes our beliefs about what it means to get a one out of three based on experience with traditional quizzes or formative assessments? Could it be that we have come to assume that the content and skills on such assessments should be evaluated in such a way that only 70% and above would be considered “passing”? Considering an ed psych concept like Vygotsky’s ZPD (zone of proximal development) might lead us to believe that the scaffolding and instruction are misaligned with the student’s learning. In context one, many might view one out of three as a problem.

But in the context of baseball, a 33% means something very different. It involves a mental movie that tells us that one out of three is grounds for Hall of Fame induction if the player can do that consistently over a career. Why is 33% so different in this context? Could it be that the high-quality activity of being face-to-face with a pitcher throwing serious heat causes us to shift our expectations and see 33% from an entirely new perspective and point of view? In context two, many might view one out of three as a celebration.
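The contrast between the two mental movies can be made concrete in a few lines of code. This is only an illustrative sketch – the function names, the 70% passing threshold, and the .300 “elite” cutoff are assumptions chosen to mirror the quiz and baseball contexts above, not any real grading system:

```python
# Illustrative sketch: the same raw score (1 hit out of 3 attempts)
# read through two different assessment "contexts."
# Thresholds (70% passing, .300 elite) are assumptions for illustration.

def as_quiz_grade(hits, attempts):
    """Traditional quiz lens: below 70% reads as failing."""
    pct = 100 * hits / attempts
    return pct, ("pass" if pct >= 70 else "fail")

def as_batting_average(hits, attempts):
    """Baseball lens: .300+ sustained over a career is elite hitting."""
    avg = hits / attempts
    return avg, ("elite" if avg >= 0.300 else "ordinary")

pct, verdict = as_quiz_grade(1, 3)
print(f"Quiz context: {pct:.0f}% -> {verdict}")      # prints: Quiz context: 33% -> fail

avg, verdict = as_batting_average(1, 3)
print(f"Baseball context: {avg:.3f} -> {verdict}")   # prints: Baseball context: 0.333 -> elite
```

Same numerator, same denominator – the verdict flips entirely depending on which interpretive frame the number is dropped into.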

As schools, when we design project-based learning and design-thinking exercises, how might those designs be informed, in terms of assessment, by the contrasting contexts of taking a quiz versus standing at bat? Are we putting new wine into old wine skins (please forgive the mix of metaphors) when we apply traditional grading practices and certain quantitative measures to more high-quality, intensive contexts that refuse to be assessed with the same mindsets that have historically been applied in the classroom?

How might we be more purposeful and intentional about the interpretation and context of mathematical feedback?

About 14 months ago, I counseled a group of four boys who said to a colleague and me that they had failed.

“Why do you think you’ve failed, guys?”

“Well, Mr. Adams, we only got 2 out of 10 – 20%. In school, 20% is seriously failing!”

“But in your case, through your project, you helped 2 out of 10 unemployed human beings get a job! In your case, your point of view of 20% might need to shift a bit. While 20% on a quiz or a test might have indicated real disappointments and ‘disasters’ to you in the past, a 20% employment-bump statistic in your job-fair project can be seen as a wildly successful outcome. It’s more like a batting average than a vocab quiz. That’s how Ms. G and I see it. You positively changed 2 people’s lives this week. Your ‘20%’ will cause ripples that will send significantly positive waves throughout that community.”

When we in schools apply quantitative measures – 100-point scales, 4-point Guskey scales, whatever kind of scales – I believe we need to do so very thoughtfully and carefully. We need to be proactive about our strategic communications surrounding these assessment measures. Students, teachers, parents – we all bring existing mental movies with us into the school setting.

Even if we don’t apply numerical measures – we did not do so in Synergy in the case of the food-desert, job-fair project – we must be aware of the mental movies and previous experiences that students bring with them to these contexts of project-based learning and design thinking. Those four boys did not receive any kind of “final grade” on that project (our course was non-graded, but heavily assessed), yet they applied previous context to a new situation and drew some profound conclusions about their perceived success. It was a powerful learning moment for me. One that has likely taken me the entire 14 months to fully process.

During the past few years, as I’ve consulted with a number of schools, more than a few are applying relatively traditional grading practices to the assessment of skill sets and dispositions. For example, on a report card or progress report, one might find a column or row labeled “Collaboration” and another labeled “Critical Thinking.” Next to the categories one might find an “82” or a “2 on a four-point scale.” One might also see a “B-” in the scoring cell. Or one might see initials like “PG” – “Progressing.”

I realize I am telling a very incomplete story here. I imagine some readers writing to me in the comments or email or Twitter and saying, “Bo, you’re missing the whole point! High-quality PBL shouldn’t even be getting a quantitative measure. It should be performance-task assessed with only narrative, negotiated feedback. No numbers at all! What’s wrong with you?!” With this post, I really mean to provide a catalyst for thinking and doing with those readers and schools who ARE trying to marry quantitative-assessment measures with high-quality PBL and DT. I, too, have serious questions about the “Why?” and I am also deeply interested in the “How?” if a school just will not consider non-numerical assessment reporting, even for certain courses, strands, projects, assignments, etc.

Are the challenges we are curating or creating causing us to think deeply about the nature of the challenges relative to assessment? Are we orchestrating experiences that are more like the intensive matchup between a super pitcher and a batter – ones in which the quantitative measures we apply communicate All-Star results at “33%”? Or are we trying to place new wine into old wine skins and facilitating experiences that challenge kids so slightly that it’s assured most will “pass,” or view their Herculean efforts as failure because we’ve neglected to help everyone involved reconceptualize and pivot perspectives on what “one out of three” might really mean in our context?

2012 in review – the auto-prepared annual report of It’s About Learning

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

4,329 films were submitted to the 2012 Cannes Film Festival. This blog had 23,000 views in 2012. If each view were a film, this blog would power 5 Film Festivals.

Click here to see the complete report.

[Note: I’ve read a few “year in review” self-analysis posts from other bloggers that I follow. However, I have resisted doing the same exercise for this blog. Then, yesterday, I received an email from WordPress explaining that their “stats helper monkeys prepared a 2012 annual report for this blog.” At the conclusion of the annual report was an option to make the stats public and add them as a blog post. I’m just trying out this feature.]

[Note #2: Here’s the automated annual report for PLC-Facilitators: Learning is the Focus, a blog that @jgough and I maintained for our work with the PLC facilitators at The Westminster Schools.]

Self-Awareness & School Change: @GrantLichtman #EdJourney, episode 5, week 4

From Grant Lichtman’s chapter in The Falconer entitled “Step 2: The Boundaries of Subjectivity and Objectivity.”

Sun Tzu says, “So it is said that if you know others and know yourself, you will not be imperiled in a hundred battles; if you do not know others but know yourself, you will win one and lose one; if you do not know others and do not know yourself, you will be imperiled in every single battle.”

And…

Acceptance that something is possible opens a lot of doors to creative thinking.

Most recently, Grant’s #EdJourney blog posts on The Learning Pond have touched repeatedly on schools that voluntarily invite regular reviews from visiting colleagues – schools that practice vigorous feedback looping and self-awareness. On another thread, Grant has been reporting on a series of schools that are rethinking and/or abandoning the AP (Advanced Placement) tests. Our week 4, episode 5 video-interview below concentrates on these stories…

A dashboard for the 7 C’s – metrics for pedagogical master planning

I’m just playing with strands of ideas here…imagining one possible weave or braid.

Strand 1: 10,000 Hours

In Malcolm Gladwell’s Outliers, as well as in earlier work by Howard Gardner, the 10,000-hour rule is posited. Essentially, to become expert, or deeply disciplined and proficient, one typically must commit to at least 10,000 hours of dedicated practice. Hold that thought for a minute…like you’re holding one strand between your fingers.

Strand 2: Tracking Time

Not too long ago, I wrote about tracking my time at Unboundary, and I imagined what a similar practice of tracking time might be like in schools. Now, hold this second strand between another set of mental fingers.

Strand 3: The 7 C’s

In Trilling and Fadel’s 21st Century Skills: Learning for Life in Our Times, the authors advocate for the traditional 3 R’s (reading, writing, and arithmetic), as well as 7 C’s:

  1. Critical thinking and problem solving
  2. Communications, information, and media literacy
  3. Collaboration, teamwork, and leadership
  4. Creativity and innovation
  5. Computing and ICT literacy
  6. Career and learning self-reliance
  7. Cross-cultural understanding

Now, we can braid and weave.

Do we know how much time our students – the individual students – spend engaged in these seven activities? Imagine a parent asking me, “Bo, I’ve been reading and listening about 21st C education. Can you tell me how much time your students spend in the 7 C’s? Can you explain some examples of how they might engage in the 7 C’s?”

I think I could knock the second question out of the park. I would totally strike out on the first question.

What if we had some sort of “dashboard” that could show us how much time our students are spending in these various C’s? Yes, you know…like the dashboards in our cars.

In our cars, the dashboards give us real-time feedback on speed, oil pressure, engine temperature, fuel remaining, battery voltage, etc. In 2012, couldn’t we have some sort of tech-enabled dashboard for how much time students are actually getting to immerse themselves in and practice the 7 C’s? It’s so easy now for me to examine how I spend my time at work by using the time tracker. I can see what projects I am working on, I can review what and how I am researching, and I can understand where I might need to rebalance my time allotments.

Wouldn’t it be insightful and informative to know, even if just for one day or one week or one month, how much time a student…

  • sits in lecture passively listening
  • practices communicating with an authentic audience
  • engages in collaborative problem-solving for a real-world problem (like a school’s recycling versus trash quandary)
  • participates in 3D printer activity to create something useful via Maker methods

By looking at the dashboard, I could see how close my son PJ is getting to 10,000 hours in “Creativity and innovation.” I could review how much time he is getting to engage in “Communications, information, and media literacy.” We could make some great, informed adjustments with this information. Just like we know when to stop for gas, when to adjust our speed, when to add oil to our car.

As a school, we could examine aggregates and grouped data. We could look at departments to see if one department contributes more to certain C’s and another department contributes more to a different subset of C’s. We could see our bright spots and our areas for growth.

There could even be an app for that!
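The core of such an app – aggregating logged learning time per student per C and reporting progress toward the 10,000-hour mark – could be sketched in a few lines. Everything here is hypothetical: the log entries, the student name, and the category labels (borrowed from the 7 C’s list above) are invented for illustration, not drawn from any real tracking system:

```python
from collections import defaultdict

# Hypothetical sketch of a "7 C's dashboard": sum logged hours per
# (student, C) pair, the way a car dashboard sums fuel consumption.
# Log entries below are invented sample data for illustration only.

TARGET_HOURS = 10_000  # the Gladwell/Gardner expertise threshold

log = [
    ("PJ", "Creativity and innovation", 2.0),
    ("PJ", "Collaboration, teamwork, and leadership", 1.5),
    ("PJ", "Creativity and innovation", 0.5),
]

def dashboard(entries):
    """Aggregate total hours per (student, C) from raw log entries."""
    totals = defaultdict(float)
    for student, c, hours in entries:
        totals[(student, c)] += hours
    return dict(totals)

# Report each student's progress toward 10,000 hours in each C.
for (student, c), hours in sorted(dashboard(log).items()):
    pct = 100 * hours / TARGET_HOURS
    print(f"{student} | {c}: {hours:.1f} h ({pct:.3f}% of 10,000)")
```

The same aggregation, grouped by department or course instead of by student, would give the school-level view of bright spots and growth areas described above.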

Driving without those gauges and instrument panels on the dashboard could cause a disaster! Using our dashboard makes us better drivers…and helps us get to where we are trying to go with greater success.

Developing and utilizing such tools could really help a school trying to create its finely tuned pedagogical master plan!