A fond memory from my childhood helps me understand the ways we observe human behavior and make attributions. It's also one of the more interesting Star Trek tidbits, whether you're a fan or not.
When I was a child, I was fortunate to have a fairly regular regimen of family dinners. My Mom and Dad would come home from work, we'd all sit down to dinner together, and the television cycled through the same programs. Reruns of the original Star Trek (which my Dad obsessively recorded on our VCR until he had every episode) came first, followed by Jeopardy and then Wheel of Fortune. This was, of course, a time when you had to actually be in front of your television to watch things, and when you used a publication called "TV Guide" to determine when to do so.
Because of these communal experiences, there are many tiny moments that have just become a part of my family's micro-culture. Several words or phrases have an entire mythos, containing a mix of wisdom and humor that is just a part of my story. Perhaps one of my favorites, especially when integrated with my professional life, is the Kobayashi Maru.
What is the Kobayashi Maru?
While the tale varies somewhat between the original Star Trek series (1966-1969) and later films, books, and TV shows, the basic gist is this: The Kobayashi Maru is a simulation that Star Fleet cadets complete as part of their training, whereby they encounter a distressed vessel - the Kobayashi Maru - and seek to provide assistance.
Each of the cadets believes that the scenario is designed to test their mettle as an officer. They are attempting to demonstrate that they can correctly maneuver the ship, manage the crew, or counter enemy attacks as a commanding officer should. However, the entire point of the scenario is that, no matter what the cadet does, they will fail. As Captain Kirk points out to a cadet frustrated by the test, "A no-win situation is a possibility every commander may face. Has that never occurred to you?" Thus, rather than observing their technical skill, Star Fleet is looking to observe how students respond in such a situation.
As such, among my family, we don't use phrases like "Catch-22" or "no-win scenario"; we say, "This is a real Kobayashi Maru."
As I began to learn about assessment and measurement, I reflected on the Kobayashi Maru and realized that it demonstrated several fascinating points.
First, there's a clear distinction that most people fail to make when it comes to assessment, and that's tasks vs. scoring. Many people bemoan selected response, multiple choice, or "standardized" tests. (The term "standardized" doesn't actually refer to selected response tests, but I won't go down this rabbit-hole right now.) They note that answering this type of question doesn't capture the depth and reality of learning, and that we should instead favor more performance-based assessments, such as group projects, writing samples, or presentations.
The problem is that the task - what the student is doing to demonstrate learning - is not the assessment. It's not until we evaluate that performance that the assessment has been made. If, at this point, we don't have a sound, well-reasoned, and - dare I say it - STANDARDIZED process for evaluating the performance, our entire assessment loses meaning.
Consider the Kobayashi Maru. No one* passes the scenario. Everyone fails, at least in terms of achieving the perceived goal. Yet the behavior of interest is how they respond to this failure. What sort of rubric do you imagine they'd use if this were an actual scenario? How many raters would they need to score cadets reliably? It's not the task but the scoring that makes assessment reliable, valid, and meaningful. This is why the Kobayashi Maru is fascinating as a piece of fiction, but would be incredibly difficult to enact in reality.
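The rater-reliability question can be made concrete. A minimal sketch, with entirely hypothetical data: suppose two raters each score the same ten cadets' responses to the scenario on a 3-point rubric. Cohen's kappa, a standard statistic for this situation, estimates how much the raters agree beyond what chance alone would produce.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters' categorical scores, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of cadets the raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance overlap of each rater's marginal distribution.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical rubric scores (1 = poor, 2 = adequate, 3 = strong) from two
# raters observing the same ten cadets respond to the no-win scenario.
rater_1 = [3, 2, 3, 1, 2, 2, 3, 1, 2, 3]
rater_2 = [3, 2, 2, 1, 2, 3, 3, 1, 2, 3]
print(cohens_kappa(rater_1, rater_2))
```

These raters agree on 8 of 10 cadets, but kappa comes out noticeably lower (about 0.69) because some of that agreement is what chance would yield anyway; values below roughly 0.6 are commonly treated as inadequate for high-stakes decisions, which is why scoring, not the task, is where the hard assessment work lives.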
The second assessment point concerns behavioral observation and inference. This was also demonstrated to me when I was working in R&D at ETS and learned about a fascinating research project. A team had taken a platform used for math assessment, in which students interacted with actual students as well as simulated instructors and peers. The idea was to provide a more engaged mode of assessment. But at some point, someone thought to look not at students' performance on math problems, but at the ways in which they engaged with the other students and the simulated instructors and peers. Same task, but now used to assess collaboration rather than math skills.
This study reinforces the task/scoring distinction, but it also showed me how we can draw multiple inferences from the same performance. Think about observing a presentation. On one hand, we might assess someone's mastery of the content they're presenting. At the same time, we're assessing their presentation skills. When questions are asked, we might also assess their ability to engage with the audience, present information in different ways, or demonstrate mastery of content outside of what they'd intended to present.
This to me has always been the "undiscovered country" of noncognitive skills. At present, despite lots of research, we still don't have a great way to determine competency. We can use self-report, Likert-type measures** for a lot of purposes (e.g., diagnostic feedback, prediction of future behavior, aggregate analysis), but we don't have a great way to certify whether or not someone has a particular skill.
The Kobayashi Maru shows what such an assessment might look like. Rather than observing response to failure, how could we put students into a scenario and observe factors like their growth mindset, persistence, sense of belonging, etc.? Moreover, how might we score such a task in a reliable and meaningful way? Sorry, no great answers just yet, but I'm working on it.
What does this teach us about Growth Mindset?
As I was preparing this post and reflecting on the inferences we make - particularly multiple inferences from the same task - I couldn't help but think about growth mindset. Not the growth mindset of students, mind you, but of faculty and teachers.
How often does someone observe a student's inability to complete an assignment or assessment and attribute that performance solely to the student? How often is that a "fixed mindset" attribution?
I recall working on a college campus, and on a Friday afternoon, I remarked that I was preparing to visit my parents in Pennsylvania for the weekend. One student asked, "Is Pennsylvania a state?" Immediately, I was taken aback. "Flabbergasted" would have perhaps been a better description. How could this student not know that Pennsylvania (which was only about 300 miles away, mind you) was a STATE?!?
But then I quickly realized several things. This student had been raised by a family (I happened to know her mother and father, who both worked at the university), had (presumably) graduated from high school, and had been admitted to college. I now reframed the question to myself as, "How could all of these institutions allow her to make it to this point without telling her that Pennsylvania is a state?!"
I challenge us to all rethink the observations we make and the inferences we draw about students. When a student enters college and has limited literacy or numeracy, rather than indicting the student and assuming they're academically challenged, we should acknowledge that they are a product of their experiences. Perhaps they've simply lacked the experiences that most effectively help them learn math and English. After all, isn't that what college is supposed to do - help students learn?
In the end, the Kobayashi Maru isn't about passing and failing. It isn't about deeming students ready or not. It's about observing how they perform to determine whether they're ready to lead and, if they're not, how they can be trained to do so. This is perhaps the final lesson from Captain Kirk, as he takes the moment following the scenario to talk to the cadet, change their thinking, and use assessment as a moment of learning. It's just one of the many ways in which we should all seek to be a little more like him.
*I feel obliged to mention that Captain Kirk did "beat" the Kobayashi Maru, though it was always a mystery as to whether he successfully navigated the test or cheated.
**I would argue that even more "faking resistant" measures like forced-choice items are still not a demonstration of skill. Situational judgement tests (SJTs) are about the closest thing we have, though these often measure a form of knowledge rather than certifying the possession of a competency.