# Lack of Curiosity Killed Schrödinger’s Cat

The grand finale of my modeling lab sequence at the University of the Virgin Islands in spring 2021 was going to be a 3D model of the orbitals of the hydrogen atom. Although often portrayed as tiny balls orbiting a bigger cluster of balls, electron orbitals are in reality far more complex and interesting. Quantum mechanics shows us that they are more like 3D standing spherical waves. Done right, modeling such a phenomenon lines up nicely with the wave modeling we had done all semester long. Unfortunately, exhaustion and lack of time resulted in me settling for the standard “particle in an infinite box” scenario, which is even less interesting than it sounds. It’s basically a vibrating string, representing an electron confined to a particular region of space. The activity itself was to find the “quantum numbers”: the allowable solutions to the wave equation for this scenario.

In short, the answers are literally 1, 2, 3, 4, 5, and so on.

To try to obscure how trivial this “grand” finale was, I started by setting the quantum number that students were allowed to manipulate to 0.1. When demoing how to use my code, I switched the quantum number to pi, both to further obscure the correct answer and to choose something that could plausibly show up in a scientific context. For their actual lab activity, students needed to discover what the allowable quantum numbers were (all integers), how low they went (one), and how high they went (infinity). I figured this would be a fifteen-minute activity at most.
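The check at the heart of the activity can be sketched in a few lines. This is a hypothetical reconstruction, not the original lab code: for a particle in an infinite box of width L, the trial standing wave is sin(nπx/L), and a candidate quantum number n is allowable exactly when the wave vanishes at both walls.

```python
import math

def is_allowed_quantum_number(n, tol=1e-9):
    """Check whether a trial standing wave sin(n * pi * x / L) satisfies
    the boundary conditions for a particle in an infinite box of width L.

    psi(0) = 0 holds automatically; psi(L) = sin(n * pi) must also be 0,
    which happens exactly when n is an integer. (n = 0 also passes this
    purely mathematical test, but gives a wave that is zero everywhere.)
    """
    return abs(math.sin(n * math.pi)) < tol

# Integers pass -- including negative ones, which satisfy the equation
# mathematically even though they aren't physical solutions.
print(is_allowed_quantum_number(3))        # True
print(is_allowed_quantum_number(-2))       # True
print(is_allowed_quantum_number(math.pi))  # False
print(is_allowed_quantum_number(0.1))      # False
```

A student who only tries multiples of their first lucky guess will see nothing but `True` and stop; only trying values *outside* the suspected pattern reveals the full rule.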

Then something interesting happened.

When I went to grade the labs, I found a diverse set of proposed quantum numbers. The quantum numbers (which, again, are all positive integers) were apparently multiples of 2, multiples of 3, multiples of 5, and even multiples of 10. Because part of the lab activity was to tell me every single assumption they had made (and tested) and why, I had everyone’s chain of logic, so I could trace the thought processes that had led my students from their initial guesses to these solutions that were *almost* correct, but not quite.

Students’ chain of logic was fairly simple. They would try a low number, like 2 or 3 or 5. They would see that it worked. So they would try a multiple of that number. And it would work. So they would try a few more multiples of that number. And, again, they would all work. Therefore, logically, the solution was any multiple of whatever each student’s starting number was.

The idea of trying a number that *wasn’t* a part of the pattern they thought they had identified was, apparently, unthinkable.

Of course, this was not universally true. A very few students took the opposite tack and, unsurprisingly, were the students who tended to excel in the class. These students would explicitly try numbers that were *not* part of the pattern they thought they had identified, and they explained that they were doing so to see if they were mistaken in their initial assumption. This led those few students to determine that both even and odd numbers worked, but decimals didn’t. They also discovered that negative numbers worked for the equations I had used in the code, which surprised *me*: I hadn’t checked those numbers beforehand because I knew they weren’t physically possible solutions (which, as the students pointed out, was beside the point; the assignment was only to find numbers that met the mathematical criteria). Only these students, the ones who asked themselves “what if I’m wrong?”, found the full solution to the problem.

These few students had conquered confirmation bias, something most of their classmates had not.

A dependable “confirmation bias trap” is something I’ve been trying to build into my classes for a while, and here I had done it by accident. A central goal of my science classes has been to train students to regularly ask themselves the question “what if I’m wrong?” That question leads you to try different things than the more typical question, “how do I show I’m right?” I emphasize the former question at the beginning of my classes, but I have had a hard time balancing the time spent exploring the consequences of asking “what if I’m wrong?” against the need to cover a certain amount of material in the time I have. As a result, my students continue to approach their assignments by asking themselves “how do I show I’m right?” and consequently fail to explore the full parameter space of a topic.

The fact that students don’t ask themselves “what if I’m wrong?” shouldn’t be surprising. We’ve been training them all their lives to avoid asking that question by penalizing them for every mistake, especially mistakes made during the learning process. Mostly this is for our convenience, since it minimizes grading. But as technology allows us to automate much of that rote grading work, we can start reconceptualizing how we structure our assignments and classes to teach students not only to ask themselves “what if I’m wrong?” but also to follow the consequences of doing so. We can train them to ask that question out of sheer habit and eventually acquire the humility that question engenders.

## Notes for Practice

**Confirmation Bias Traps**

**Keep the activity simple.** Many novice science students are not methodical in their exploration of parameter spaces. With only one variable to explore, it should be easy to ask them to detail what they tried and why.

**Ask for details on their thinking.** Students don’t see the thinking process as important in these kinds of activities. Focusing the activity almost exclusively on articulating the thinking process will provide information you can use for further discussion.

**Trap often.** Traps can be created through particular wording or leaning into students’ preconceptions. The goal is to have students reach the wrong conclusion as a result of not fully exploring the parameter space, which then becomes a powerful learning moment.

**Ask students to identify answers they think may be wrong.** This will help students to begin considering potential mistakes as just another tool to use to explore, rather than something to be avoided.