The Science Fair and The Troubleshooting Game

It is early February.  E-Rate applications (to help our client schools obtain Federal funding for technology) are due any day now, but I am nevertheless volunteering a bit of time to serve as a judge at a Science Fair.  After all, everyone wants to encourage students’ interests in Science, Technology, Engineering and Mathematics (STEM).

The project I am discussing with the other judges is going to end up being awarded an Honorable Mention.  The student looked at factors influencing plant growth.  Some plants were given water, Miracle-Gro, and sunlight.  Others were given water and sunlight, but no Miracle-Gro.  And so on.  Some were given only water, and the last group was given Clorox bleach instead of water.  The student’s hypothesis was that the plants given water, Miracle-Gro, and sunlight would thrive and that the Clorox plants would die.  There were weekly measurements of plant height and leaf size over several months.  The display was neat, and it was apparent that a computer had been used to prepare a spreadsheet showing all the data, including graphical representations of the effect of each factor.  The key elements of “the scientific method,” as prescribed by the Fair rules, were clearly present.

So why was I troubled by the award of an Honorable Mention?  To me, an “interesting” science experiment is one where you offer a hypothesis that stretches known science, at least a little bit, and where your predicted outcome flies in the face of conventional wisdom.  For example, if the Clorox plants had flourished, that would be interesting!  The scoring rules limited our options, but my position was that this project showed no imagination, no questioning of textbook knowledge, no risk-taking, no innovation, no improvisation.  Yes, it did show the ability to follow directions, the ability to recite the steps of the scientific method, and an all-too-rare work ethic.  I found it troubling that, from the moment this project was conceived, it was a slam dunk; student, parents and teacher could be confident it would earn a good score.  I worry that, too often, such well-intended activities merely enable students to learn “about the scientific method”; they do not necessarily enable students to experience what “being a scientist” is all about.

In our summer camps and after school programs, we play a different game, one that involves troubleshooting with technology.  Teams of students are each allowed to insert one or more faults into a computer system or network.  (We start with one hardware problem and later progress to software problems, network problems, and situations where there are multiple faults.)  Before they are allowed to insert the fault, they must make a prediction as to what the effect will be on system operation.  Then they must prepare a “Trouble Ticket” that makes a clear distinction between observed symptoms and underlying causes.  Another team must then diagnose and repair the fault.  There are many variations of The Troubleshooting Game.  Sometimes Instructors set up the faults to illustrate particular issues.  Sometimes it is a Help Desk scenario, where students must troubleshoot over the telephone, without being able to see the computer screen or other faulty device.  Often it is a timed race.  The grand finale might involve students inserting a complex cluster of faults, after which it is the Instructor’s turn to face the challenge, with an honest risk of failure.  It isn’t “rigged.”

The Troubleshooting Game of course provides practice for skills ranging from “make sure everything is plugged in” to “turn it off, count to ten, and turn it back on” to “listen for the secret phrase.”  However, when these simpler “triage” tricks do not solve the problem, a more careful reasoning process must be used.  For example, suppose there are two identical computer systems.  One is working normally, but the screen on the second is dark.  What might be wrong?  To formulate hypotheses about the fault, we must reason backwards from symptoms to possible underlying causes.  Perhaps the brightness was turned down to zero?  Perhaps the monitor is defective?  Perhaps the video card has failed (or is just not seated correctly in the slot)?  Is there an experiment we could devise that would provide evidence supporting some hypotheses but not others?  What about swapping the monitor from the failing computer with the “known good” monitor from the working computer?  If the symptom “follows” the monitor, what should we conclude?  These student-devised experiments must be improvised on the spot, in collaboration with a team and under time pressure.  Again, the activity is not “rigged” because the “right answer” is not known in advance.

Now let’s suppose the “known good” monitor works correctly on the formerly dark computer.  Does this prove that the problem was a bad monitor?  Let’s check!  Let’s try putting the allegedly bad monitor on the known good computer.  In this case, strangely, it works!  OK, then, let’s try putting the allegedly bad monitor (which has just been shown to work correctly on the good computer) back on the originally dark computer.  Wow!  Now both computers are working fine!  How can this possibly be?  In this case, the cable connection from the monitor to the computer might have been just loose enough to cause the dark screen symptom.  The very process of doing our fault isolation experiments corrected the problem.  (This sort of “observer effect” is reminiscent of, though not the same as, Heisenberg’s Uncertainty Principle.)
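For readers who like to see the logic spelled out, here is a minimal sketch of the swap-experiment reasoning in Python.  The function name, argument names, and outcome labels are my own framing for this illustration; they are not part of the game itself.

```python
# A minimal sketch of the monitor-swap reasoning described above.
# Each argument is True if the screen lit up in that configuration.

def diagnose_dark_screen(good_monitor_on_dark_pc: bool,
                         suspect_monitor_on_good_pc: bool) -> str:
    """Interpret the two swap experiments for a dark-screen symptom."""
    if not good_monitor_on_dark_pc:
        # Even a known good monitor stays dark: suspect the computer
        # side (video card, card seating, power) rather than the monitor.
        return "suspect computer (video card, seating, power)"
    if not suspect_monitor_on_good_pc:
        # The symptom "follows" the monitor: suspect the monitor itself.
        return "suspect monitor"
    # Both configurations work after the swap: the act of reconnecting
    # may itself have fixed the fault, e.g. a cable that was just loose
    # enough to cause the dark screen.
    return "suspect loose connection (corrected by the swap itself)"

# The surprising case from the story: both swaps work.
print(diagnose_dark_screen(good_monitor_on_dark_pc=True,
                           suspect_monitor_on_good_pc=True))
```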

In my view, the experience of students playing our little game feels more like “being a scientist” than at least some of the Science Fair projects I have been asked to judge.  Except for the most trivial or contrived examples, the reasoning involved in troubleshooting corresponds precisely to the steps of the scientific method, as applied to human-designed (rather than nature-designed) artifacts.  Experiments tend to be easily replicated and data tends to be somewhat less ambiguous, which seems fine for students lacking the resources or statistical training to require p < .05 before an “Aha!” moment is allowed.   Students love “The Game” and beg for additional opportunities to play it.

3 thoughts on “The Science Fair and The Troubleshooting Game”

  1. You don't say how old the student doing the plant project was. I think it would be a fine project for an elementary school kid. But I agree that the troubleshooting game sounds awesome.

  2. Point well taken, Carol. It was a while ago, so I don't recall the exact age, but I think it was middle school, probably about seventh grade. Also, I have seen many similar plant growth experiments, at various age levels, over the years, so I have become uniquely uninterested in that topic. I would just like to see an experiment where the outcome is not a foregone conclusion!
