As I wrote about yesterday, our Intro Bio exams now emphasize higher order thinking skills more than they used to. I think this is really important, and that the learning gains students achieve during the semester are more likely to stick long-term as a result.
It does lead to some interesting challenges, though. Writing higher order questions is hard — perhaps especially so in ecology. In particular, it can be very hard to come up with several wrong answers that are definitely wrong and can be identified as such by a student who knows and understands the material, but that aren’t trivially wrong. In Software Carpentry instructor training this week, I learned the term “plausible distractors with diagnostic power”. When writing these higher order questions, it can be hard to come up with enough plausible distractors, and really hard to come up with ones with diagnostic power.
As an example: I wanted to write a question last year where I altered the tilt of the Earth on its axis, to see if students could link that with effects on seasonality. (Axial tilt is the reason for the season!) A simple recall question on this topic would be really easy to write. But I wanted to test higher order thinking, so I thought I would write a question where I made the tilt of the Earth more pronounced (because, hey, I can do that on paper!) and see if students could figure out that summers should be hotter and winters colder. But I had a hard time coming up with good distractor answers. For example, I wanted to write a wrong answer related to changing the location of biomes, since that seemed like a distractor that could get at a common source of confusion. But then I realized it seemed plausible that tilting the Earth differently would alter Hadley cells and therefore alter the locations of biomes. After failing to come up with four (or even three – all hail “none of the above”!) plausible but definitely wrong answers, I gave up on the question.
Lest you think I’m over-thinking how my students will think through the question, consider this example from an exam last fall:
When I wrote this question, I thought it was a good way of testing whether students knew what net primary productivity really is, and whether they could combine that with information on which biome would have the highest NPP. But, based on approximately 953 student questions* during the exam, it became clear that some students who understood the concept were thinking themselves in circles: the rain in the rainforest would put out the fire, and maybe the deserts would burn the longest because they’re hot and dry, etc. We had to update the question during the exam to tell students that we were setting the fires in a controlled lab environment (wearing proper personal protective equipment, of course).
Trying to prevent students from over-thinking a question or going down the wrong path sometimes leads to interesting scenarios: we are, for the most part, obsessed with making sure every last detail on an exam is correct, but then we completely make something up in order to make the question tractable.
A great example of this is a short-answer question from last fall. The question started out with this intro blurb:
The question is based on this study (and the image is a modification of one in that paper). I wanted to make sure I had all the info in the opening blurb correct, and even wrote a friend who works on African grasslands to make sure my summary was accurate. After giving them that background, we asked some relatively straightforward questions to try to scaffold them towards a harder question:
Things got more challenging with part C, where we asked them:
(This is clearly a harder question — we deemed it at Bloom’s level 4 — and is a pretty challenging question to ask freshmen.)
I then wanted them to make a prediction across multiple trophic levels, so I planned to vary the amount of predation by top predators and ask them to predict what would happen to the thorny plants. In the actual study, predation risk for the herbivores depends on the amount of woody cover. I originally wrote the question with varying woody cover, asking the students to predict thorny plant density in habitats with high vs. low woody cover. But it seemed likely that students would get confused by plants both controlling the amount of predation on herbivores and responding to herbivory. So, my colleague Cindee Giffen came up with the excellent idea of introducing basking rocks, which would allow us to vary predation on its own. I thought this was brilliant, though we were worried that some students might not know what a basking rock was. So, I added in a picture of lions with manes blowing in the breeze, basking on a rock, and the question became:
I think it was a good question, but I still find it amusing that we were, for the most part, completely obsessed with making sure everything on the exam was accurate, but then just invented basking rocks for the sake of getting a tractable question.
All of which is to say: I think it’s important to write questions that challenge students to think at higher levels. But it’s not easy!
* This might be a slight overestimate. But it felt like this during the exam.