As Jeremy revealed yesterday when announcing the top papers of the 70s, 80s, and 90s, Hurlbert’s 1984 paper on pseudoreplication is not only the most cited ecology paper of the 1980s, it’s the most cited from that whole window, having been cited 4,157 times to date. 4,157! (My number differs slightly from Jeremy’s; it was cited 11 times between when Jeremy looked up the numbers and when I did this past weekend.)
I first read this paper as a grad student, and it came up often during my training. I view it as a foundational paper that all students should read, and I give it to all of my incoming students. So, when I first got to Georgia Tech, I was a little surprised to realize that it wasn’t routinely assigned; Chamberlin’s “Method of Multiple Working Hypotheses” was the more typically assigned “how to do science” sort of paper among the ecologists. That made me wonder whether there are subfield differences in whether people view this as a seminal paper; there may well be (and, if readers have thoughts on this, I’d love to hear them in the comments), but clearly lots of people view Hurlbert as worth reading and citing!
I also assign this paper to undergraduates in classes where we read a lot of the primary literature. (My upper level courses have very little lecturing, and instead have students reading and discussing 1-2 papers from the primary literature each class period.) I usually spend the first class period going over basics of experimental design, as students need a basic understanding of experimental design to be prepared to read and evaluate the primary literature. For that class, I assign Hurlbert (and sometimes also Chamberlin). Students always really love Hurlbert’s paper – I think in part because he goes into specific examples from the literature (which students usually find a little surprising). Plus, who doesn’t like the idea of demonic intrusions? Well, in theory; no one likes demonic intrusions when they occur in his/her own experiment! (In case you aren’t familiar with the paper (in which case – go read it!!!), Hurlbert recommends “eternal vigilance, exorcism, human sacrifices, etc.” as means of reducing or eliminating the effects of demonic intrusions on experiments. He also cautions that “If you worked in areas inhabited by demons you would be in trouble regardless of the perfection of your experimental designs.” Sad, but true.) More seriously, students also like the paper because they finally start to understand a bit more about proper controls and replication. I’ve had multiple students tell me they wish they had read the paper when they were freshmen, since they felt it would have helped with all their lab coursework.
I like to pair reading Hurlbert with an activity that gets students to think more carefully about experimental design, and also to consider constraints. I think it’s very important to talk about what can be done realistically; otherwise, it’s easy for students to have unrealistic expectations about what can be done, which can lead to discussions getting bogged down in experiment critiques. My favorite activity involves taking part of an episode of MythBusters. The episode focuses on whether cockroaches will inherit the earth after a nuclear holocaust. We first watch just enough to know what the question is. I then have the students work in small groups to design how they would do the experiment, both if they have access to a radiation source and if they do not. The latter is where we get into issues of feasibility – yes, the ideal study in this case directly manipulates radiation, but that might not be possible. How else could this question be studied? I’ve found that students are remarkably creative – and also that none of them go for what I would have done, which would have been a study based in Chernobyl. (Now that I think about it, I wonder if, when I next do this activity, some students will suggest going to Fukushima.) After we discuss their proposed studies, we watch how the MythBusters crew did it and discuss whether what they did was pseudoreplicated. (This takes more discussion than you might initially imagine.) We then watch the rest of the episode (well, the parts that deal with this experiment). In all, I’ve found it a very fun way to discuss experimental design in an accessible way, and students really enjoy it.
I’d be curious to hear what others do. Do you have undergrads read Hurlbert? How do you teach basics of experimental design (in a class that is not explicitly focused on experimental design or statistics)?
Related post:
Hurlbert rips Sokal & Rohlf
Thank you very much for your interesting post. I just talked about experimental design with undergrad students, and I wish I had read this post beforehand.
Via Twitter, Jeramia Ory says “I use their ‘Are redheads less pain tolerant’ episode to teach experimental design to freshman. Extremely flawed, good discussions”.
It’s fun to hear that others use MythBusters to teach experimental design, too!
Although I learned a lot about experimental design during my undergraduate biostatistics courses, I had not read this Hurlbert paper until now. Making students read classic papers was unfortunately a rather unusual way of teaching at my former university… So, thank you for making me aware of this paper! I think it really is a great paper that should rank highly on any list of must-read papers for students of ecology.
Discussing flaws in experimental design using specific examples of published studies is undoubtedly a good way to make students think carefully about these issues. The revelation that even papers in Nature or Science are sometimes fatally flawed is also interesting, even shocking, for students and makes them think… I only hope that they do not conclude that good experimental design is unimportant, since examples of flawed papers might suggest that you do not have to design your experiments carefully to publish a highly cited paper in a top journal.
Yes, I think there’s great pedagogical value in assigning flawed papers. I agree with you that it is a good way to make students think more carefully about what they read. In my discussion-based courses, I usually try to have at least one paper where the results don’t actually support what is said in the abstract. Students find that really, really eye-opening.
I don’t have students read Hurlbert, though I mention it at the end of the pseudoreplication lesson. I make a point of not mentioning the concept until they’ve independently discovered pseudoreplication. What I’ve done in recent years is have the class design an experiment, in small groups, to examine how copper contamination in soil affects the leaf size and reproduction of an endangered plant. They have a single greenhouse with a few tables, and a total of 100 seeds to use (ten from each of ten plants).
Every group shares out their experimental design. Every group has some kind of pseudoreplicated element in their design, often in entirely different ways (for example, in how plants are distributed on the tables in the greenhouse, how the seeds are allocated to treatments, or how the leaves are measured). I don’t tell them it’s pseudoreplicated, but once everyone has introduced their design, I ask each group to evaluate their design relative to the other groups’, identify the best traits of each design, and explain why each feature is better than what they originally planned. With a few leading questions, I can get the groups to infer the pseudoreplication in the designs, and then they explain it to one another. (This usually takes about an hour and a half.) Then I mention that before the 1980s, many professional ecologists made the same error in their experimental designs.
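For anyone who wants to show students why that error matters statistically, here is a minimal simulation sketch (in Python, with made-up numbers rather than anything from the class activity above). The treatment is applied at the level of greenhouse tables, so plants sharing a table are subsamples; treating them as independent replicates inflates the false-positive rate well past the nominal 5%, while analyzing one mean per table does not.

```python
# Pseudoreplication in miniature (hypothetical numbers): treatment is applied
# to whole tables, so plants on the same table share table-level noise and
# are subsamples, not independent replicates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, tables_per_trt, plants_per_table = 2000, 2, 10
sigma_table, sigma_plant = 1.0, 1.0  # table-to-table vs plant-to-plant noise

false_pos_plants = false_pos_tables = 0
for _ in range(n_sims):
    # No true treatment effect: any apparent "signal" is table-level noise.
    table_effects = rng.normal(0, sigma_table, size=2 * tables_per_trt)
    data = table_effects[:, None] + rng.normal(
        0, sigma_plant, size=(2 * tables_per_trt, plants_per_table))
    grp_a, grp_b = data[:tables_per_trt], data[tables_per_trt:]

    # Wrong: every plant treated as an independent replicate.
    false_pos_plants += stats.ttest_ind(grp_a.ravel(), grp_b.ravel()).pvalue < 0.05
    # Right(er): one mean per table, the actual experimental unit.
    false_pos_tables += stats.ttest_ind(grp_a.mean(axis=1), grp_b.mean(axis=1)).pvalue < 0.05

print(f"False-positive rate, plants as replicates: {false_pos_plants / n_sims:.2f}")
print(f"False-positive rate, tables as replicates: {false_pos_tables / n_sims:.2f}")
```

Showing students the two printed rates side by side makes Hurlbert’s point about identifying the experimental unit very concrete.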
I was also thinking about using MythBusters to teach bad experimental design; the episode in which they tried to see whether yawns are contagious was problematic in so many different ways. It’d be good for an animal behavior class.
This is all great stuff. I’ll be teaching intro biostats in the fall. Previous instructors in this course have assigned Hurlbert, and I may continue to do so. But I’m on the lookout for other ideas about how to introduce other common problems with experimental design and statistical analysis.
For instance, there was a good news piece in Nature a couple of years ago on a big kerfuffle over statistical methods in the brain scanning literature. Turns out a lot of the results from this very trendy field are based on undergraduate-level statistical mistakes, like using the data to tell you what hypothesis to test, and then testing that hypothesis on those same data. I have to dig up the link…
And of course there’s this paper, which is totally accessible to undergrads:
https://dynamicecology.wordpress.com/2012/02/16/must-read-paper-how-to-make-any-statistical-test-come-out-significant/
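A quick way to demonstrate that circular-analysis mistake (letting the data pick the hypothesis, then testing it on the same data) is a simulation on pure noise. Here is a minimal sketch in Python; the sample sizes and the number of candidate predictors are illustrative choices of mine, not anything from the brain-scanning literature:

```python
# Double dipping on pure noise: all 20 "predictors" are unrelated to the
# outcome, yet selecting the best-looking one and testing it on the same
# data comes out "significant" far more often than the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_obs, n_predictors = 2000, 30, 20

sig_same = sig_fresh = 0
for _ in range(n_sims):
    y = rng.normal(size=n_obs)
    X = rng.normal(size=(n_obs, n_predictors))  # no real relationships

    # Let the data choose the hypothesis: pick the most correlated predictor.
    corrs = [stats.pearsonr(X[:, j], y)[0] for j in range(n_predictors)]
    best = int(np.argmax(np.abs(corrs)))

    # Circular: test the chosen predictor on the same data.
    sig_same += stats.pearsonr(X[:, best], y)[1] < 0.05
    # Honest: test the chosen hypothesis on newly collected (noise) data.
    sig_fresh += stats.pearsonr(rng.normal(size=n_obs), rng.normal(size=n_obs))[1] < 0.05

print(f"'Significant' on the selecting data: {sig_same / n_sims:.2f}")
print(f"Significant on fresh data:           {sig_fresh / n_sims:.2f}")
```

Undergrads tend to find the gap between the two rates eye-opening, since every variable in the simulation is, by construction, pure noise.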
Great post. I have students in my grad stats class read Hurlbert AND Oksanen’s “Logic of experiments in ecology: is pseudoreplication a pseudoissue?”. The students love seeing a debate like this carried out in the literature, and they walk away with, I think, a healthy perspective on pseudoreplication.
Every year I survey the students, and the Hurlbert paper is one of their favorites. I don’t think it is because they’ve never heard of pseudoreplication, nor because they think pseudoreplication is the single most pressing issue in ecology. I think it is because experimental design gets short shrift in modern statistics teaching, and Hurlbert’s paper does a great job of teaching it for reasons already mentioned (evocative phrasing, direct critique of real experiments).
Much experimental design is picked up by osmosis, from criticism of papers, advisers catching design errors, etc. But I think it is important to devote some formal education to it (in my experience, most students have never encountered the idea of a BIB (balanced incomplete block) design, a split-plot design, or a BACI (before/after control/impact) design in all of their “osmotic” learning about experimental design). Especially now, as mixed models grow in use and you can set up statistics that exactly match your complex experimental design, this is crucial (see the sketch after this comment for one example).
And I note again I switched context to graduate stats classes. I don’t think a BIB is important for undergraduates.
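For concreteness, here is a minimal sketch (Python with statsmodels, on hypothetical data I made up) of what “statistics that exactly match the design” can look like for a split-plot experiment: the mixed model gets a random intercept for each whole plot, so the whole-plot factor is tested against plot-to-plot variation rather than against subplot noise.

```python
# Hypothetical split-plot design: irrigation applied to whole plots,
# fertilizer to subplots within each plot. A mixed model with a random
# intercept per plot matches that structure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
plots = np.repeat(np.arange(8), 4)             # 8 whole plots, 4 subplots each
irrigation = np.repeat(["low", "high"], 16)    # whole-plot factor
fertilizer = np.tile(["a", "b", "c", "d"], 8)  # subplot factor

plot_effect = rng.normal(0, 1.0, size=8)[plots]  # noise shared within a plot
biomass = (0.5 * (irrigation == "high")          # made-up treatment effects
           + 0.3 * (fertilizer == "b")
           + plot_effect
           + rng.normal(0, 0.5, size=32))

df = pd.DataFrame({"biomass": biomass, "irrigation": irrigation,
                   "fertilizer": fertilizer, "plot": plots})

# Random intercept per whole plot: irrigation is judged against plot-to-plot
# variation, i.e., against the correct experimental unit.
model = smf.mixedlm("biomass ~ irrigation * fertilizer", df, groups=df["plot"])
print(model.fit().summary())
```

Fitting the same data with ordinary regression (no random effect) quietly treats every subplot as an independent replicate of the irrigation treatment, which is the pseudoreplication error again, just hidden inside the model.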
The posts over the last few days have been chock-full of great tips for teaching – thanks to all at DE!
Another fun possibility for illustrating problems with stats/Methods, based on mind-reading salmon, can be found here.
This could be because it happened before most of them were born. Just sayin’ 😉
Yep. 🙂
Pingback: Class activities for upper-level courses: blog posts, debates, critiques of media coverage, and more! | Dynamic Ecology
I use MythBusters to teach experimental design/scientific method too! The ‘Exploding lava lamps’ episode has a series of experiments that each stem from the results of the previous one… plus EXPLOSIONS!
I think the take-home message is that MythBusters is really bad at designing experiments, and this is a useful thing for teaching experimental design.
I have been struggling to put together a few meaningful assignments for my 3rd-year ecology students. Since the course is for non-biology majors (most are in health science) and most won’t become scientists, I want them to take away skills for being good “consumers of science”. Today I finally had a breakthrough when I thought: why not let them read my very favourite paper, the one I have always said was totally foundational! Hurlbert 1984, of course! Glad to hear others feel the same way, and I look forward to reading all the above suggestions.
Pingback: Friday links: the research conveyor belt, in (modest) praise of impact factor, and more | Dynamic Ecology
Pingback: Two stage peer review of manuscripts: methods review prior to data collection, full review after | Dynamic Ecology
Pingback: Friday links: community assembly vs. Go, Hurlbert vs. neuroscientists, and more | Dynamic Ecology
Pingback: Is requiring replication statistical machismo? | Dynamic Ecology
Pingback: The mid-grad school doldrums | Dynamic Ecology
Pingback: How many terms should you have in your model before it becomes statistical machismo? | Dynamic Ecology
Many of the discussions and queries regarding pseudoreplication in the current literature and on the internet refer only to my initial 1984 paper and seem unaware of the many later clarifying papers by my colleagues and me that focus partly or completely on the topic. These are listed below. PDFs of most of these can be accessed at my university website, at http://www.bio.sdsu.edu/pub/stuart/stuart.html
Reading these may be helpful to researchers. It is regrettable that confusing or simply fallacious re-definitions of the “sin” are so prevalent in articles, books, and on the internet. Be careful who you accept as your “statistical gurus” and of all that you see on the glossy pages of “reputable” journals!
Hurlbert, S.H. 1990. Pastor binocularis: Now we have no excuse [review of Design of Experiments by R. Mead]. Ecology 71:1222-1228.
Hurlbert, S.H. and M.D. White. 1993. Experiments with invertebrate zooplanktivores: Quality of statistical analyses. Bulletin of Marine Science 53:128-153.
Hurlbert, S.H. 1993. Dragging statistical malpractice into the sunshine [Citation Classic: Pseudoreplication and the design of ecological field experiments]. Current Contents 1993:18.
Lombardi, C.M. and S.H. Hurlbert. 1996. Sunfish cognition and pseudoreplication. Animal Behaviour 52:419-422.
Hurlbert, S.H. and W.G. Meikle. 2003. Pseudoreplication, fungi, and locusts. Journal of Economic Entomology 96:533-535.
Hurlbert, S.H. 2003. On misinterpretations of pseudoreplication and related matters: a reply to Oksanen. Oikos 104:591-597.
Hurlbert, S.H. and C.M. Lombardi. 2004. Research methodology: experimental design, sampling design, statistical analysis. In M.M. Bekoff (ed.), Encyclopedia of Animal Behavior, 2:755-762. Greenwood Press, London.
Kozlov, M. and S.H. Hurlbert. 2006. Pseudoreplication, chatter, and the international nature of science. Journal of Fundamental Biology 67(22):128-135. [In Russian; English translation available as PDF.]
Hurlbert, S.H. 2009. The ancient black art and transdisciplinary extent of pseudoreplication. Journal of Comparative Psychology 123:434-443.
Hurlbert, S.H. 2010. Pseudoreplication capstone: Correction of 12 errors in Koehnle & Schank (2009). Department of Biology, San Diego State University, San Diego, California. 5 pp.
Hurlbert, S.H. 2013. Pseudofactorialism, response structures and collective responsibility. Austral Ecology 38:646-663 + suppl. inform.
Hurlbert, S.H. 2013. Affirmation of the classical terminology for experimental design via a critique of Casella’s Statistical Design. Agronomy Journal 105:412-418 + suppl. inform.
Hurlbert, S.H. 2013. [Review of Biometry, 4th edn, by R.R. Sokal & F.J. Rohlf]. Limnology and Oceanography Bulletin 22(2):62-65.
Pingback: What ecology labs do you remember from when you were a student? | Dynamic Ecology
Pingback: If your field experiment has few replicates (and it probably does), intersperse your treatments rather than randomizing them | Dynamic Ecology
Pingback: Revisiting Hurlbert 1984 – Reflections on Papers Past