The bandwagon effect is when people believe or do something just because lots of other people believe or do it, independent of other reasons for believing or doing it (such as empirical evidence or logical argument). Like every science (e.g., astronomy, information theory, quantum physics), ecology has bandwagons. Probably every “hot” topic in ecology, or any science, has a bandwagon-y element to it, because some of the people who work on that topic choose to work on it precisely because it’s “hot”. Indeed, I think it’s difficult or impossible for a topic to become really “hot” unless it’s also a bandwagon, even if there are very good independent reasons for pursuing that topic.
Note that choosing to work on a “hot” topic because it’s “hot” need not indicate a researcher who is a “copycat”, or who lacks ideas of their own, or is just pursuing whatever idea they think will get funded, or anything like that. Rather, it’s a natural outcome of how many graduate students (and some others) choose their research projects. When choosing a research project, aren’t you supposed to read widely, see what’s going on in your field and related fields, and identify some important general question, approach, or idea that you can address/apply/modify/build on in your own system? I don’t think that’s a bad thing–it’s a good thing, mostly–but for better or worse, one effect of that way of choosing a research project is that you’re rather likely to jump on a bandwagon.
A scientific bandwagon is a positive feedback loop, a sort of Allee effect or runaway process. How does that process get started? What determines whether or not interest in a topic grows to a tipping point, at which the bandwagon effect takes off? Maybe (hopefully!) interesting, worthwhile, important topics are more likely to reach that point on their merits, so that bandwagons are also likely to be those lines of research with the greatest intrinsic merit. I actually do think that’s part of the story, which means that calling something a “bandwagon” can be a compliment.
But intrinsic merit surely isn’t a complete explanation for what gets bandwagons started.* For instance, in my totally unresearched and off-the-cuff opinion**, I think bandwagons in ecology tend to be associated with new research approaches that are, or appear to be, very easy to apply very widely, and which are initially presented by prominent people in a prominent venue. The approach becomes a sort of “recipe” which lots of people try to follow, because who wouldn’t try out an easy-to-follow recipe for a delicious (scientific) pie, especially one first presented by a great chef in the scientific equivalent of Bon Appetit?*** Ecology is hard, so we’re always on the lookout for recipes or shortcuts that promise to make it easier. One example is biodiversity-ecosystem function research, especially that very prominent part of it concerned with the total biomass or productivity of random combinations of different numbers of species. In their most basic form, so-called “random draws” experiments are easy to do, and they were initially advocated by David Tilman in 1996 in Nature. Another example is probably phylogenetic community ecology, specifically the questions, and the approach to answering them, laid out by Webb et al. (2002).**** A third example is probably the use of species-abundance distributions to try to test neutral theory.
At some point, of course, bandwagons stop and their riders abandon them. No runaway process can continue forever (although biodiversity-ecosystem function research is giving it a good shot!). Once the low-hanging fruit is picked and the novelty wears off, there’s less good reason to continue riding the bandwagon, and it becomes harder to publish slight variations on the original recipe. At that point, some ecologists (perhaps those who got on the bandwagon for good reasons, rather than because everybody else was getting on) get off.
But until that happens, bandwagons are hard to stop. Off the top of my head, I can’t think of any big bandwagon in ecology that was stopped “prematurely”, for instance because outside criticism convinced those riding the bandwagon to get off before they would otherwise have done so. Maybe the null model wars, which (temporarily) stopped the use of null models and other observational approaches as a means of demonstrating interspecific competition? I’ll be curious to see if recent attacks on phylogenetic community ecology from Losos and Mayfield & Levine have any obvious effect on the trajectory of this bandwagon.
Probably the best reason to think about former and current bandwagons is to gain insight about future bandwagons. So what do you think will be the next big bandwagons in ecology? MaxEnt seems to me to be one candidate, but it came in for a lot of strong (but very constructive) criticism very quickly, which may prevent it from being treated as a “recipe” which lots of people try to follow (see, e.g., the recent special feature in Oikos). Then again, random-draw experiments in biodiversity and ecosystem function research famously became a bandwagon despite, or perhaps even because of, stringent criticism, some of it published in Oikos (Aarssen 1997). Addressing the criticism became an additional motivation for jumping on the bandwagon.
Thinking about former and current bandwagons also helps us think about bandwagons that might have been. Are there lines of research that you’re surprised haven’t become bandwagons? Even if they deserved to become bandwagons?
Really looking forward to hearing comments on this one.
*fn 1: Because if it was, everything I’ve ever done would’ve started a bandwagon! (just kidding)
**fn 2: Sorry, but if you want research and careful thought you’ll have to read my papers. This is a blog.
***fn 3: So…much…pie…metaphor!
****fn 4: The easy availability of online genetic data and software tools for building phylogenies really helped get the phylogenetic community ecology bandwagon started. Which may be a good thing, a bad thing, or somewhere in between, depending on your views of the intrinsic merits of this bandwagon. New data-sharing and software tools, designed to make it easy for anyone to apply a new research approach in any context, aren’t necessarily an unmitigated good. Taxonomists are fond of joking that “Nothing is so dangerous as an ecologist with a dichotomous key.” But an ecologist with an R package can be pretty dangerous too.
I think we’re already into a bandwagon involving the assessment of climate change effects on ecosystems. Specifically, the use of empirical data sets whose general and specific characteristics the authors haven’t fully familiarized themselves with, to evaluate climate change effects in analyses that are aimed at the wrong spatio-temporal scale, are overly simplistic, are not well tied to first-principles expectations based on biophysical fundamentals, or some combination thereof. I’m seeing articles with these types of problems published in the highest-level journals, and I think it’s likely to only get worse as climate change increasingly occupies the spotlight as the #1 global environmental issue in the eyes of the public and of policy makers. Some people have a bad habit of testing poorly developed hypotheses/theories with data they don’t really understand and that weren’t collected for the purpose.
I agree with you that ease of application of methods (and hence, of publishing a paper) drives a lot of this. And where the money is of course.
I entirely agree. Perhaps a slightly different sort of bandwagon than the examples I gave–I don’t know that there’s any single “recipe” that bandwagon-y climate change studies all follow–but absolutely a big bandwagon nonetheless. And yes, driven very much by money.
A recipe in the sense of: “wow, just heard about this [insert favorite positive exclamations here] data set–bet I could test something about climate change with that and get a paper out of it pretty easily”.
Yesterday I emailed the editors at Science to see if they would allow some leeway on their stated 6-month deadline to submit a Technical Comment on a published paper. They said they really couldn’t in this case–they’d already received several such comments and couldn’t take any more! It was a climate change effects paper using the very same datasets I used in my dissertation (one of which is not very well known, and the intricacies of neither are well known). These people had no idea about the limitations of those data–and those limitations profoundly affect their overall conclusions. But they got a paper published in Science. We need to call this stuff out.
I should add that I am in no way against climate change effects research. To the contrary, I’m very interested in climate change, am involved with a blog run by climate scientists that tries to educate the public on the issues (RealClimate), and I think such research is extremely important. But it has to be done right and it has to be in balance with all the other important ecological issues.
Again, I very much agree.
It’s a rather short step from “Wow, this new method/dataset/R package/etc. could really make my work easier/better” to “Wow, this new method/dataset/R package/etc. could be an easy way to get a quick paper.” But I wonder if that short step is a key step in going from non-bandwagon-y work to bandwagon-y work.
I don’t think there’s any question about it.
Good stuff. Two thoughts.
I think testing neutral theory with SADs IS actually a bandwagon that got stopped. A few papers were necessary because Hubbell claimed it as a central success in his book. But almost from the first, papers pointed out that this test didn’t have much discriminatory power and argued against this approach. Ten or fifteen did squeak through, but then it dried up – almost nobody tries to do that anymore, despite the fact that it is extremely easy to collect and analyze the data. I agree, though, that stopping is way less common than running wildly out of control.
I would suggest hierarchical Bayesian modelling is another bandwagon. A key signpost of a bandwagon is papers that imply their science is superior because of their choice of tool, not because of their question or result. I’ve read many papers (not the original progenitors) that seem to imply their research is superior because they used HBM. This is the opposite of easy, though. I think maybe bandwagons are those that depend heavily on technical proficiency (phylogenetics, HBM, a particular experimental setup), because the work looks sophisticated and is exclusive to an inner, trained circle. But in fact, it is much easier to learn a new skill than to do original creative science. Thus picking up a “hot” tool is a good way of faking it.
Thanks Brian. Now that you mention it, I think you’re right that testing neutral theory with SADs is a bandwagon that got stopped by critics. Whether it should ever have gotten started in the first place is another question (you’d think the history of evolutionary arguments about neutral theory would’ve taught ecologists something about what kinds of data can or cannot be used to convincingly reject neutrality…)
You could well be right about hierarchical Bayesian modelling being a bandwagon. Though I don’t think all bandwagons in ecology depend heavily on technical proficiency, at least not difficult-to-acquire technical proficiency. It doesn’t take much technical proficiency to chuck together random combinations of species and measure their total biomass. And modern software makes even sophisticated statistical analyses increasingly easy to perform, at least at the level of “I can feed numbers into R and get it to spit numbers back out” (which is often all that’s necessary for a bandwagon-jumper). As you say, it’s increasingly easy to acquire technical proficiency, or at least the appearance of it, which I’m sure has the effect of feeding bandwagons based on a “hot” technical tool.
A number of our regular commenters are very technically-competent programmers; I’m looking forward to reading what they have to say about HBM and other tool-driven bandwagons.
In my very first post, I remember talking about how modern science is increasingly not data (or technique, or computation) limited–it’s ideas limited. Hopefully the blog, and the journal, function as a vehicle to help folks recognize and relieve that ideas limitation.
I think a lot of the traction that a bandwagon has comes from major research funding agencies. If the new policy is to fund this or that research, then a lot of researchers will try to make their research bear on the new funding policy.