Recently I polled y’all on which of the many purported problems with the conduct of ecological research are actually problems. For each of 24 purported problems in ecological research, respondents were asked if it was a serious problem, moderate problem, no/minor problem, or opposite of a problem.
Here are the results! They’re very interesting! You should totally read on!
- 115 respondents (thanks to everyone who responded!). As always with our polls, that’s not a random sample of any well-defined population. It’s not even a random sample of our readers. But it’s a large and diverse enough sample of ecologists to be worth talking about.
- 17% grad students, 25% postdocs, 43% faculty, 13% non-academic professional ecologists, 2% other. Grad students are rarer in this poll than in most of our other polls. Presumably because many grad students haven’t yet formed opinions on field-wide problems in ecological research.
- 37% fundamental researchers, 13% applied researchers, 50% both.
- The vast majority of respondents use multiple approaches in their own research. 77% collect observational field data, 64% compile/synthesize data collected by others, 52% do sophisticated statistics, 51% do computational modeling, 51% do field experiments, 39% do mathematical modeling/theory (aside: which surely means theorists are overrepresented in this poll compared to ecology as a whole, right?), 37% do lab studies/microcosms/mesocosms.
Results and commentary
Here’s the first result: Most ecologists think ecological research has at least a couple of serious problems. The median respondent identified 4 problems from my list of 24 as serious problems (mean 3.9, max 17). Only 19/115 respondents didn’t identify any of the problems on my list as serious problems.
To enable further analysis and graphical display, I converted the responses to numbers: 2 for “serious problem”, 1 for “moderate problem”, 0 for “no/minor problem”, -1 for “opposite of a problem”. Yes, that conversion is arbitrary, but it’s fine for purposes of a blog post. The results don’t change all that much if you analyze the data some other way.
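For concreteness, here's a minimal sketch of that coding in Python. The response labels and scores are the ones from the poll; the example tallies are made up for illustration:

```python
# The (admittedly arbitrary) numeric coding used in the post.
scores = {
    "serious problem": 2,
    "moderate problem": 1,
    "no/minor problem": 0,
    "opposite of a problem": -1,
}

def mean_score(responses):
    """Mean numeric score across respondents for one purported problem."""
    coded = [scores[r] for r in responses]
    return sum(coded) / len(coded)

# Hypothetical tallies for one purported problem (115 respondents):
example = (["serious problem"] * 25 + ["moderate problem"] * 50
           + ["no/minor problem"] * 35 + ["opposite of a problem"] * 5)
print(round(mean_score(example), 2))  # (25*2 + 50*1 + 35*0 - 5) / 115 -> 0.83
```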
Here’s a plot of the mean and variance of responses for each purported problem. Each point gives results for one purported problem; a bunch of the extreme ones are labeled:
This plot is super-interesting!
- Even the most serious problems aren’t all that serious in the eyes of the respondents. No purported problem on the list was considered to be a serious problem by more than 41% of respondents. There was only one purported problem for which “serious problem” was the modal response. And no problem had a mean score higher than 1.2, which is just a bit higher than “moderate problem” on my admittedly-arbitrary numerical scale.
- There’s a lot of disagreement about the seriousness of most purported problems in ecological research. For all but three purported problems on the list, every possible level of seriousness was chosen by at least one respondent, and for the other three every level except “opposite of a problem” was chosen by some respondents. Even for the purported problems on which there’s the greatest level of agreement as to their seriousness, there’s still a fair bit of disagreement in an absolute sense. For instance, respondents mostly agreed that underpowered studies are a moderate problem in ecological research: that was the modal answer, by far. But even there, 21% of respondents think underpowered studies are a serious problem, vs. 23% who think they’re no/minor problem. Some of this disagreement might reflect heterogeneity across subfields of ecology. Respondents were asked to base their responses on the areas of ecological research with which they were sufficiently familiar to have an opinion. And some of it reflects variation among ecologists using different approaches; see below for more on that.
- No matter what it is, at least 5% of ecologists think it’s a serious problem. Every purported problem on the list was considered a serious problem by at least 5% of respondents. Now I’m kicking myself for not including some non-problems on the list as controls. Maybe 5% of ecologists think cute puppies are a serious problem. 🙂
- According to respondents, the most serious problems in ecological research are HARKing/p-hacking/cherry-picking data/garden of forking paths and vague/unclear/non-operational terms and concepts. To which, yeah, I think I buy that. At least, I buy it as much as I’d buy any other picks, and more than I’d buy some other picks. I do wonder a bit if this poll oversampled ecologists who worry about HARKing/p-hacking/etc. Our readership overlaps substantially with readership of Andrew Gelman’s blog, and he’s always harping about those problems. I disagree with the crowd regarding the next two most-serious problems, though. I personally wouldn’t say that overvaluing novelty is the third most-serious problem on the list, though I’m also very unsure how serious a problem it is. And as I discussed in an old post, I don’t think undervalued/insufficient natural history is really one problem. I think it’s better thought of as a bunch of separate purported problems, most of which I personally don’t think are problems at all. YMMV, of course. That’s what makes this whole post topic interesting to talk about: it’s not just a matter of purely subjective personal opinion, but yet there’s scope for reasonable disagreement.
- According to respondents, the least-serious purported problem in ecological research is “overvaluing meta-analysis compared to collecting one’s own data”. This was the purported problem for which votes for “opposite of a problem” outnumbered votes for “serious problem” by the widest margin, and it wasn’t close. It’s also the purported problem that got by far the most votes for “opposite of a problem”: 29%! I voted for “no/minor problem” myself. I can’t see how anyone could think over-valuing meta-analysis is a serious problem. But I also find it hard to see how meta-analysis could be considered undervalued in a post-NCEAS world where data-sharing is mandatory and meta-analyses get widely cited. Hot take: the main effect of Lindenmayer & Likens’ high-profile but groundless complaint about “data parasites” was not to convince anyone–hardly anyone agrees with it–but rather to make meta-analysts feel threatened and undervalued. Even though they’re not undervalued, except by a minority so small it’s not worth worrying about. The moral of this little just-so story is not that meta-analysts are over-sensitive; it’s that just because somebody famous says something in a prominent venue doesn’t mean many others–or even a few others!–agree. I went through a similar experience myself back in the mid-90s, worrying that lots of ecologists didn’t like microcosm research just because a few prominent people didn’t like it. We’ve talked before about other ideas in ecology and academia that are controversial–or rather, that appear controversial–just because of a few people voicing minority opinions, whether in prominent venues or on social media. The point here is not that the minority opinion on topic X is necessarily wrong–or right! The point is just that you can’t tell what the majority opinion is by noting what a few vocal people think.
- According to respondents, the most controversial purported problem in ecological research is overvaluing theory compared to data. This was one of only three problems on the list for which the average opinion was intermediate (in between “no/minor problem” and “moderate problem”), the variance of opinion was high, and there was appreciable support for both “serious problem” (19%) and “opposite of a problem” (13%). There was also considerable disagreement about the purported problems “technology-driven, hammer-in-search-of-a-nail research” and “overvaluing generality relative to case studies”. But for both of those the mean opinion was fairly close to “no/minor problem” so I don’t consider those purported problems quite as controversial. (aside: personally, I don’t think generality and case studies are alternatives) There was also a high variance of opinion about over-valuing meta-analysis. But that was variance between the large majority who consider it no problem or the opposite of a problem, and a small minority who consider it a serious problem.
- Our readers don’t necessarily agree with us. So, back to that point about prominent people voicing minority opinions… 🙂 You might think that readers of this blog tend to agree with Meghan, Brian, and me. And at some level I’m sure that’s true. But it’s not as true as you might think, as this poll illustrates. Personally, I think mistakenly inferring causation from correlation is one of the most serious problems in ecology, and I’ve said so in multiple posts (example)–but respondents mostly disagree with me. As noted above, I don’t think natural history is undervalued in ecological research, but respondents mostly disagree with me. I’m (in)famous for banging on about zombie ideas (though I actually think they’re only a moderate problem in ecology as a whole). But respondents don’t think they’re a particularly serious problem. I was surprised how many respondents thought that inefficient theory was a moderate or serious problem, because I don’t think it is. Nor do all that many respondents agree with Brian and me that weak “pseudo-hypotheses” are a serious problem in ecological research. Respondents do mostly agree with Brian that statistical machismo and lack of good predictions are at least moderate problems in ecological research. And respondents do mostly agree with me that over-valuing lab/microcosm/mesocosm studies over field studies is not a problem. So remember: just because Brian, Meghan, or I write something doesn’t mean most–or even many!–ecologists agree.
- Career stage doesn’t really predict ecologists’ opinions about purported problems in ecological research, with the caveat that sample sizes are small. More senior ecologists are less likely to think “irrelevance of ecological research to conservation/global change” is a serious problem (mean answers of 0.64, 0.4, and 0.02 for grad students, postdocs, and faculty, respectively). But given the number of respondents and the number of purported problems surveyed, you might well expect such a fairly modest association between mean opinion and career stage to occur somewhere in the dataset just by chance.
- There were only a few purported problems on which the views of non-academic professional ecologists might be out of line with those of academic ecologists (again, remember the caveats re: sample size and multiple comparisons here). One was bandwagons. Academic ecologists (grad students+postdocs+faculty) think that bandwagons are a moderately serious problem on average (mean opinion 0.91), whereas non-academic professional ecologists don’t think they’re much of a problem (mean opinion 0.36). At the risk of HARKing :-), I bet that’s a real difference, not a blip. I bet the research foci of government ecologists, NGO ecologists, environmental consultants, etc. are less likely to reflect bandwagon-jumping, explaining why they tend to see bandwagons as less of a problem. Non-academic ecologists also seem to think HARKing/p-hacking/etc. are less serious problems on average than academic ecologists do (mean opinions of 0.8 vs. 1.3, respectively). But that might just be a blip.
There were several purported problems on which views seem to vary substantially between fundamental vs. applied researchers (again, sample size and multiple comparisons caveats apply):
- The clearest-cut case, which confirms my priors and therefore is obviously real :-), concerns whether “overvaluing theory over data” is a problem. On average, applied ecologists think it’s a moderate problem (mean 0.92), fundamental ecologists think it isn’t much of a problem (mean 0.28), and ecologists who do both fundamental and applied work are in the middle (mean 0.45). This result is so expected it’s boring, but it’s reassuring to see because it suggests the poll is picking up some real effects.
- Applied ecologists collectively see research that’s irrelevant to conservation/global change as close to a moderately serious problem (mean 0.79). Other ecologists do not (mean 0.21). Again, no surprise there.
- Applied ecologists collectively see overvaluing generality over case studies to be something approaching a moderate problem (mean 0.63). Fundamental ecologists think it’s a non-problem (mean -0.09), and ecologists who do both are intermediate (mean 0.36). Again, no surprise there.
- Applied ecologists collectively see an overemphasis on novelty as a more serious problem in ecological research (mean 1.46) than do fundamental researchers (mean 1.06) or those who do both (mean 1.13). Yawn.
- But here’s something I didn’t expect: applied ecologists are less concerned about lack of good predictions (mean 0.5) than are fundamental ecologists (mean 1) or ecologists who do both (mean 0.79). Does that surprise you? It surprised me, given that one of the reasons everybody always gives for why ecology needs more/better predictions is “to inform management decisions“.
The research approaches ecologists use also predict their opinions about purported problems in ecological research, in depressingly predictable ways. On average, ecologists tend to think their own approaches are fine and it’s other people’s approaches that are the problem. 😦 For instance, those who don’t do mathematical modeling/theory tend to think that theory is overvalued relative to data, that generality is overvalued relative to case studies, that natural history is insufficiently valued, and that too much ecological research is irrelevant to conservation/global change. Those who do mathematical modeling/theory don’t tend to think those things. Those who don’t use advanced statistical methods tend to think statistical machismo is a fairly serious problem. Those who use advanced statistical methods think it’s no problem or a minor problem. Those who compile/synthesize data collected by others tend not to see overvaluing of meta-analysis and overvaluing of generality as problems. Those who don’t compile/synthesize data collected by others are more likely to see them as problems, though the differences aren’t large. On the other hand, I was surprised that people who do lab/microcosm/mesocosm work, and those who don’t, agree there’s no problem with overvaluing lab/microcosm/mesocosm work in ecological research. And opinion on whether undervaluing of small scale field experiments is a problem isn’t divided by whether or not the respondent does small scale field experiments.
I’m still mulling over what, if anything, these poll results imply for the whole genre of “critiques of ecology”. Because make no mistake, it is a genre. Ecology is somewhat infamous among both ecologists and non-ecologists for being a messy field wracked by disagreements about basic matters, extending even to the very definition of the field itself. But one thing that strikes me about this poll is that there’s no purported problem with ecological research that’s thought to be a serious problem by a majority of respondents, and only a few that are even thought to be moderate problems on average. Rather, almost everybody thinks there are some serious problems with ecological research–but nobody can agree on what those serious problems are. Is that a bad sign for the field? (“We can’t even agree on what the serious problems with ecological research are, never mind how to solve them!”) Or a good sign? (“Nobody can agree on what the serious problems in ecological research are, because it doesn’t have any serious problems. Every ecologist just has their own random bee in their bonnet.”)
I continue to be a bit depressed that ecologists’ opinions on the serious problems with ecological research are correlated so predictably with their own research approaches. Ok, not everybody thinks their own research approaches are undervalued and everyone else’s are overvalued. But enough people do that it bums me out a bit. It’s a recipe for a self-sustaining feedback loop of mutual incomprehension and rivalry between ecologists who use different approaches. I suspect it’s fueled in part by competition for scarce grant funding and space in leading journals. Everybody needs to go read this old post of Brian’s and this old post of Meghan’s, and quit thinking that everybody who doesn’t do the sort of ecology you do is Doing It Wrong and hogging all the money and publication slots.
What do you think? Looking forward to your comments, as always.
I’m quite surprised that “too much H0 testing” isn’t seen as more of a problem. Probably, I was subconsciously overgeneralizing from what I happen to read on Gelman’s blog.
I agree with the crowd that pseudoreplication isn’t *that* much of a problem any more, thanks to widespread adoption of hierarchical mixed models.
I strongly disagree on that. I think pseudoreplication is ubiquitous but usually remains undetected (genetic relatedness of individuals, temporal and spatial autocorrelation), so we get many more false-positive findings (irreproducible findings) than we would if all data points were fully independent. Unfortunately, there is selection in favor of ignorance on the side of authors (benefit of having statistically ‘significant’ findings to report) and, based on my own experiences, very few referees are able to spot even the most obvious problems with pseudoreplication (for examples see doi: 10.1111/brv.12315).
Thanks for the link, will be interested to have a look.
The figure is a really nice way to present things. I wonder what the null model is (if you have people rank things on a scale from 1-5 with 3 being in the middle, then variance has to go down if the mean is close to an extreme).
I mostly agree with the overall view. A few that I disagree with…
Like you I think H0 testing is a bigger problem. It is literally the least informative thing we could do and still be doing science. Testing predictions of models (in a sincere and strong way), testing multiple predictions, testing regression surfaces (linear or non-linear) are all much stronger.
And I think “overvaluing novelty” might be less of a problem. For sure I myself have at times complained that NSF tends to want to fund new things and not maintain existing things but the “existing things” are either infrastructure (NCEAS, databases) or long term data. But leaving those cases aside, shouldn’t we as scientists want to do new stuff?
Combining the last two might leave an impression that ecologists just want to be left alone to do what they want and not evaluated on how important their science is.
I’m not sure “insufficient natural history” is as big a problem as people made out either. Not because I don’t think it’s important, but because I think ecology as a field still has an awful lot of natural history (even if it’s not what gets a paper in a high-impact journal, most ecologists got into the discipline because they love their organisms).
And I’m not convinced HARKing/etc. is that big a problem. Does it happen a lot? Yes. Does it lead to a lot of papers? Yes. But does it lead to a lot of high-profile papers that redirect the field (possibly mistakenly)? Not so sure it does. Papers that get into good journals and cause other people to move into a field tend to be precisely those papers that have signatures suggesting an absence of HARKing (e.g., data coupled with a model, large datasets that consistently give similar results when subsetted or analyzed different ways, multiple consistent tests, etc.)
Just my two cents. It’s very interesting to see what people think.
Very interesting remark that the small fraction of field-moving papers in ecology tend not to engage in HARKing/p-hacking/garden of forking paths/etc. Will have to mull that over. If that’s right, that would make me downgrade my view of the seriousness of HARKing/p-hacking/etc.
Implicit in your comment is a very interesting point of view on what drives the progress of science. You suggest that we should worry primarily about the quality of the very best, most influential work in the field. That if the “right tail” is in good shape, we don’t need to worry too much about the rest of the distribution. I need to mull that over too!
Yes – I think that is implicit.
In social psychology you start with a group of people of broad interest (gay or female, in some better-known recently critiqued examples), find almost any generalization about the group at p<0.05, and it is newsworthy (and undergrad-textbook worthy). And newspapers are not famous for their ability to scrutinize the weight of evidence. Which makes p-hacking a big problem there.
Without that human angle, I think ecology is a bit more robust – I claim it still takes good science to get a high profile result.
The one exception that might prove the rule: papers confirming that humans are obliterating the environment seem not to require strong evidence to get a high profile.
On reflection, I have argued analogously in the comments here in the past regarding the importance of scientific misconduct (plagiarism, faking data, etc.). Papers based on misconduct are not only rare in an absolute sense (even if you allow for an implausibly-high rate of undetected misconduct), they’re disproportionately concentrated in obscure journals almost nobody reads. One can certainly name a few specific, high-profile research topics for which misconduct threw all research on the topic off the rails (e.g., vaccines and autism). But they’re the exceptions. So I think it’s hard to argue that misconduct is a big systemic problem for science as a whole. Especially if you think (plausibly) that the rare high profile papers matter much more than others.
Not sure I agree about the fake data / major academic misconduct comment. Yes, it’s extremely rare, but it can throw off the field, it can be in high-profile journals, and it can damage the public’s view of scientists as credible sources of information. I think the latter is particularly problematic when the topic has political ramifications. The fraudulent microplastics paper in Science is a big example. Climate change deniers and anti-environmentalists used this misconduct to discredit science as an entire discipline looking to advance a liberal political agenda. Although it is uncertain how much of an effect this has (the people swayed by such arguments may have already made up their minds).
I don’t think climate change deniers or anti-environmentalists need misconduct to seize on. If there was no misconduct, they’d just seize on something else, or invent purported misconduct where none actually exists.
In terms of public trust in science and scientists, in US polls people say they trust scientists more than any other group or institution except the US military. I haven’t seen any evidence that there’s a generalized crisis of trust in science or scientists. Much less a crisis generated by misconduct.
“I wonder what the null model is (if you have people rank things on a scale from 1-5 with 3 being in the middle, then variance has to go down if the mean is close to an extreme).”
Yes, an extreme mean has to be associated with a small variance in this sort of data. But none of the points in the graph are anywhere close to bumping up against the bounds of what’s mathematically possible. So I don’t think the overall shape of the cloud of points in the graph reflects the shape of the space of mathematical possibilities.
For instance, I think the maximum possible variance for a purported problem would be if half the respondents said “opposite of a problem” and the other half said “serious problem”. Which, for 115 respondents, would give you a variance of about 2.27 with the numerical response scale used in the post. No purported problem had a variance anywhere near that, even those with a mean response near zero.
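A quick sketch checking that arithmetic (the 57/58 split is just my way of halving an odd number of respondents; 115 doesn't divide evenly):

```python
import statistics

# Maximally polarized responses on the post's numeric scale:
# (nearly) half "opposite of a problem" (-1), half "serious problem" (2).
responses = [-1] * 57 + [2] * 58
var = statistics.variance(responses)  # sample variance, n - 1 denominator
print(round(var, 2))  # prints 2.27
```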
“But one thing that strikes me about this poll is that there’s no purported problem with ecological research that’s thought to be a serious problem by a majority of respondents, ”
This doesn’t seem surprising. People who think ecology has serious problems don’t become ecologists or they leave the field.
Hmm, I’m not so sure about that. For instance, Robert Peters wrote a whole book called “A Critique for Ecology”, and did not follow it up by leaving the field. More broadly, most of the respondents to this poll think there are at least a couple of serious problems in ecological research (one thinks there are at least 17!). And yet they remain ecologists. Perhaps in part because thinking that there are some serious problems in ecological research isn’t mutually exclusive with thinking there are also some great strengths. I don’t think you can infer respondents’ opinions on the overall state of ecological research from their responses to this poll.
“Now I’m kicking myself for not including some non-problems on the list as controls.”
🙂 Yes, and perhaps another category on the seriousness scale that corresponds to “detrimental / fraudulent” just to put a cap on the opposite end of the spectrum, so people can contextualize “serious” and “moderate”.
Interesting way to investigate what’s going on in the science, for sure; it would be interesting to see a poll like this for other disciplines.
“Every ecologist just has their own random bee in their bonnet.”
Put a little birdhouse in your soul….
+1 for the TMBG reference
I’m probably going to get slammed for this, but I’ll put it out there anyway. Something I see as an issue for ecology (but other fields too) is the over-reliance nowadays on poorly QA/QC’ed statistical methods. In other words, too many people using the Wikipedia of statistics, R, without ensuring the code does what it says it does, because the code was just copied from some other site. I’ve seen it in theses, in ms reviews, etc. You can get R code from practically anywhere. Documentation of that code and how it has been checked is usually not something that is well reported. It is something I comment on in reviews, though.
“I’m probably going to get slammed for this”
Oh, Brian for one is with you 100% on that, I believe! So if you get slammed, you won’t get slammed alone. 🙂
Yep – I know R packages in CRAN that might as well be random number generators. Caveat emptor in the R world.
And the number of people who continue to make continuous variables a random effect without realizing the monstrosity that R does to obey that request confirms that a lot of people don’t really know what they’re doing even when they use a reputable package.
Interesting set of results, indeed. (I did not participate in the poll, sorry.) I have a question. One of the poll questions was “Mathematical models of specific systems overvalued compared to general theory”. But in the graph (mean ~0.4, variance ~0.5) that seems to have morphed into “overvaluing theory over models”. Is that a mistake, or a different question?
Good catch, it’s a mistake, I’ll fix the labeling on the graph.
“Rather, almost everybody thinks there are some serious problems with ecological research–but nobody can agree on what those serious problems are.”
So ecologists are just human after all. 🙂
RE: the fact that many people thought their own research areas were undervalued.
I don’t think it’s that surprising, or depressing. Ecology has such a diverse range of fields & approaches, it’s perfectly logical for people to choose to work in a particular area they value and think is important. For example, I choose to work on insects because I think they’re overlooked & undervalued compared to vertebrates, but it doesn’t necessarily mean I don’t think vertebrates are important. But I agree with your point about competition etc. Of course I get annoyed when vertebrates get more attention & funding relative to invertebrates, so that probably influences my answer to a question about over/undervaluing fields.
Also what about how different people define terms? e.g. natural history & meta-analysis are two off the top of my head that I have seen people use to mean slightly different things. Even if people’s personal interpretation of a term is inaccurate, it will probably have influenced their answers.
“RE: the fact that many people thought their own research areas were undervalued.
I don’t think it’s that surprising, or depressing. ”
Oh, it’s not surprising. Indeed, it’s hard to see how it could be any other way. I mean, you do whatever it is you do, so surely you like it!
“Also what about how different people define terms? ”
That could be one source of disagreement, but just offhand I doubt it’s a major source. But if you’re right, that would prove that vague/unclear terms are indeed the most serious problem in ecology. 🙂