Poll: which purported problems with ecological research are actually problems?

We talk a lot about critiques of ecology around here. Problems in the day-to-day practice of ecology that are sufficiently serious to be worth recognizing and addressing. Maybe even so serious that the entire field could be said to be in crisis.*

Or at least, they’re problems that somebody thinks are serious enough to be worth recognizing and addressing.** But maybe that somebody is wrong! Sometimes somebody just has a bee in their bonnet about something that’s not a problem, or not a big problem, or even the opposite of a problem.

Hence this poll! Below is a list of a bunch of purported problems in ecological research. Then there’s a poll inviting you to share your opinion of the seriousness of each problem, in those areas of ecology with which you’re sufficiently familiar to answer. There are four possible answers: serious problem, moderate problem, no/minor problem, and opposite of a problem. For instance, if the purported problem was “not enough people read Dynamic Ecology” and you thought too many people read Dynamic Ecology, you’d choose “opposite of a problem”. If you thought the right number of people read Dynamic Ecology, you’d choose “no/minor problem”. #sillyexample

Problem list (links go to discussions of the purported problems):

  • Failure to make and test good predictions (link, link)
  • Hypothesis-free research, or research based on weak “pseudo-hypotheses” (link, scroll down to #7)
  • Statistical machismo: using over-complicated statistical methods without properly weighing the pros and cons (link) (note: even if you dislike the term “statistical machismo”, vote on the purported problem the term refers to, not the term itself)
  • Zombie ideas: ideas that continue to be widely-believed and taught despite strong reasons to think them false or otherwise seriously flawed (link) (note: even if you dislike the term “zombie ideas”, vote on the purported problem the term refers to, not the term itself)
  • Inefficient theory: theoretical models that have too many free parameters, relative to the number or range of phenomena they explain or predict (link)
  • Mathematical models of specific systems overvalued compared to general theory: overvaluing mathematical models of specific systems, compared to more general theoretical models that apply in an approximate way to many different systems (link)
  • Theory overvalued compared to data (too many possible links for me to pick one)
  • Generality overvalued compared to system- and site-specific case studies (link, link)
  • Meta-analysis overvalued compared to collecting one’s own data (link)
  • Lab/microcosm/mesocosm studies overvalued/misleading compared to field studies (link, link)
  • Undervaluing natural history; ecological research insufficiently grounded in natural history (link, link)
  • Too much research that’s irrelevant to conservation/global change, and/or that falsely claims relevance to conservation/global change (link, link)
  • Lack of replicability; too few attempts to replicate published studies (link)
  • Too much null hypothesis significance testing (link, link)
  • Bandwagons: choosing a research topic or approach based on its popularity rather than its own merits (link)
  • Technology-driven research: “hammer in search of a nail”; research using the latest technology at the cost of being flawed in other ways (link, scroll down to #9)
  • Inferring causation from correlation, or from other inadequate evidence (link)
  • Overemphasis on novelty (link)
  • Vague/unclear/non-operational terms and concepts (link, link, link)
  • Underpowered studies (link, link)
  • Pseudoreplication: treating non-independent observations as if they were independent (link, link)
  • Small scale field experiments undervalued, particularly compared to large scale observational studies (link)
  • HARKing, p-hacking, cherry-picking data, garden of forking paths: questionable research practices that increase the odds the data will appear to reject the null hypothesis, or appear to match the investigator’s preferred hypothesis (link, link) (see the simulation sketch just below this list)
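
As a concrete illustration of that last item, here is a minimal simulation sketch (mine, not from any of the linked posts; the sample sizes, number of candidate outcomes, and selection rule are all illustrative assumptions). It shows how, under a true null, analyzing several outcomes per study and reporting the study as “significant” whenever any one of them clears p < 0.05 pushes the effective false-positive rate well above the nominal 0.05.

```python
# Minimal sketch: "garden of forking paths" / selective reporting under a true null.
# All settings below (sample sizes, number of outcomes, selection rule) are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 10_000   # simulated studies, none with a real treatment effect
n_outcomes = 5       # candidate response variables an analyst could choose to report
n_per_group = 20     # observations per treatment group

false_positives = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_outcomes):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(0.0, 1.0, n_per_group)  # same distribution: the null is true
        p_values.append(stats.ttest_ind(control, treatment).pvalue)
    # Forking paths / cherry-picking: call the study "significant" if ANY outcome has p < 0.05
    if min(p_values) < 0.05:
        false_positives += 1

print(f"Nominal alpha: 0.05; realized false-positive rate: {false_positives / n_studies:.3f}")
# With 5 roughly independent outcomes, about 1 - 0.95**5 ≈ 0.23 of null studies "find" an effect;
# HARKing then makes the reported result look like it was the hypothesis all along.
```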

*This post focuses on problems and perceived problems in ecological research, not in how subgroups of ecologists are treated, or in how ecology is taught, or how ecological research is funded, or etc. Those other things are important, they’re just not the focus of this post. So please do not respond to this post by tweeting or commenting “What about [problem beyond the scope of this post]?!” Find some other, private way to vent your annoyance that this post isn’t about whatever you wish it was about.

**It may surprise some of you to learn that, personally, I don’t think that the entire field of ecology is rife with massive problems; I just think it’s good mental exercise to occasionally consider that possibility. I think there are plenty of success stories in ecology. I think the field as a whole is progressing rather than regressing or spinning its wheels, but I find it hard to say if the field as a whole is progressing fast “enough”, or “as fast as possible”. And I think that rather than focusing on the overall state of the field, or whether that state is improving, it’s more useful to focus in a more granular way on identifying and addressing specific problems. Hence this post.

14 thoughts on “Poll: which purported problems with ecological research are actually problems?”

  1. My biggest concern is actually outright fraudulent field studies. Rather than beginning the study with a neutral, information-seeking intent (“Let’s go out and do the following careful, well-designed data collection to see what’s going on”), the study begins with “Wouldn’t it be good for US (defined as our PI, his prior publications and theories, our political movement or agenda) if we had a paper saying THIS? How can we do the least amount of work and get a paper like that through peer review?”
    So the ‘research’ actually begins with a conclusion, and then becomes potential outright fraud in order to (badly) obtain the data that will allow that conclusion to be published.
    Field studies are particularly susceptible to this, because there is almost never an independent way to record the observations.
    For example: the intent is to show poor spawning of pikeminnows by netting fewer larvae than normal. It’s known, but not well reported, that nets set at 2:00 am catch the most larval fish. Data are intentionally collected only at 8:00 am and noon. Those counts are actually above the same-time counts from a prior year, but because the peak 2:00 am sets are omitted, the reported total larvae per 24 hours is lower than the prior year’s. So the desired conclusion, based on a biased observation, is reported, and an intentionally fraudulent publication is introduced into the public domain, usually with a non-peer-reviewed press-conference release, so as to push the political agenda to a general public who will never read the formal paper.
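
    (A minimal numerical sketch of the sampling-time bias described above; the catch numbers are made-up illustrative values, not data from any real pikeminnow survey.)

```python
# Hypothetical larval catch per net-set, by time of day (illustrative numbers only).
prior_year = {"02:00": 120, "08:00": 30, "12:00": 20}  # full diel sampling, including the 2:00 am peak
this_year = {"08:00": 35, "12:00": 25}                 # the 2:00 am sets are simply not made this year

print("Prior-year 24 h total:", sum(prior_year.values()))    # 170
print("This-year reported total:", sum(this_year.values()))  # 60

# Both the 8:00 am and noon counts rose year over year (30 -> 35, 20 -> 25), yet the
# reported "24 h" total drops by ~65%, supporting the predetermined conclusion of poor spawning.
```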

    • Sorry, you’re going to have to show me some data if you think that intentional fraud is widespread in ecological field studies.

      Retraction Watch’s database shows that only a *very* tiny fraction of all peer-reviewed papers are retracted for fraud. Like, so tiny that even if the true fraction of fraudulent papers was underestimated by several orders of magnitude due to undetected fraud, it’d *still* be tiny.

      Retraction Watch’s database also shows that a large fraction of all fraudulent papers are published by a small number of serial offenders. Most scientists are not fraudsters.

      Now, if your claim is that fraudulent studies are rare, but are a serious problem when they do occur, I think that’s a case one could make. There certainly are important research topics on which that’s true–Andrew Wakefield’s fake study linking vaccines to autism is the first one that comes to my mind. Within evolution, you can make a case that Anders Pape Moller deliberately skewed his results on fluctuating asymmetry, creating a bandwagon of interest in that topic that might not have existed otherwise and that resulted in a lot of wasted research effort. In social psychology, there was a quite prominent researcher a few years back who got exposed as a serial fraudster, and who seems to have been influential enough that his fraudulent results might have skewed the direction of his entire subfield. And there’s that prominent Cornell nutrition researcher who made his entire career out of ordering his trainees to p-hack in order to get headline-making results. But I think cases like that are pretty unusual when you look across science as a whole. And I can’t think of any topic within ecology where the entire direction of research has been skewed by deliberate fraud. If a bunch of ecologists working on global change or whatever are fraudsters who’ve all managed to go undetected for decades, they’re by far the most skilled fraudsters in human history! Which seems…implausible.

      What I think is more common in applied ecology is emphasizing certain aspects of one’s results and playing down other aspects, so that readers who only read your abstract or your conclusions come away with a rather different impression than they’d have gotten from reading the entire paper carefully. I’m thinking for instance of recent work we’ve linked to (sorry, can’t find the link just now) showing that abstracts of papers on the effects of habitat fragmentation on biodiversity systematically tend to report only those results showing that fragmentation is bad for biodiversity. Even though when you go through those papers and tally up all the results, fragmentation is good for biodiversity at least as often as it’s bad.

    • I’m concerned about how to delineate this label of ‘fraudulence’ from ‘system knowledge’. Many patterns/processes are remarkably strong only in specific systems (so these features are common knowledge to people working in a system and underappreciated/ignored by others). Recognizing such features typically requires deep familiarity with a system, prompting focused field studies to quantify these strong patterns/processes. I think such studies (1) are remarkably valuable for tying theory to field data in ecology and (2) often can only be done by dedicated scientists who think very carefully about the system. All the better if such work results from people specializing on a concept/idea collaborating with empiricists in specific systems.

    • The early results are already really interesting! But the sample size is small as yet, and it’ll need to be much bigger to make it worth breaking down the results by discipline/method used. So fill out the survey, everybody! 🙂

      • Sure. I am often afraid that in an environment where there is no mechanism, other than popular opinion, for determining which ideas are right, wrong, or useful (no strong predictions expected, no replication, arguments that ‘basic research’ means there is no need for it to actually address phenomena in the natural world, etc.), there is a large personal upside and little downside to publishing papers on whatever is ‘hot’, whether or not one thinks it might be true, useful, or important. It seems to me that this might help explain why so many smart people torture language and data so badly, and why there is so little actual progress in our field*. I guess I am thinking of it as a mechanism for Bandwagons. To be clear, I don’t think one has to willingly publish what one knows is wrong to act in a careerist way; one just has to not be overly concerned with whether it is actually right.**

        *I have no data showing that there has been little actual progress in our field; it just *feels* true (let’s call it a stylized fact).

        **I have no data on how prevalent careerism is, but it seems to explain some patterns and it is probably the best practical strategy for academic success in an environment with no mechanism for the quantitative assessment of ideas. I hope it’s rare.

      • Ok, I’m with you now. Yes, what you describe is a mechanism that would generate bandwagons.

        I do think it’s fairly common for researchers to be attracted to “hot” topics. But I don’t think it’s because researchers feel like they need to publish on hot topics to further their careers, and don’t care much whether what they’re publishing is true/interesting/important. Rather, I think researchers who jump in and start publishing on hot topics genuinely think that they have something true/interesting/important to say about those topics. People (very much including me) often think they’ve spotted “low-hanging fruit”: an opportunity for a quick paper. That is, a quick paper that they also see as a worthwhile paper, not a quick paper that they don’t give a damn about one way or the other but feel like they have to crank out to get that next grant, tenure-track job, or tenure. I think the other main reason hot topics tend to attract researchers is that new grad students who read the recent literature to learn what the big questions in the field are, and to develop their own project ideas, are naturally going to develop projects related to current hot topics. It’s the rare new grad student who reads so broadly, and thinks so independently and deeply, as to come up with a good research project on a topic few others are working on.

        There’s probably a blog post to be written on what constitutes “careerism” and who engages in it or is seen by others to engage in it. My own sense is that cynical, naked careerism is rare-to-nonexistent in ecology. But there are incentives to engage in some practices that are “careerist-adjacent” or that might look careerist to some people. I’m thinking for instance of our old post on what constitutes “self-promotion” in science. One thing that came out in the comments on that post was widespread disagreement not only on what constitutes “self-promotion”, but on whether “self-promotion” is a bad thing or a good thing. I wonder if there’s similar disagreement about, say, getting together with your labmates/collaborators to write a perspectives/opinion/synthesis-type paper that argues for/promotes your own research approach, favorite question, or general view of the field. Those sorts of papers comprise a decent fraction of TREE papers. How many of those papers are seen by their authors as important field-shaping contributions, but are seen by many others as careerist self-promotion? I have no idea, but it would be interesting to find out.

  2. Risking ire, since you didn’t want us chiming in with new bees in bonnets: failing to publish complete, properly labeled supporting data is a widespread problem. I do a lot of secondary analysis and have become unpleasantly adept at digitizing graphs. Dryad, figshare, and the like are low-cost or free, and easy to use. Yet a lot of ecologists still publish like it’s 1990, as if the graph or summary table were the data.

    • Huh. All the journals I publish in oblige data sharing now. I obey the rules, so I tend to assume most everybody else does too. Is that not the case? Or do the journals you read not have data sharing rules?

      • I don’t think that’s a solid assumption. Granted, my reading is more often in the applied rather than core ecology journals, especially on the ecological effects of pollution. There, authors who share their data well are in the minority. A couple of journal examples: ‘Freshwater Science’ doesn’t even mention a data sharing policy, and ‘Freshwater Biology’ only requires authors to state whether or not the data are available to share. As of 2015, Dominique Roche and others* found that over half of the articles they checked did not comply with their journal’s data policy.

        Maybe this would be a good topic for a poll and post: do authors embrace the concepts of open data and willingly make the effort to share it, or do they only bother if the funder or journal insists?

        *Roche DG, Kruuk LEB, Lanfear R, Binning SA (2015) Public Data Archiving in Ecology and Evolution: How Well Are We Doing? PLoS Biology 13(11): e1002295. https://doi.org/10.1371/journal.pbio.1002295

        And apologies for the two-week lapse in conversation 🙂

  3. RE: public access to data. This was a serious problem that is quickly going away, but it is related to one of the issues in the poll (vague/unclear terms). Yes, we need data posted to public servers for replication/ethical reasons; it should be SOP. However, I think in the long run this practice will be much more valuable for future analyses. Relatedly, the widespread use of unclear/undefined terms makes it difficult to compare results in the literature. I have been writing an old-school monograph on life-history characters and a book chapter section (lots of literature review/synthesis), and too many papers (about a third is my guess) are rendered at least partly unusable by undefined terms and poor writing. This has also been a recurrent theme when reviewing papers. And now that I am aware of this issue, I am seeing it quite often in my general reading.

    There are a few related factors: since I am reading mostly single-species conservation-management/life-history papers, many of the studies I have reviewed come from graduate work and the writers are inexperienced. This issue seems to occur more commonly, but certainly not exclusively, in smaller journals (where life-history studies are mostly relegated today). It also appears to me that this issue was improving until about 10-15 years ago (journals that require terms to be defined are awesome) but might be getting worse again (a long-time editor may shed light on this; these are only my impressions). I do worry that today’s students, who would rather watch a video than read a text, will develop into poor writers.

    My point is that complete data sets will help to alleviate this. We need annotated raw data from the field/lab too, not only reduced, finalized data sets.

  4. Pingback: Poll results: what are the biggest problems with the conduct of ecological research? | Dynamic Ecology
