Is scientific misconduct especially rare in ecology and evolution?

Recent news about unusually serious scientific misconduct by psychologist Diederik Stapel (he made up the data in more than 30 papers; see coverage here and here) got me thinking about scientific misconduct, or the apparent lack thereof, in ecology and evolutionary biology.

Scientific misconduct is rare in general, but it seems to be especially rare in ecology and evolutionary biology. For instance, my quick perusal of Retraction Watch doesn’t reveal any retractions of ecology or evolutionary biology papers due to scientific misconduct. And thinking further back in time (Retraction Watch hasn’t been operating for very long), the only cases that I can recall are those of Anders Pape Møller (a misconduct case which included retraction of an Oikos paper; see coverage here, here, here, here, and here), and the recent case of Stephen Jay Gould apparently fudging data in order to be able to accuse someone else of fudging data (Lewis et al. 2011). It’s my impression, which I freely admit is not backed up by quantitative analysis, that misconduct is more frequently reported in other fields, and that the differences are larger than can be accounted for by differences in the number of people working in those fields.

Assuming purely for the sake of argument that my impression is correct, why might that be? I mean, it’s not as if ecologists and evolutionary biologists face no pressure and so have no incentive to engage in misconduct; academics in all fields are under a lot of pressure to publish, get grants, etc. And it’s not as if ecologists and evolutionary biologists are more likely to get caught than researchers in other fields, thereby deterring them from engaging in misconduct. Indeed, ecologists and evolutionary biologists surely are less likely to get caught, because in contrast to, say, cell and molecular biologists, we only rarely attempt to truly replicate one another’s work, in part (but only in part) because true replication often is impossible (Palmer 2000). And it’s failed attempts at replication that often lead to the discovery of fakery and fraud.

I’d like to think that ecologists and evolutionary biologists are, as a group, especially honest and ethical, even for scientists. But there’s a little part of me that worries that we’d never know if that weren’t true, because detecting fraud is even harder in ecology and evolution than in other fields, and potential fraudsters know that and so are less likely to be deterred by the possibility of being caught.

Honestly, I have no idea if this is something we ecologists and evolutionary biologists ought to worry about, or what we’d do differently if we were worried about it. It’s just a thought I had, that I thought I’d throw out there.

26 thoughts on “Is scientific misconduct especially rare in ecology and evolution?”

  1. From memory, I would say that the late Mike Majerus used to make a good case that Kettlewell might have fudged some of his data on peppered moths. Majerus successfully re-did these experiments to counter creationist movements that were using Kettlewell’s apparent misconduct to argue against natural selection.

    In the same vein, Gregor Mendel’s data were found to be “too good to be true”. There are probably other historical examples, but since Kettlewell’s and Mendel’s experiments appear in all the textbooks, they are the ones that come to mind.

    • Interesting suggestions, albeit from a rather different era. I knew of, but had forgotten about, both.

      IIRC, there was an article in Genetics not too long ago correcting some errors in Fisher’s reanalysis of Mendel’s data, and concluding that there’s no evidence for fudging on Mendel’s part.

      I do recall that there was a lot of pushback against Majerus’ claims about Kettlewell, but didn’t follow the dispute closely enough to come to any firm view of my own.

      I vaguely recall that Lysenko and his followers engaged in various sorts of misconduct, but that’s another long-ago case of perhaps limited relevance today (even assuming I’m correctly remembering that there was misconduct).

  2. This is a very important topic for several reasons, and it *is* something we need to worry about, very much, IMO. Thanks for raising it, and for the link to Palmer (2000), which looks highly interesting. One first point, though: it’s necessary to agree on exactly what misconduct consists of in order to discuss it without undue miscommunication. I tend toward a very liberal definition that includes basically any analysis decisions made, data used, etc., that can’t be verified by others–any kind of dishonesty. Others favor a narrower definition.

    I believe you basically nail the reason above: the problem is in the detecting. I seriously doubt that ecology has less misconduct than other disciplines. I don’t know if Palmer discusses this, but a high degree of sharing/openness of data has to be in place if you are going to really have people validating others’ work. We don’t have that in ecology. We have, instead, an enormous body of literature largely based on unique collections of data from particular systems at particular times and places, which very often do not get shared outside of the research group that collected them. This means that you can neither use them in meta-analyses nor evaluate the conclusions the original research groups derived from them. There’s been some improvement in this area recently, with things like LTER data pages and journals demanding data sets and/or code as a condition of publication, but even these data sets are rarely global in scope, and the enterprise in general still falls far short of optimal.

    This issue is starkly clear when you compare climate science with ecology. In climate science, there are enormous regional, hemispheric, and global data sets of all kinds, collected, maintained, and archived by large federal agencies, freely accessible to anyone, with defined and refined quality control and standardization/homogenization procedures in place, and in many cases freely available software to analyze them with. These include both empirical and modeled data of all kinds. You can’t even begin to wrap your mind around it all. The contrast with ecology could hardly be more stark.

    And so here’s the issue. Or at least one issue.

    In spite of this fact, climate science, and climate scientists, have been under ferocious attack for at least a decade now by those with a vested interest in planting doubt about the reality of anthropogenic global warming (AGW). Notwithstanding the validity, robustness, general openness, and mutual reinforcement of many (if not most) of these data, the attacks have been continuous and from all possible angles, often repeated over and over in a constant game of whack-a-mole. Look, for example, at what is going on right now with Richard Muller and the BEST analysis.

    As these folks slowly come to the realization that the evidence from the physical climate data is just too strong to dispute, they will turn their attention to estimates of climate change effects: effects on agriculture, on ecosystems, on societal institutions, etc. And there they will find a grand field in which to challenge all kinds of research findings, because the data used in them have never been systematically maintained, archived, and made freely and widely available to the wider world; because the conclusions arrived at have never been verified by anyone else; because the data apply only to some limited location or are valid only for a specific time and place; and a long list of similar issues that can be exploited.

    And the conclusions of those studies will absolutely be ripped to shreds in the public media circus arena in which these people and organizations operate. It will be like a bunny rabbit against a wolf.

    • These are excellent points, Jim. But I think there is a growing movement amongst eco-evolutionary biologists to make their data public.

      The NCEAS databases, the GPDD, and Dryad are all great initiatives that make reanalysis of existing data sets considerably easier than it has previously been.

      As Jeremy says, misconduct is very hard to spot, so it’s hard to see why EEBs would be less (or more) likely to engage in it than researchers in other fields. Given the inherent financial limitations and pressures for novelty, and a general tendency for individual researchers/groups to monopolize particular study systems, the potential for catching misconduct is rather low.

      One unusual thing about Stapel is that he’s already been so open about what he did. Psychologists all study the same organism, humans, so one might suggest that they’re at greater risk of being caught through independent replication.

      • Yes, thanks for those data examples, Mike. There are also things like the USFS Forest Inventory and Analysis (FIA) and USDA NRCS data, which are well organized and maintained; growing collections of phenology data sets; and others. These are the forerunners of what we need, for even more variables and at even larger spatial scales.

  3. I too wonder about undetected or, even more unfortunate, unreported misconduct. I know of one ecologist, a bit of a rising star in his subdiscipline not so long ago, who had a collaborator who became very suspicious. I’m not privy to the details, but I believe it involved the collaborator reporting it to the funding agency, and the scientist in question promptly quitting his tenure-track position at an R1 institution and taking a job at a conservation organization. I believe at least several papers are potentially fraudulent, but I know of no retractions. It seems like a situation that was kept quiet, for reasons I don’t understand. Maybe in the absence of strong evidence, those in the know did not want to sully the man’s reputation? I know I don’t really spread it around, but that’s mostly because I don’t know the story well enough, nor am I in a position as a graduate student to take chances like that. In any case, I wonder how many scientists, if they really thought about it, would be able to come up with some instances that seemed suspicious but which they never deemed worthy of making a fuss about.
    P.S. If you are going to discuss fraud in evolutionary biology, one can’t forget about Piltdown man…

    • I don’t want this discussion thread to descend into guessing games and innuendo, so I’m not going to speculate or comment on the situation you discuss (with which I am unfamiliar in any case). I encourage commenters to stick to cases in the public record, and to discussing general issues such as those raised by your comment (e.g., how do you decide which suspicions are serious enough to act on, and what’s the appropriate way for both individuals and institutions such as employers and granting agencies to act on them?).

  4. Didn’t really intend to go off on such a tangent there, which had more to do with data sharing and openness than misconduct per se. The connection is that you increase the chances of misconduct–or at least bad science–if data aren’t open and widely shared, which they largely are not in ecology.

    • Although the thing about Stapel is that he *did* share his data–he’d make up a dataset and then hand it to a student or collaborator to analyze. I definitely agree that data sharing is a deterrent to falsification–it’s infamously hard to fake realistic-looking raw data–but apparently some data fudgers are undeterrable.

  5. We should definitely keep this discussion at a general level.
    I do agree that the number of retractions is surprisingly low in ecology (the only one I am aware of is the Pape Møller case reported above). In addition, the Oikos-family journal Nordic Journal of Botany had a retraction last year due to a dispute over authorship. But that journal is not really core ecology.

    As a manager of Oikos I have identified numerous cases of misconduct, but those cases were handled and closed before publication, and hence have not been public. These relate to disputes over authorship, plagiarism, duplicate publication, using other scientists’ data without acknowledging that fact, etc.

    For most of these cases we were alerted by sharp-eyed reviewers and editors. Without them, several of the cases would have been published and would possibly have become future retractions.

    We should also keep in mind that the perception of what constitutes scientific misconduct varies among cultures. The present ‘western’ perception is not yet fully adopted by all.

    So are reviewers and editors in ecology more meticulous or are authors in ecology more honest?

    It is a very important discussion and I will enjoy following it.

  6. I do not think that irreproducibility by itself constitutes misconduct, especially in ecology. Take a field experiment: all sorts of conditions can differ when another researcher repeats it. Different area, different year. In addition, a study might also yield results that just happen to fall within the error probability. Combined with the strictures of funding, some 5% of studies should report false results. Usually that is no problem, as most publications never get cited (built upon). This is a sort of inbuilt self-cleansing mechanism of science, IMHO.

    The claim of misconduct is usually associated with whistle-blowers who happen to have close insight into the study. This might be different in lab studies, where one lab should be able to exactly reproduce the findings of another.

    Hence, ecologists might not be better than other scientists, but ecological data might be noisier, thus drowning out more misconduct.

    P.S.: Retraction Watch is a blog by Adam Marcus and Ivan Oransky with the aim of collecting and reporting on retractions. So far they have mainly reported retractions in medical research, but this might be due to their own training/interests and their readership/sources. Just now, however, there is something very eco/evo being discussed in a post from Nov. 3. (Was that general enough?)

    • The recent report on Retraction Watch about a retracted paper on speciation is a pretty unusual case. Nominally, the retraction is for duplicate publication, but it sounds like the real reason is that the journal concerned was embarrassed by criticism of the paper and cast about for an excuse to un-accept it (the paper describes a truly bizarre hypothesis for the origin of butterflies, involving matings between distantly related species instantly giving rise to new species).

  7. Yes, and neither “bizarre” nor “false” falls into the categories of fraudulent or unscientific.

    Moreover, retracting duplicated papers seems to be a matter of publishing policy rather than misconduct. I remember a review paper that was published verbatim in two journals, and neither journal retracted it.

    • “retracting duplicated papers seems to be a publishing policy rather than a matter of misconduct”

      It’s not just a publishing policy. Many employers and prospective employers would consider duplicate publication, without appropriate attribution, to be misconduct, albeit perhaps mild misconduct. “Self-plagiarism” is padding your cv, making your contributions to the scholarly literature look greater than they actually are. Obviously, this doesn’t mean that scientists shouldn’t ever repeat themselves, or at least paraphrase themselves–but they need to do so with proper attribution. I think duplicate publication is also a violation of copyright law if the publisher of the first version holds the copyright and hasn’t granted permission for re-publication.

  8. Just came across this:

    A new book by Robert Trivers, Brian G. Palestis, and Darine Zaatari
    The Anatomy of a Fraud: Symmetry and Dance (2009, TPZ Publishers).

    A blurb says:
    “A thorough reanalysis of […] ‘Dance reveals symmetry especially in young men’ shows that all of the major results appear to be based on hidden procedures designed to produce the results later derived. These procedures include the pre-selection of animations of Jamaicans dancing, apparently based on preliminary evaluation in New Jersey, so as to exclude symmetrical individuals who danced poorly and asymmetrical ones who danced well (N = 10 out of 10, P < 0.001). There are also systematic biases in averaging dance evaluations so as to produce significant results where none exist and more highly significant ones than do, in fact, exist.”

    I’m getting a hunch that the closer a research field is to humans and their interests, the more fraud is to be expected, regardless of whether the research in question is medicine, ecology, evolution, or psychology.

    • Ok, this is strange. I went to grad school at Rutgers in New Jersey, where Trivers is a prof. At the time, he himself was working on a project on fluctuating asymmetry in Jamaican children. And sure enough, Trivers himself is actually one of the authors of the study which this book reanalyzes! (the study is here). So Trivers is (presumably) accusing one of his own co-authors of fraud. And in a book! There’s got to be more to this story, but a bit of googling turns up nothing…

      Irrelevant aside: the second author was a fellow grad student of mine at Rutgers (neither of us was in Trivers’ lab).

    • An email from someone who’s spoken to Bob Trivers about the situation provides a bit more detail. Turns out the data were bogus, and couldn’t be reproduced. Bob tried to get the paper retracted but couldn’t (I don’t have any info on why this didn’t happen; could be for various reasons), and so decided the best way to set the record straight was to write this little book (which I hear is pretty convincing, though I haven’t read it myself). He’s also given at least one public seminar on the situation. Seems like an admirably, and perhaps unusually, up-front response to the situation on his part.

      • Up-front okay, but unfortunately not open access.

        The shipping (international) would cost me more than the book.

        Will I buy it? No; I haven’t been into the research in question, so I won’t, though it would be interesting.

  9. Pingback: Ecologists need to do a better job of prediction – part I – the insidious evils of ANOVA (UPDATED) | Dynamic Ecology

  10. Pingback: Friday links: Big Data or Pig Data? | Dynamic Ecology

  11. Pingback: Friday links: s**t students write, do big name scientists have too much money, and more | Dynamic Ecology

  12. Pingback: When, if ever, is it ok for a paper to gloss over or ignore criticisms of the authors’ approach? (UPDATED) | Dynamic Ecology

  13. Pingback: Friday links: a purposeful scientific life, zombie statistics, silly science acronyms, Tarantino vs. Plato, and more (UPDATED) | Dynamic Ecology

  14. Pingback: Scientific ethics discussions in labs | Dynamic Ecology

  15. Pingback: Friday links: how to spot nothing, Aaron Ellison vs. Malcolm Gladwell, and more | Dynamic Ecology

  16. Pingback: The history of retractions from ecology and evolution journals | Dynamic Ecology
