The Paper That Ecology Rejected That Later Won the Mercer Award

There are a few “story behind the paper” style posts that I have in mind, and this one is the first of them. For this one, I’m going to focus on a paper that I wrote in collaboration with Spencer Hall. The hook for this story, as the title indicates, is that it’s a paper that we originally submitted to Ecology, where it was rejected. We then submitted it to AmNat, where it was accepted. That AmNat paper is what later won the Mercer Award. This sounds like a classic case of “Those stupid reviewers got it wrong,” right? But it’s not. To me, this is a peer-review success story. The paper that ended up getting published is much, much better than the one we originally submitted to Ecology. That paper would not have won an award. So this story actually shows the value of peer review, and it’s part of why I would be pretty hesitant to go with the “no revisions” option Jeremy posted about recently.

But first, let’s back up. A quick overview of the paper: it is based on a chapter from my dissertation. In fact, it’s based on the chapter of my dissertation that I struggled with writing the most, and that I really came to hate. (Tip for the grad students: arrange things so that the chapter you hate the most is NOT the first chapter you discuss at your defense. It was really stupid of me not to move this chapter to the end, just so I could get on a roll with the other chapters first.) The key question of the dissertation chapter and the Ecology submission was how predation influences parasite prevalence and host population dynamics. The paper combined a bunch of different things: characterization of the selectivity of fish predation on Daphnia infected with two different parasites; studies on the individual-level effects of those two parasites; detailed population-dynamical studies on five lake populations, carried out during epidemics of those parasites; and an epidemiological model trying to link all the empirical data together. It was a whole lot of field work in one paper, and I had completely exhausted myself collecting it. The dynamical sampling involved sampling lakes just about daily (there actually were more lakes that I sampled, but those ended up needing to be dropped for various reasons, including, in one case, the addition of a massive amount of copper sulfate to the lake during the dynamical study). And, on top of that, I needed to go fishing at dawn to get the fish selectivity data, and do Schindler series during the day and at night to get data on habitat use, so I could figure out which water temperatures to use for egg development times in the dynamics data. And all the samples needed to be counted live (we can’t see the parasites in preserved samples), which didn’t leave a whole lot of time for sleep. All of which is to say: I had a whole lot invested in this paper. I should also say that part of why I could do all that field work is that my father was my field assistant for this study; he was a really fantastic field assistant, and I highly recommend exploiting family members whenever possible. 😉

The big problem for this paper in review was the modeling component: in short, the model did a bad job of capturing the dynamics of the system. It predicted that the parasite should become endemic (that is, that it should persist indefinitely in the host population). In nature, we always see epidemics, where infections go from rare to common to rare in a fairly short period of time. So, we were hinging our whole explanation of things on a model that did a really bad job of describing the system. In our defense, we acknowledged this, and only analyzed the invasion of the parasite, where the model did a good job. But, still, the model wasn’t a good one for our system. So, after two rounds of review at Ecology, the paper was rejected, in large part because of the model.
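
(For readers who haven’t played with these kinds of models, here is a deliberately generic sketch, in Python with made-up parameter names and values, of the sort of susceptible-infected model with selective predation I’m describing. It is not the model from the dissertation chapter or from either submission; it is just meant to show why a model like this, with everything held constant, tends to settle at a steady endemic level rather than producing boom-and-bust epidemics.)

```python
# Toy susceptible-infected (SI) host-parasite model with predation that
# preferentially removes infected hosts. This is a generic illustration only,
# NOT the model from the dissertation chapter or either journal submission;
# all parameter names and values below are made up for the sketch.
import numpy as np
from scipy.integrate import odeint

def si_with_selective_predation(y, t, r, K, d, beta, v, f, theta):
    """Right-hand side of the ODEs for susceptible (S) and infected (I) hosts.

    r, K  : host birth rate and carrying capacity
    d     : background host death rate
    beta  : transmission rate
    v     : extra (virulence) mortality of infected hosts
    f     : predation rate on susceptible hosts
    theta : selectivity of predation on infected hosts (theta > 1 means
            predators preferentially eat infected prey)
    """
    S, I = y
    N = S + I
    dS = r * S * (1.0 - N / K) - d * S - beta * S * I - f * S
    dI = beta * S * I - (d + v) * I - theta * f * I
    return [dS, dI]

# Made-up illustrative parameters. With everything held constant like this,
# the model typically damps toward an endemic equilibrium (a steady infection
# level) instead of producing the boom-and-bust epidemics seen in the lakes.
params = (0.5, 100.0, 0.05, 0.01, 0.05, 0.02, 3.0)
t = np.linspace(0.0, 500.0, 2000)
trajectory = odeint(si_with_selective_predation, [99.0, 1.0], t, args=params)

prevalence = trajectory[:, 1] / trajectory.sum(axis=1)
print(f"infection prevalence at the end of the run: {prevalence[-1]:.2f}")
```

The point of the toy example is only the qualitative behavior; the published model is richer, and (as described below) the revised version also included rapid evolution of host susceptibility.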

It became clear, then, that we needed to do something to make the model better. By this point, I had already developed the evolutionary epidemiological model that was the focus of my 2007 paper with Lena Sivars-Becker. That model captures the dynamics of our system quite nicely. So, we decided to combine the model from the original paper (which added selective predation to an epidemiological model) with the evolutionary epidemiological model from the Duffy & Sivars-Becker paper. In our initial submission to AmNat, we only modeled one of the two parasites, because we only had data on genetic variation in susceptibility to that one parasite. The reviews came back mostly positive, but really wanting us to use the model to compare the two parasites. So, we revised the paper yet again to add the second parasite species into the modeling component of the paper. This allowed us to understand the joint effects of selective predation and rapid evolution on the impacts that these two parasites have on the host population, and helped explain the different impacts we had observed for these two parasites in our natural populations. And that is a really, really important part of the paper. But I wouldn’t have gotten there without a whole bunch of prodding from reviewers and Associate Editors (especially Yannis Michalakis at AmNat).

So, as I said at the beginning, while it’s fun to say that the paper that won a major award from ESA was rejected by their society journal, the version we originally submitted probably deserved that rejection.

Comments on “The Paper That Ecology Rejected That Later Won the Mercer Award”

  1. Good to be reminded that pre-publication peer review often works. And not just for catching out-and-out errors, but for pushing authors to make their papers better in all sorts of ways. And that rejection and heavy revision is the normal course of events, it happens to everyone, and it doesn’t mean you’re a bad scientist or that your paper sucks.

    Unfortunately, the timing of your post, so close to my birthday, has reminded me that I am now officially too old to win the Mercer award. 😦

    • Sorry for the timing of the post!

      You said that this story helps show that “rejection and heavy revision is the normal course of events, it happens to everyone, and it doesn’t mean you’re a bad scientist or that your paper sucks.” I totally agree. This was definitely hard to learn, though. Those first few rejections hurt a whole lot. My theory is that, early on, so much of one’s science identity is tied up in each individual paper that it’s much harder not to take it personally. Now, while I still don’t enjoy having papers rejected, I no longer take it personally.

      • That’s a very good point about rejection being harder to take when you don’t already have a number of published papers to your name. That’s something that too-old-for-the-Mercer-Award guys like me sometimes forget. 😉

  2. This is a great story. I remember admiring the paper when it came out (it just seemed like every corner was squared), and in hindsight it doesn’t shock me that reviewers provided some of that push to go beyond. And in general, I totally agree that the review process can greatly improve papers. As an associate editor, I kind of have to believe that; I spend a lot of time trying to improve papers, not just being a gatekeeper.

    It raises a really interesting question, though, because there are a lot of reviews out there that are low value or that even make a paper worse. This will always be true, but I wonder if there isn’t a way to improve the ratio of value-added to value-detracting reviews.

    It would be great if there were some sort of global reviewer reputation system (one that still maintained the anonymity of specific reviewers), but I don’t think that will happen in my lifetime. I suppose much of it falls back on the editors, who need to stop behaving like the overworked people they are?

    • Yeah, part of why I think Yannis deserved special mention is that, as an AE, he provided a thorough review, and also gave feedback on which parts of the peer reviews to really focus on addressing. It was really thoughtful feedback, which I think is common from AEs at AmNat, where the AE serves essentially as another reviewer.

      I wish I knew how to improve the overall quality of reviews. I’ve actually wondered if individual journals have a way of rating reviewers. It seems like it would be hard to have a global database of this, but that journals could do this on their own.

      • Re: AEs at Am Nat acting as additional reviewers, that’s often the way I acted at Oikos. I’m not sure if that was unusual.

        I will note that Am Nat is the journal that, if memory serves, rejected Fox et al. 2010 without external review. We added one minor appendix, tweaked a word here and there, and sent it to Ecology, where it sailed through with the best reviews I’ve ever gotten in my life. It was that experience that prodded me to start thinking seriously about reforming the peer review system and eventually led to the PubCreds idea. Rejection without review has its place, but in my experience selective journals are coming to rely on it far too much, and in a much wider range of circumstances than they will admit publicly. PubCreds was an attempt to create a world in which Fox et al. 2010 at least gets a fair hearing at Am Nat.

        I don’t know that I’d make anything of this; it’s just one anecdote among many. But since we were talking about Am Nat, I figured I’d throw it out there.

    • “a lot of reviews out there are of low quality”

      When I was at Oikos, I found the overall quality of reviews to be good. Most of them were helpful, to me and to the authors. But I had no hesitation at all about disagreeing with reviewers when necessary. In my 6 years of editing experience, this was never (with one possible exception) a matter of reviewers being sloppy or biased. It was a matter of me not agreeing with their judgements as to what was wrong with the ms, and what (if anything) would make the ms better. The way I’d deal with such situations would be to give the authors detailed guidance in my cover letter as to what revisions I wanted to see, including suggestions on how to respond to any criticisms of the reviewers with which I disagreed. There were a few times when I did this even when I disagreed with all the reviewers (although more commonly, I was taking the side of one reviewer against another).

      If editors were merely there to count the votes of the reviewers, there’d basically be no need for editors; online ms handling systems could just tally up the reviewers’ votes and auto-generate decisions and decision letters. As an editor, you’re entrusted with decision-making power, and your responsibility is to make the best decision you can. You do that by taking the reviewers’ advice seriously, but not by following it slavishly.

      Re: reputation systems for peer reviewers, this is something Peerage of Science is trying to produce. But of course, that’s a members-only thing that’s just getting off the ground. And of course, most journals keep their own databases.

    • Jeremy – I agree with you that an AE *should* take a thoughtful, non-vote-counting approach. Many do, some don’t. I would also agree with both you & Meg that in my experience both AmNat & Oikos tend to have above-average editors (and reviewers). This may be inversely correlated with an obsession with turning reviews around in a short, fixed amount of time; not surprisingly, emphasizing speed is a different emphasis than emphasizing the quality of the review process. Certainly when I AE a paper, I try to be thoughtful (but I know I am not on any editor’s list for fastest turnaround either).

      I also agree, Jeremy, that reject without review is used far too much these days. I try to avoid it as an AE except when it’s really obvious.

      And Meg – I don’t know if your question about rating reviewers within a journal got answered. But 2 of the 3 journals I AE for let the AE rate the reviewer. In my experience this system is of only modest use, because it requires AEs to fill the ratings out regularly and thoughtfully (which is often one task too many) and because the challenge of finding reviewers with the right background often doesn’t allow one to be choosy anymore.

      • Thanks for the additional info. I guess the problem of AEs already being overburdened makes it even less likely that a global database of reviewer quality will be developed.

        I wonder how much of an impact asking for a shorter time-to-review has on the quality of reviews. I’m sure there are some cases where someone needs a longer window in order to do a review really thoughtfully. But I also think there are a lot of people who don’t think about doing a review until pretty close to the deadline for submitting it.

  3. meg!
    awesome post. i actually have an incredibly similar story, although in the opposite direction (rejected am nat, accepted ecology), and i doubt it will win me the mercer, but it improved IMMENSELY after the reviews-upon-rejection from am nat. it got even better with a round of reviews at ecology, so i was kinda lucky in getting two really good groups of reviewers (and ae’s).

    for all the negative experiences out there in peer review land [and there are many], there are also tons of incredibly positive and progressive rejection reviews. it just sucks that it’s kind of a crapshoot, at least in my experience.

    also, i love that your dad was your field assistant.

    cheers!
    –joe

    • It’s great to hear that you had a similar experience! It’s definitely disappointing when you get peer reviews that aren’t thoughtful, but I think I’ve had many more thoughtful ones than not.

      And, yeah, my father was pretty much the ideal field assistant. Super careful about data collection, and then he’d go home and walk the dog and make dinner. What more could you ask for? 😉

  4. I agree that peer reviewers can make a huge impact on turning ordinary papers into seminal papers. And a reputation and reward system to encourage this (and to punish sloppy work) is really needed, but it needs to be wider than just one journal’s or even one publisher’s own (often secret) databases.

    About Peerage of Science being members-only: actually, any scientist can join simply by submitting a manuscript. Proof that one is “a scientist” is either a) earlier publications as first or corresponding author in reputable peer-reviewed journals, or b) if the submitted manuscript is the author’s first, successful peer review of that very manuscript within the service.

    Quoting submission instructions from the Peerage of Science website:

    “All authors who have previously published one or more scientific articles as first or corresponding author in reputable international peer reviewed journals, will receive invitation to join Peerage of Science, and can then in turn invite other co-authors themselves.”
