From Jeremy:
Calcagno et al. (in press at Science) looks to be a very interesting study, by some ace evolutionary biologists and ecologists, of scientific submission practices. They have lots of data on stuff that we all tend to have strong opinions on based on little more than our own anecdotal experience. Some interesting nuggets: 75% of published articles are submitted first to the journal that eventually publishes them, high impact journals publish proportionally more articles that previously were submitted to another journal, and papers that are rejected and resubmitted elsewhere attract significantly more citations than papers published in the author’s first-choice journal. As to whether that last conclusion implies that rejection and subsequent revision leads to better, and thus more highly cited, papers, maybe–but Cheap Talk argues maybe not. No word on whether rejected, resubmitted papers also are more likely to win the Mercer Award. 😉 (HT The Molecular Ecologist)
Steve Walker has a fun post on using “basis vectors” to locate different statistical philosophies in a 4D space. Yes, I did just use “fun” and “statistical philosophies” in the same sentence. He uses me as an example, and I think he places me pretty well: for me, τ=ε, ς>>0, β<<0, γ=ε. You’ll have to click through to find out what that means, and to locate yourself.
Remember how in one of my posts marking Peter Abrams’ retirement I praised him for killing off the idea of “ratio dependent predation” before it could establish itself? Well, apparently Peter’s work isn’t quite done. Don’t go yet, Peter!
From Meg:
I found this blog post on work-life balance and the number of hours professors work to be interesting. As a postdoc, I tried logging all my working hours for a while. It was really eye-opening. I was much less efficient than anticipated. I think that exercise helped make me more efficient, which is always good for work-life balance.
From the archives:
Why ignorance is bliss (sometimes).
The story of Jeremy’s first publication. Also the story of Jeremy’s first and (so far) last field experiment. It’s got everything! Rotifers! A huge storm! R* theory! Demographic stochasticity! Jeremy misciting himself–twice! And of course, a happy ending.
I’m curious: I met Lev a few years ago and he gave me a full explication of…well, everything, from the importance of lagged maternal effects to ratio dependence. It was interesting, and had some compelling reasoning behind it. Curious if you’ve read his quixotic and intriguing Ecological Orbits and what you think. It’s a fast read. I’ve always wondered if his ideas might gain wider traction, as they are fairly testable (although I admit I haven’t dipped too deeply into the ratio-dependent well myself).
I think “quixotic” is a good way to describe Lev’s obsession with ratio-dependence. I don’t really have anything to say that Peter Abrams didn’t say in print years ago.
There’s an interesting take on the Calcagno paper here. I don’t buy the conspiracy that he’s seeing because I think the Calcagno paper is a remarkable snapshot of how science publishing works, but he is right about the ‘rejection improves citation’ idea.
My feelings are the same as yours–the “rejection improves citation” effect is really small. But Casey’s interpretation of why the ms was published by Science really does amount to a pretty naive conspiracy theory. The decision-making process at a journal like Science or Nature may have its flaws, perhaps even serious ones–but those flaws don’t arise from some Machiavellian effort on the part of the editors to publish papers that will somehow manipulate author behavior! As you say, there are much more obvious, parsimonious, and plausible explanations for why Calcagno et al. was published in Science.
I’m not claiming that the paper was accepted in Science because of a conspiracy to manipulate scientists’ behavior; clearly there is a lot in this paper that merits its publication somewhere.
What I am arguing is (i) that the stated effect is weak (which is obvious and is a point that has not yet been made clearly in the media) and (ii) that the media hype surrounding the paper, especially the Nature News piece, can be interpreted as being driven by a fairly obvious agenda – to get scientists to believe that rejection is good for us. Science itself is complicit in allowing a weak result to be published.
Bear in mind these magazines have agendas like every other corporate or bureaucratic entity – it would be naive to think they do not operate in their own self-interest, and it is equally naive to think their interests are always aligned 100% with those of their constituency. You only have to look at the massive resistance both journals put up to making their content open access to see that they do not always put the pure interests of science above all else.
Jeremy, I put the case to you: do you think Science would have been so keen to publish this result, and Nature would have picked it up and hyped it, if the effect had been equally weak but in the opposite direction?
So on the one hand, you think there’s a lot in the paper that merits publication, and on the other hand you question whether Science would’ve published it if one particular result had been weak in the opposite direction? I’m confused. But to answer your question, yes, I can certainly see Science publishing this paper even if that one result had come out differently (either significant in the opposite direction, or not significant). And I can certainly see science news outlets, including Nature, picking up on it.
Lots of scientific papers get hyped. Especially those published in Science and Nature. And often (as recent studies comparing abstracts and press releases show) the ultimate source of that hype is the authors themselves (I’m making a general point here, not commenting on Calcagno et al. specifically). I take it you don’t like that, which is fine. But I’m sorry, I don’t see anything particularly unusual or nefarious about how this particular paper has been treated in the media, either at Nature or elsewhere. So I find the “interpretation” that Nature News is hyping this specific paper with the specific purpose of convincing scientists that rejection is good for us to be quite implausible. For one thing, scientists don’t need convincing–rigorous global surveys with sample sizes of thousands of scientists show that scientists believe that peer review improves their papers.
Yes, the publishers of Nature and Science have interests that they pursue, by doing things like hyping *all* their content, and by not giving their content away. And yes, those interests are *their* interests, not necessarily the interests of scientists, or science as a whole. But frankly and with respect, your story about what their interests are in this particular case, and how they’re putatively pursuing those interests is not at all plausible to me. I hope you recognize that there is a big gap between the general claim that “Nature and Science have their own interests and pursue them” and the quite specific claims you’ve made in this specific case. With respect, it seems to me that you’ve filled in that gap with pure speculation, not evidence.
I guess we differ on this issue in fairly fundamental ways. I do question whether Science would have published this paper without the “rejection improves impact” result. I wouldn’t be surprised if this was one of the major results that carried its acceptance, and without it I doubt the paper would have had such a wide readership (only the network and scientometric wonks would get excited – rather than every scientist). The fact that the media picked up primarily on this message I take as good evidence that this is actually the “headline” result of the paper, and I wouldn’t be surprised if the reviewers felt this way too.
I do think this paper (actually this particular result) received particularly uncritical attention in the media, especially since the headline “impact” result is not even the major or strongest result of the paper. Only Science Insider noted that the effect is weak; all other outlets repeated the “significantly more citations” idea (as you do above). I do agree that peer review improves papers – but I disagree with your conflating peer review and rejection as being the same thing.
And with respect to evidence, neither you nor I have any more in this case. It is a matter of differing views on the sociology of science and publishing in high-impact journals. I have drawn up what I think is a plausible scenario for people to consider as an alternative to the one-sided message that is coming from the mainstream science media. You obviously don’t buy it, which is fine by me. My primary goal is to generate debate about what I see as a relatively uncritical discussion coming out of the mainstream media, which reinforces a viewpoint that I think underlies a lot of the problems in science and scientific publishing. At least now more than one perspective is on the table for people to swallow: blue pill or red?
Thanks for your further comments Casey.
No, I don’t have any more evidence than you as to why the paper was accepted, or why certain results have been highlighted in the media more than others. Which is why, in contrast to you, I’m not inclined to speculate on these matters one way or the other. I speculate when I have a reason to speculate. When I say that I think there were good reasons to accept the paper, independent of the weak result on rejection and citation rates, I am providing my own judgment about the paper, not speculating on the motives of the reviewers or the handling editor. Similarly, when I say that I don’t think this paper is unique or unusual in terms of how it was reported, I’m reporting my own judgment, based on what I know about this paper and how it has been reported, compared to many other Science and Nature papers and how they were reported. I am not opposing my speculations to yours. Only one of us is speculating.
You say you want to generate debate. With respect, I suggest you may want to consider exactly which issues you wish to generate debate on, and how to do that most effectively. I confess I remain quite confused about what you’re trying to do, but as best I can tell you seem to be trying to do (at least) two things at once. On the one hand, you are telling an admittedly-speculative story about why this particular paper was published, and why this particular result was hyped in what you claim is an unusually-uncritical manner by the media. On the other hand, you are concerned with this paper and the media reaction to it as merely one instance of what you see as much wider problems in science and science publishing. I’m afraid I find this quite confusing, because those two things are almost contradictory. Either this paper is a unique case (in which case, why are you so worried about it, since it’s atypical?), or else it’s merely one instance of very general problems (in which case, why not draw on a much broader range of evidence to highlight those problems?).
Further, insofar as you’re out to use this paper as a way to make a point about more general problems (and I’m not clear if you are or not), I think you undermine yourself by speculating (although I appreciate your honesty in admitting that you are speculating). To be clear, I think you’re on firm ground on the narrow issue of the weakness of the rejection-leads-to-higher-citation result. On that narrow issue, I certainly agree that the result is too weak to be worth getting very excited about (although I still don’t understand why you didn’t just say that much and stop, without going on to speculate about people’s motives).
But if you want to start a broader debate about science and scientific publishing practices, I think you’re going about it wrong. Speaking as someone who has some experience successfully starting debates among scientists, I think you’re going to need to do far more than just “put your perspective out there for people to swallow”. Unless you can make a plausible, evidence-based case on those broader issues, people aren’t going to choose the red pill or the blue pill. They’re not going to side with you or against you. They’re just going to ignore you, or maybe briefly consider you and then go back to ignoring you. And that’s not because people aren’t aware of or don’t care about these broader issues–far from it. A lot of scientists care a lot about peer review, scientific publishing practices, and media reporting of science these days. There’s already a lot of very well-informed debate going on, as I’m sure you’re well aware. So with respect, I’m confused about what you think you’re adding to the existing debate about these larger issues by spinning a speculative story about the motives of the people who reviewed and reported on one particular result in one particular paper.
Honestly, I think that for many scientific readers, your speculations about people’s motives will cause them to take you less seriously, and make them less likely to pay attention to you. I have no idea if you care about that or not; maybe you’re just happy to say what you think and people can take it or leave it, which is absolutely fair enough. But if you actually want to influence people and change minds (rather than, at best, preach to the converted), all I can tell you is that I think you’re going about it in very much the wrong way.
It’s good to know we agree on the central issue that there is only weak evidence (at best) for rejection leading to higher citation rates, while it’s safe to say we differ on many other issues concerning rhetoric. And thanks for the advice on what you see as the most effective blogging style – I’m just getting the hang of this medium myself, and it’s good to get the perspective of someone with more experience.
Jeremy,
No, I also don’t believe that the editors of the most prestigious journals are sitting back going “BWAHAHA, we’ve got control over the submission/rejection process and we’re going to make the poor bastards run the gauntlet before they get published, if at all”. A conspiracy? No, very unlikely. But that doesn’t mean that unconscious viewpoints, culturally conditioned expectations, and a degree of arrogance at these journals (explained away with “well, that’s just the way it is at the top”) aren’t at play in favoring the publication of a study like this.
Hi all. Just to let you know, what is really interesting to editors in our paper is the flows and connections between the different journals (which is the main result). This is what several journal editors have expressed interest in – not so much the results on citation count, even though those are obviously intriguing. One reason is that for a given journal there is, most of the time, not enough data to look at this effect (even though it shows up if we look at, e.g., PNAS alone).
Regarding the size of the effect, I would not claim it is big. Seriously, who’d think the simple fact of having been rejected before would explain a major part of citation count? An interesting talk by Adam Eyre-Walker last August at the Evolution meeting in Ottawa explained that even expert evaluation of manuscripts did not really help improve prediction of citation counts…
That said, Figure 4A is not ideal for estimating effect size (my fault). This is why I have posted more explanations and better figures on my website (http://vcalcagnoresearch.wordpress.com/2012/10/23/the-benefits-of-rejection-continued/).
See for yourself. I would not say the effect is tiny, even though this is subjective. A 30% lower chance of not being cited at all, for instance, is maybe not negligible.
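To put that in absolute terms, here is a toy calculation (a minimal sketch: the 20% baseline is assumed purely for illustration, it is not a figure from our data):

```python
# Toy calculation of what a "30% lower chance of never being cited"
# means in absolute terms. The 20% baseline is hypothetical.
p_uncited_baseline = 0.20                           # assumed share of first-intent papers never cited
p_uncited_resubmitted = p_uncited_baseline * (1 - 0.30)
print(p_uncited_resubmitted)                        # 0.14, i.e. 14% vs. 20%
```

Whether a drop from 20% to 14% uncited matters is, again, a subjective call.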
A last word about the buzz effect: yes, I was sad journalists over-focused on this specific aspect of the results. I’ve had to moderate their writing a couple of times when they wanted to write that resubmissions were “substantially more cited”. The most recent example is this post on Impact of Social Sciences:
http://blogs.lse.ac.uk/impactofsocialsciences/2012/10/25/calcagno-prepublication-history-citations/
My original title was “What can we learn from submission patterns?” and it was turned into “Rejection leads to higher citations” when posted, without anyone telling me…
Thanks very much for stopping by and clarifying some matters on which I think there’s been rather too much ungrounded speculation. And for offering your very sensible comments on the narrow issue of the rejection-increases-citations result. I think your comments will be particularly valuable for the many readers who haven’t had personal experience with preparing a high profile paper and then trying to manage the response.
I think this episode is an object lesson in how difficult it is to control how one’s work is received by the wider world (both by other scientists and by the media). I remember having conversations about this in grad school with labmates who were writing what they knew would be high-profile Nature papers. They sweated over every phrase in their papers, went through the press releases with a fine-tooth comb, etc. And in one case the result that everybody ended up talking about was a minor, throwaway one that wasn’t at all the focus of the paper. And in the other case my labmate ended up getting crazy questions from the media about whether his protist microcosm experiment showed that global warming was going to kill off polar bears.
I really enjoyed reading this discussion. I just left a more extended comment on Casey’s blog entry about our article. Here I’d just like to make one particular point about the magnitude of the observed effect. While I totally agree with everyone’s intuition that the effect of review on the citation count of a paper will not be huge, it’s also important to keep in mind that it wasn’t possible to control for absolutely all confounding variables. Quoting from my comment on Casey’s blog: “Thus, the observed effect is attenuated by the presence of other confounding factors that we couldn’t remove (this is simply due to the size of the dataset and the variables that were available to us). So the way I interpret the published result is that (a) there *is* an effect but (b) we really don’t know how big it is. I suspect, for instance, that the size of the effect could vary by field, journal, and publisher. These would all be really interesting factors to tease out in a followup study.”
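As a toy illustration of one way an estimated group difference can be pulled toward zero (a minimal sketch with made-up numbers, not our actual analysis): if the resubmission label itself is noisy, say because some authors do not report a paper’s prior submission history, a simulation shows the measured difference shrinking well below the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical world: resubmitted papers truly average 10% more citations.
resubmitted = rng.random(n) < 0.25                  # assume 25% are resubmissions
citations = rng.poisson(np.where(resubmitted, 11.0, 10.0))

# Non-differential label noise: 20% of papers have their submission
# history recorded incorrectly.
flipped = rng.random(n) < 0.20
observed = resubmitted ^ flipped

print(citations[resubmitted].mean() - citations[~resubmitted].mean())  # ~1.0 (true difference)
print(citations[observed].mean() - citations[~observed].mean())        # ~0.5 (attenuated)
```

This is only one possible mechanism, of course; other unmeasured factors could in principle push the estimate in either direction.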