In finance and economics, the answer is: pretty random. Ivo Welch has a new open access paper based on data from eight leading economics and finance journals. He shows that different referees typically exhibit only modest agreement in their recommendations, and that disagreement is not just a matter of some referees being pickier than others. Rather, disagreement arises in large part from differences among referees in their relative ranking of papers. If you think of referees as trying to estimate some “objective” (i.e. referee-agreeable) attribute of a paper, then referee decisions turn out to be one part “signal” (i.e. 1/3 dictated by that attribute) and two parts referee-specific “noise”. Discussion from James Choi here.
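(If it helps to see what “one part signal, two parts noise” cashes out to in practice, here’s a minimal simulation sketch. It’s my own toy illustration, not the Welch paper’s actual model: the normal scores, the 1/3 variance share, and the top-20% “accept” rule are all assumptions made purely for the sake of the example.)

```python
# Toy sketch of "1/3 signal, 2/3 noise" (illustrative only, not the Welch model).
# Each referee's score = shared paper-specific component + referee-specific noise,
# with the shared component set to 1/3 of the total variance.
import numpy as np

rng = np.random.default_rng(1)
n_papers = 100_000

quality = rng.normal(0.0, np.sqrt(1/3), n_papers)          # shared "signal" component
ref1 = quality + rng.normal(0.0, np.sqrt(2/3), n_papers)   # referee 1: signal + own noise
ref2 = quality + rng.normal(0.0, np.sqrt(2/3), n_papers)   # referee 2: signal + own noise

print(round(np.corrcoef(ref1, ref2)[0, 1], 3))  # ~0.33: the "1/3 signal" level of agreement

# Hypothetical accept rule: each referee "accepts" their top 20% of papers.
accept1 = ref1 > np.quantile(ref1, 0.8)
accept2 = ref2 > np.quantile(ref2, 0.8)
print(round(np.mean(accept2[accept1]), 2))      # well under half of referee 1's picks are also referee 2's
```

The real data are ordinal recommendations rather than continuous scores, but the basic point carries over: with a 1/3 signal share, two referees will part company on which papers they’d accept far more often than not.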
It’d be very interesting to do a similar study on ecology journals, eh? The purported crapshoot nature of peer review is something everybody (including me) has strong opinions about–based only on the small, non-random samples provided by their own personal experiences. Now that online ms handling systems are ubiquitous, I’ll bet the (appropriately-anonymized) data would be really easy for journals to provide.
So, who wants to take the lead in approaching the journals and asking for the data? Come on, Chris, this is right up your alley!
And assuming someone can obtain the data, anybody want to place any bets on whether ecology referees’ recommendations are more or less random than those of economics and finance referees?
UPDATE: I tried to write the post so as to avoid revealing my own view on whether 1/3 signal is “bad”, “good”, “the best we can do”, or whatever. But in the comments, I’ve been misread as arguing for PLoS ONE’s editorial system (which asks referees to judge papers only on technical soundness). So let me reveal my own view. If, as I’d guess, ecology reviews also are about 1/3 “signal”, I’m mostly ok with that, for two reasons. First, referee agreement or disagreement on whether a paper should be published isn’t, or shouldn’t be, the most important thing about reviews. Handling editors don’t, or shouldn’t, just count referee “votes”. They should use the reviews to inform their own judgments on how to make the paper better, and yes, on whether to accept the paper. That was my practice when I was a handling editor. Second, as discussed in previous posts (see here, here, and here), we are never going to do away with judgments about what work is “interesting” or “important”. Further, and importantly, there is just as much disagreement on those judgments among post-publication “reviewers” (i.e. readers) as there is among pre-publication reviewers. So as a reader with very limited time to read, I’m mostly ok with letting selective journals and their referees do a lot of “filtering” of the literature for me, as I’m used to this filtering system and I think the alternatives would be worse. Just because referees and editors of selective journals are making judgment calls, and just because a fair bit of reasonable disagreement is possible on those judgment calls, does not mean those judgment calls should not be made, or that they’d be made “better” (as opposed to merely “differently”) by some other system. But I’m old and set in my ways, so I would say that. 😉
UPDATE #2: Just wanted to highlight an ongoing exchange of comments with ace commenter Jim Bouldin. Jim expressed a view that I suspect is widely shared: the level of randomness and subjectivity in referee recommendations is appallingly high, and has no business affecting the direction of science. Commenter Mike Taylor expressed a similar view. To which I responded: referees are us. If referees disagree a lot about what science is most worth paying attention to, well, that just means we all disagree with one another a lot about what science is most worth paying attention to. Those disagreements will not go away, or become any less “random” or “subjective”, or stop affecting the course of our science, if we reform or replace the current pre-publication peer review system. That’s not necessarily an argument against any proposed reform of peer review, of course. It’s just a clarification of what any proposed reform would accomplish. “Eliminating or substantially reducing the random and subjective elements of our collective decisions concerning what science to pay attention to” is not a feasible goal for peer review reform, or for reform of any other aspect of current scientific practice. You can shift around where the randomness and subjectivity enter into our collective decision making process about what science is most worth paying attention to, and there may well be good arguments that they should only enter the process at certain points, but you can’t eliminate them. I don’t mean my comments on this as the last word by any means (Jim has indicated that he’s planning to reply in due course). But because many readers read the posts but not the comments, I wanted to highlight this ongoing discussion and encourage readers to follow it, as I suspect it’s a discussion of particular interest to many readers.
My money is on ecology being right at that 1/3 reproducible level as well. Maybe on the low end, since Choi’s blog post seems to imply that the authors think higher rejection rates lead to lower reproducibility. I find it very ironic that I’ve sat in so many committee meetings where professors harped on whether a student’s novel measurement method for an ecological property was repeatable and consistent, yet we so rarely turn that lens on ourselves. Thanks to the authors of this paper for doing so.
The 1/3 reproducibility also is very consistent with my own experience. My very first paper ever, in graduate school, got rejected twice and received a total of six reviews. Three were very positive and three were very negative. To salve my ego, I made up a page entitled “are these reviewers reading the same paper?” and then included quotes from each of the six reviewers. I posted it on my door through the rest of my PhD and postdoc, which got me through numerous rejections. At this point I’ve internalized the lesson enough that it is no longer posted on my door, but I still pull this page out for students when they get reviews that I feel say more about the reviewer than the paper.
I’ll spare you all of the quotes (it filled a whole page), but I’ll give you a couple of snippets just to show how contrary they were:
“This is a clearly written, insightful investigation” vs. “In general, the manuscript is not well-written and gives the impression that the submitted version is not the final version.”
and
“I was disappointed in the approach…[the main result] is a fairly trivial consequence [of the assumptions]… These are hardly worth calling results.” vs. “I found the approach novel, timely, and potentially of great relevance for the advancement of the field.”
After a baptism of fire like that, I’ve learned to trust my own inner voice rather than flap in the wind with every review (it doesn’t mean I get a paper into the journal I think it deserves any more often, but it preserves my sanity better). This does NOT mean I ignore all comments. It means I ignore about half and gratefully incorporate about half, using my judgement to figure out which. Sounds like maybe, objectively speaking, I should only be incorporating about 1/3… 😉
On a side note, if it is true that higher rejection rates lead to lower reproducibility, what does this say about NSF grant reviews and their reproducibility?
I too really like having data on this. It forces each of us to get away from focusing exclusively on our own experiences and reflect on exactly how repeatable we think the reviewing process ought to be. Obviously, it would be too much to expect all referees to be in perfect agreement. And conversely, it would be rather disturbing if there was no “signal” at all in these data, so that referee opinions were totally independent of any properties of the paper being reviewed. Which leads to the $64,000 question: are you ok with 1/3 “signal”, 2/3 “noise” (and I too would bet that data from ecology journals would come out with about a 1/3:2/3 ratio)?
“if it is true that higher rejection rates lead to lower reproducibility, what does this say about NSF grant reviews and their reproducibility?”
And what does it say about high rates of rejection without external review at leading journals?
My own best anecdote about the randomness of the review process (which I think I’ve related before on this blog) is a paper of mine (Fox et al. 2010) that was rejected without review by Am Nat, and after addition of one minor appendix and other very minor tweaks, was accepted at Ecology with the best reviews I’ve ever gotten in my life. That experience was what prompted me to start thinking about ways to reform the peer review process so that leading journals would no longer feel the need to reject stuff without review, eventually leading to the idea of PubCreds.
But as ridiculous as that Am Nat editor’s decision was, the fact is that I’ve had lots of positive experiences with the peer review process as well. And I’m far from alone: big international surveys find that academics have positive views of peer review overall (e.g., they feel it improves their papers). Of course, maybe that just means that we all manage, as you did, to learn to accept the peer review process. That is, we lower our expectations until they’re low enough that the outcome of the process mostly meets or exceeds our expectations! 😉
Of course, a key question about these data is whether it makes sense to think of papers as having an objective “quality” attribute that referees are all trying to estimate. Perhaps what agreement there is is just a statistical epiphenomenon. Every referee is looking for something different, and it just so happens that about 1/3 of the time, the two referees that happen to be chosen are two who tend to look for roughly the same thing…
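(To make that “epiphenomenon” possibility concrete, here’s a second toy sketch, again my own illustration rather than anything from the Welch paper: the three independent attributes and the referee “tastes” are pure assumptions. Even with no shared quality component at all, two referees who each score on a single attribute will happen to share a criterion about 1/3 of the time, and that coincidence alone produces roughly a 1/3 correlation between their scores.)

```python
# Toy sketch of the "epiphenomenon" reading (illustrative only): no shared quality,
# just three independent paper attributes and referees who each score on one of them.
import numpy as np

rng = np.random.default_rng(2)
n_papers, n_attributes = 100_000, 3

attributes = rng.normal(size=(n_papers, n_attributes))  # independent, equally "real" attributes
taste1 = rng.integers(n_attributes, size=n_papers)      # attribute referee 1 happens to care about
taste2 = rng.integers(n_attributes, size=n_papers)      # attribute referee 2 happens to care about

score1 = attributes[np.arange(n_papers), taste1]
score2 = attributes[np.arange(n_papers), taste2]

print(round(np.mean(taste1 == taste2), 3))              # ~0.33: pairs that share a criterion
print(round(np.corrcoef(score1, score2)[0, 1], 3))      # ~0.33: agreement with no common "quality" at all
```

The point is just that an aggregate 1/3 figure can’t by itself distinguish “everyone noisily estimating the same underlying quality” from “different referees cleanly measuring different things”.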
The other thing I’ll say is that, when I was a handling editor, I didn’t really care too much what the referees recommended in terms of reject/revise/accept. What I really wanted was the details of their review–the strengths of the ms, the weaknesses, how the ms could be improved, etc. I felt like I was in a much better position than the referees to decide what mss were worth publishing in Oikos (either as is, or after revision). Which perhaps just changes the locus of randomness, since I have the sense that at Oikos there was quite a bit of disagreement among editors about what sort of thing was worth publishing in Oikos.
I agree with your editing philosophy (and apply it myself): the editor exists to do more than count reviewer votes. But it raises the next big follow-on question to the study you originally cited. Do more experienced reviewers/editors have greater reproducibility/consistency among themselves? If not, then your “just changing the locus of randomness” hypothesis is true. I would like to think it is not true, but as an editor I am hardly an objective observer.
Curious to hear what our fellow Dynamic Ecologist Chris Klausmeier thinks of this. In the past, when I’ve suggested that the review process has a fair bit of randomness in it, Chris has pushed back.
From the Welch Abstract:
To me, this suggests that the ‘randomness’ component lies as much in the choice of referee as in the reproducibility of decisions for a single manuscript.
Oh very much so. As I said in other comments, referees are us. That referees disagree, and that the recommendations of referees might well change if the editor had chosen different referees, are just symptoms of the fact that we all disagree with one another.
Referees differ in their “relative ranking of papers” – this is unsurprising and not worth worrying about. The subjective ranking of papers is, well, subjective.
To echo Jeremy Fox, a reviewer’s main role is spotting possible errors and misinterpretations, checking that the paper fairly discusses prior literature, etc. Even on more objective matters than ‘ranking’, different reviewers will know about and care about different things, so will inevitably disagree. It is the role of the editor to select reviewers with expertise covering all the topics and methods of the paper, and to weigh their comments and expertise.
If you’re not clear on why you’re being selective, you’ll end up making poor choices. PLOS ONE (which I work for) has dispensed with the subjective ranking of papers, and our acceptance rate has stayed steady over the years despite rapid growth in both submissions and number of handling editors. Selecting on the basis of soundness and correct reporting seems to be relatively objective.
Thank you for your comments Matt, but that actually wasn’t my main point. I will update the post to clarify. When I was an editor, I didn’t just welcome referees’ technical criticisms. I also welcomed their views on why a paper was interesting or important. Oikos, the journal for which I used to edit, did not have PLoS ONE’s editorial policy and I wouldn’t have wanted to see them adopt that policy. Just because “importance” or “interest” is a judgment call on which referees (and editors) disagree doesn’t mean I think the judgment call shouldn’t be made at all, or that I think it is best left solely to readers.
One question I am always wondering about: how competent are average reviewers really? Often, papers I get to review are so complex and cover so many disciplines that I am NOT qualified for part of the paper. I always point this out to the editor, but will she or he send the paper out for ANOTHER review because of this? I think not…..
I can’t speak for other editors. But when I was an editor, I always made sure that, collectively, the referees’ expertise and my own expertise covered all the material in the paper. And as a referee, I do what you do: make clear what bits of the paper I’m qualified to review. That’s all you can do as a referee, and as an editor I always appreciated when referees did that.
well, that makes you a far-above-par editor 🙂 I’ve seen several cases where important aspects remained un-covered because each party assumed the other was taking care of it – as a result utter BS got published.
This is very scary indeed — no less so because it doesn’t really come as a surprise. As a rather junior researcher myself, I’ve always hoped that my elders’ implicit trust in the peer-review process is based on something solid that I’ve somehow missed, but this study seems to disprove that.
One horrible consequence of this is that it suggests the best way to get your papers into the high-impact journals that make a career (Science, Nature, etc.) is not necessarily to do great research, but just to be very persistent in submitting everything to them. Keep rolling the dice till you get a double six. I would hate to think that prestige is allocated and fields are shaped on that basis.
A few months ago I wrote on my own blog:
Well, you’re clearly someone who’d like to see more than 1/3 “signal” in referee reports! So I’ll ask the obvious follow-up: if 1/3 “signal” is too low, what, if anything, should be done? Should we go the PLoS ONE route and just ask reviewers to evaluate “technical correctness” (it’d be interesting to collect the data and see how much PLoS ONE reviewers agree on that…)? As indicated by my reply to a PLoS ONE editor above, I myself wouldn’t want to go that route. I’m not thrilled with the relatively low level of agreement among reviewers. But I still think the collective judgment that they and the editors come to serves a useful purpose.
But then, I’m old enough to have grown up in the traditional system and to be set in my ways, so perhaps my views on this should be discounted appropriately. 🙂
“You’re clearly someone who’d like to see more than 1/3 “signal” in referee reports!” — wait, are you saying that there are some people who do consider this acceptable? How can that be? The whole point of expert reviewers is to provide expert feedback, i.e. feedback which correlates strongly with what is actually correct. If we didn’t care about this, we might just as well ask random people on the street for reviews.
(For what it’s worth, I do greatly prefer the PLOS ONE approach to filtering and selection. But I think that’s somewhat orthogonal to what we’re discussing here, which is whether reviewers are capable of delivering a meaningful verdict.)
Mike, the 1/3 “signal” found in the linked study refers to agreement among referees in recommendations as to the fate of the paper (accept as is, reject, minor revision, etc.) Reviews can provide plenty of useful expert feedback even lacking any agreement on whether the journal should publish the ms. When I was an editor at Oikos, I often found that referees would disagree about whether the paper should be published, but mostly agree on what bits of the paper needed revising and in what ways.
I’m not sure that makes a big difference, Jeremy. Whether reviewers are evaluating whether my work is any good or whether it’s a good fit for the journal, the fact remains that 2/3 of their decision bears no relation to the work that I’ve done. I am baffled by the idea that anyone would consider that acceptable.
I would very much like to see the same experiment done for a PLOS ONE-style journal, so we can start disentangling how much of that 2/3 noise is inability to evaluate the soundness of a paper, and how much is inability to judge how well it matches a particular journal’s remit.
Again Mike, referee reports are about much more than whether a paper is a good “fit” for the journal. Referees make all sorts of comments on a paper, and recommend all sorts of changes and improvements, most all of which are totally independent of the journal to which the ms has been submitted (i.e. it’s not that the referees would make totally different comments, critiques, and recommendations for improvement, had the same ms been submitted to Oikos rather than Nature).
You seem to view referee reports as devoted to only two purposes: identifying purely technical mistakes, and advising the editor on whether the journal in question should publish the paper. Am I misunderstanding you? Because I see referee reports as serving additional, and more important, purposes besides those two.
No, it’s not that I think those are the only two purposes of peer-review. But they are the only two that the current study addresses (which is reasonable enough, since they are the only two susceptible to quantitative analysis, and even then they have to be combined).
So given that we now have evidence that the go/stop decisions are only 1/3 signal, what should we assume about the other components of reviews? Perhaps the null hypothesis is that they, too, are only 1/3 signal? It would be great if we could come up with an experiment to measure this.
But I have certainly had the experience, and I bet you have too, of different reviewers making opposite change requests — e.g. one wants me to cut a section completely while another thinks it’s the best thing in the paper and wants me to expand it.
Thanks for the clarification
Brian, Mike: I wouldn’t infer anything from the Welch study on the amount of agreement among referees in terms of their other comments. As I said in another comment, I’ve certainly seen cases where referees disagreed on whether an ms should be published, but largely agreed on what revisions were needed. But as you say, agreement or disagreement on comments like “You should expand your discussion of issue X” or “You should also present analysis Y” isn’t really amenable to quantitative analysis.
p.s. I don’t think the issue of whether to go with a PLoS ONE approach is orthogonal to the topic of the post. As evidenced by the comments of a PLoS ONE editor above, one of the main motivations for PLoS ONE choosing the editorial system they chose is the view that referee and editorial judgments on what’s most worth publishing are too random and arbitrary to be worth making at all.
Which is why I’d like to see the same study done on a PLOS ONE-like journal. My intuition is that the review signal:noise ratio in such venues would be better than 1:2, but by how much? (And in any case, this study is one more reason not to trust our intuition!)
I share the intuition that referees would agree more on whether a given study should be published in PLoS ONE. But as you say, intuition might be untrustworthy here. I myself have had a paper rejected from PLoS ONE! Obviously, I strongly disagreed with that decision–the paper was sound and not technically flawed (limited and boring, but sound). And I have a colleague who regularly publishes in highly-selective journals who also had a paper rejected from PLoS ONE that, in his view and mine, was technically sound.
For those interested in more empirical research on this topic, might I suggest:
Cole et al. 1981. Chance and consensus in peer review. Science 214(4523): 881-886. http://dx.doi.org/10.1126/science.7302566
Abstract
An experiment in which 150 proposals submitted to the National Science Foundation were evaluated independently by a new set of reviewers indicates that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends in a large proportion of cases upon which reviewers happen to be selected for it. No evidence of systematic bias in the selection of NSF reviewers was found.
Thanks Zen! I wasn’t aware of that work.
What a mess. Peer review is nothing but an utter mess of random subjectivity cloaked behind anonymity so as not to have to take any responsibility for this reality. WE DON’T NEED IT.
I don’t see how anyone can possibly look at a 1/3 agreement in manuscript verdict between reviewers and think that’s somehow acceptable. I’m with Mike 100% on this.
Ok Jim, you don’t like the current system. What should be changed? (I mean that as an honest question, not a rhetorical one) Do away with reviewer anonymity? Everyone just submit everything to PLoS ONE? Do away with pre-publication review altogether?
And would any of those changes (or whatever others you might suggest) eliminate randomness and subjectivity, or merely displace them? After all, reviewers are us. If reviews are mostly “random” and “subjective”, that means that, collectively, we all disagree with one another a fair bit about which papers are most worth paying attention to. That disagreement doesn’t go away, or become any less “subjective”, if it’s disagreement among readers rather than disagreement among pre-publication peer reviewers. See this old post for elaboration.
Excellent questions Jeremy, requiring a considered response.
I agree with Jim and Mike on this and think a portion of our disagreement is due to something you’ve mentioned twice now – “which papers are most worth paying attention to” and “as a reader with very limited time to read, I’m mostly ok with letting selective journals and their referees do a lot of “filtering” of the literature for me.” As a scientist (a paleontologist, like Mike), I feel the need to pay attention to every paper on any given topic I’m working on. I would never ignore a paper based on its journal, nor would I want correct but “un-interesting” data hidden away in someone else’s computer where I can’t use it. I can’t see how it’s in my interest to hold back any data that is correct. It would be nice if peer review held back inaccurate work, like papers written by the crackpots in my field or wrongly coded data matrices, but it doesn’t as far as I can tell, and both examples are found in even the most prestigious journals.
Thanks for your comments Mickey. Like you, I pay attention to every (ok, most every) paper on whatever topic I’m working on (e.g., spatial synchrony of population dynamics in my case). I suspect the same is true for most ecologists. But that far from exhausts the literature that I want to keep track of, and again I suspect the same is true for lots of people. I’d like to think of myself as broadly in touch with what’s going on in all sorts of areas in which I am not actively working. For instance (to pick just one example of many), my recent post on the impact of Mayfield and Levine (2010) on phylogenetic community ecology, a topic on which I have published nothing and have no plans to do so.
If all I needed to keep track of was “papers directly relevant to whatever topic I’m working on”, I could just do keyword searches on Web of Science or Google Scholar. That approach is not feasible if what one wants to keep track of is, say, “ecology and evolution”.
In that case, our difference may be that I think our system of publication/review should be catered toward what’s best for researchers doing more science as opposed to what’s best for readers keeping up with subjects they don’t work on.
Hope it’s OK if I jump in on this. I’ll be interested to see Jim’s considered responses, but in the mean time hopefully I can provide some food for thought, even if we disagree.
Top of the menu for me is a cultural change. In my field (vertebrate palaeontology) there is a very strong tendency for people to equate having gone through peer-review with a stamp of quality. So it becomes a binary property of a paper: peer-reviewed or not. We routinely hear people say things like “there’s no point thinking about the issue that manuscript raises, it’s not been through peer-review yet”. That’s wasteful and inefficient; but the converse, which also happens, is actually dangerous: people assume that if a paper has made it through review, then it’s correct-until-proven-otherwise. If we could just get rid of those flawed assumptions, quite a few of my problems with peer-review would go away.
(It occurs to me that maybe part of the reason you and I don’t see eye to eye on this subject is that your discipline may not have this “peer review is the gold standard” assumption that mine does — which would make the arbitrariness and wastefulness of the current system less appalling.)
And now to the suggestions you offered:
Absolutely, yes. Unaccountability is always a recipe for abuse. I understand the reasons why anonymity is sometimes desirable, but it does far more harm than good.
Taken literally, of course this would be bad: a publishing monoculture wouldn’t be to anyone’s benefit.
But more broadly, I would be very happy to see all papers published in megajournals with PLOS ONE-like selection criteria (i.e. scientific soundness matters, guesses as to how impacty the paper is going to be do not). It’s been many years since I read a paper because of what journal it’s in. That kind of branding is useless to me as a scientist. (It may be of value to administrators looking for a quick and easy way to evaluate scientists, but I have no interest in providing measures that enable them to reach the wrong conclusions more quickly.)
This is the nuclear option, isn’t it?
I do think pre-publication review has significant value. On that basis I want to retain it. On the other hand it also has significant cost, including some costs (such as reformatting when sending to a different journal after a rejection) that are just plain stupid for scientists to be spending their time on.
I am honestly not sure whether the benefits outweigh the costs.
p.s. to Mickey: re: technical errors sometimes making it through peer review, yes, that happens, even at leading journals. But pre-publication review also catches a lot of errors. And if you think technical errors would be caught as well or better by other systems, well, I strongly disagree. See this old post: https://dynamicecology.wordpress.com/2012/10/11/in-praise-of-pre-publication-peer-review-because-post-publication-review-is-an-utter-failure/
I do have to agree that so far, most forms of what we’d classify as post-publication peer-review have not worked at all well. In particular, the commenting facility at PLOS is horribly underused, and even when it’s used at all it doesn’t work well in terms of catalysing a discussion — all you get are little sermons.
But that doesn’t necessarily mean post-pub is the wrong idea; only that we’ve not yet found the right approach. It may not be helping that we currently invest so very much effort into pre-pub that potential reviewers don’t have a lot left in the tank.
To be clear, I am not at all saying that I know what the answer is here.
Another reason for the low agreement between reviewers may be partly because editors deliberately seek out reviewers that have differing points of view and different areas of expertise.
Good point. Not sure how often editors seek out deliberately-contrasting views when choosing referees. But it certainly could be a by-product of seeking out complementary expertise, which is certainly something I did a lot when I was a handling editor.
Well, hang on. “Complementary” doesn’t mean “opposing”. If I were handling a manuscript on the histology of sauropod cervical ribs, I might choose one reviewer with histo experience and one who knows sauropods, but that would hardly be a reason to expect them to reach different conclusions on the quality of the submission or its suitability for the journal.
Hi Mike,
It may not necessarily be the case that those with complementary expertise will have opposing views on whether an ms should be published, and no one said that. But for various reasons that I’m sure you can imagine, it’s quite plausible. For instance, someone who finds no problems with the ms in their area of expertise might well recommend publication, while a second reviewer with different expertise might have criticisms in other areas and so recommend revision or rejection. Someone with expertise in one area might want to see the ms focus mainly on that area and so recommend revision, while another in a different area might feel the ms is fine as is. As an editor at the general ecology journal Oikos, I often found that referees with specialist expertise on certain organisms, who rarely if ever published in ecology journals, gave me different accept/revise/reject recommendations than those with general ecological expertise. I’m sure any of us can imagine all sorts of scenarios along these lines, all of which are more likely (not certain, but more likely) to occur where referees are chosen for their distinct, complementary expertise rather than “at random”.
I agree with all of this.
But it all seems like more of a reason to dump the current system. As you say, the people who are acting as a filter for us are us. So there’s no reason to expect them to do a better job of filtering for us than we would do for ourselves. Instead, we have randomly chosen people filtering on different criteria than we would use, because they are interested in different things. Really, who is better placed to determine whether or not I would find it useful or interesting to read a study on the histology of fossilised sauropod cervical ribs than I am? Much better just to publish it (provided it’s not pure junk, of course) and leave it for me to decide whether I want to read it.
Why do we even need a consensus on what’s interesting or important? If it’s interesting to you and not to me, that’s no reason for me (if I’m a peer-reviewer) to block you from getting to see it.
Just publish.
But my colleagues’ interests and judgments about what’s “interesting”, “important”, “novel”, etc. do overlap to some greater or lesser extent with my own. They’re filtering on different criteria than I would–but not completely different criteria. And collectively, my colleagues can read a lot more stuff than I can personally. So I’m happy to trust them to do my filtering for me.
Further, I’d rather trust the filtering provided by my colleagues when acting as pre-publication reviewers for selective journals, because when acting in that role my colleagues (like me) read carefully and critically. When not acting in that role, my colleagues (like me) read much differently. We read casually and quickly, we often just read the abstract and skim the figures, etc. Which is why I would prefer not to rely on post-publication filtering provided by my colleagues, as indexed by, e.g., metrics of paper views or downloads, comments on the paper, links to the paper, Facebook shares of the paper, or whatever. I freely grant that there’s room for reasonable disagreement on this. Different strokes for different folks and all that. But as a first-pass filter (and that’s all it is–it’s just one way, among others, of narrowing the field of papers I might choose to read myself, in order to keep up with the broader literature), I personally would rather trust the judgment of a very small number of colleagues who have read a paper carefully and critically than the judgment of a large number of colleagues who are reading casually.
As far as the need for a consensus, sure, if you don’t give a crap about the consensus, ignore it. Don’t bother to read Nature and Science and PNAS and etc. No one’s forcing you to. It’s simply up to you to decide if there are costs to that approach, and if those costs are worth paying. Personally, I would feel like I was uninformed of what was going on in ecology if I made no particular effort to take note of ecology published in Nature, Science, PNAS, and leading ecology journals, and I think at least some of my colleagues would feel me to be uninformed (or perhaps very narrow-minded). I do not want to feel uninformed about the broad field of ecology, and I do not want my colleagues to think me uninformed or narrow-minded. If you think that makes me a slave to peer pressure, well, I’m not happy that you feel that way about me, but you thinking me a slave to peer pressure is a cost I’m prepared to pay.
We now know what that extent is, and it’s lesser: 1/3. (Or worse: the authors of the paper discuss two reasons why their figure is an upper bound on the true amount of non-random agreement.)
Putting it another way, if reviewers at a high-impact journal are recommending acceptance for one paper in ten, the chance that they’ll pick the one you would have been most interested in is not good at all.
Again, that makes perfect sense to me. What doesn’t make sense to me is that you’d think the way to keep up with your field is to read the articles that make it into a particular set of journals.
When you look back at the articles from ten years ago that have actually had a legacy — that people are still citing ten years on — what proportion of them were in Nature, Science or PNAS? This may differ between fields, but in palaeo I would estimate that it’s perhaps one in 20 or 30 influential papers.
Again: if you want to be truly informed, and not merely fashionable, you need to read what’s good, not merely popular.
Sorry Mike, I think we’re going in circles at this point. I’m afraid I’m unsure what else to say in order to clarify my views. It’s been a very good discussion, and judging from some tweets I’ve seen, other readers have found it valuable as well. But I’m afraid I don’t know what else to say that would keep the discussion moving forward rather than going in circles. If I can think of something new to say or some very different way of putting my point of view that might “click” with you, I’ll come back and comment further. But until then I think it’s best if I sign off.
Agreed, I think we’ve reached the end of the road. Thanks for the discussion, it’s been good.
BTW., on re-reading my comments here, I realise I’m coming over as rather insulting and supercilious. My apologies for that. It’s really not my intention, I am just writing in a rather tone-deaf way. Hopefully you can pick out the content and ignore the snark.
No worries, Mike, and no apology needed, at least not to me. I haven’t found your comments insulting in the least. As far as I’m concerned, we’ve been having a vigorous but perfectly productive and professional discussion (for which thanks very much!). As I said in my reply to your previous comment, I do feel like it’s best if I drop out of the discussion now, at least until I think of some way to move things forward. Perhaps others will chime in and keep the discussion going. But I’m absolutely not stepping aside because I found anything you said to be insulting or supercilious. It really is just that I’m not sure what else to say that I haven’t said already.