Post-publication review is still only for the scientific 1%, PubMed Commons edition

One function of pre-publication review is redistribution of attention: it ensures that every paper gets closely read by at least a couple of experts, and that none gets read by many (no paper ever gets hundreds or thousands of pre-publication reviews). Post-publication, attention (by any metric you care to name) is very highly concentrated: a small fraction of papers (the “scientific 1%”) attracts a large fraction of scientists’ collective scrutiny. Which is one reason why I’ve long been a skeptic that post-publication review, in the form of uninvited reviews or other comments, can ever replace (as opposed to supplement) pre-publication review. Under post-publication “review”, the vast majority of papers don’t get reviewed in any meaningful way (e.g.).

As evidence for this view, I and others have noted that many journals have long had online systems allowing comments on their papers, but those systems are little used and the vast majority of papers attract no comments. Some advocates of post-publication review recognized that, but chalked it up to the clunky design of journal commenting systems, and/or the fact that no single system allowed people to comment on papers from many journals.

The advent of PubMed Commons a couple of years ago put our views to the test. PubMed Commons allows comments on any paper indexed by PubMed–which is pretty much every biomedical paper and then some. And it was proposed and designed by very sharp, prominent people who think that the key obstacles to a culture of post-publication review are technical.

Turns out I was right. I searched PubMed Commons for all articles that have comments (conveniently, PubMed Commons provides a link on its homepage to conduct this very search!). The search returns 3082 articles. By contrast, PubMed indexed 1,186,208 articles in 2014 alone.*
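For anyone who wants to reproduce or update these counts programmatically, here is a minimal sketch using NCBI’s E-utilities. The esearch endpoint and the [pdat] date field are standard; the has_user_comments[sb] filter token is my guess at how the Commons comment filter is exposed to search, so treat that term as an assumption.

```python
# Count PubMed records matching a search term via the E-utilities esearch API.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "rettype": "count",
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

# "has_user_comments[sb]" is an assumed name for the Commons comment filter.
print(pubmed_count("has_user_comments[sb]"))  # articles with comments
print(pubmed_count("2014[pdat]"))             # articles published in 2014
```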

I can imagine various responses to this, if you think post-publication review can and should replace pre-publication review (and yes, there are those who think that). Probably some of my imagined possible responses are ones that nobody would actually make, but I’ll just list them anyway.

  • You could argue that the problem is that there are now too many centralized hubs where people can comment on published papers, causing many people to just not bother commenting because they’re not sure of the best place to do it. I highly doubt it. If you surveyed scientists and asked them why they don’t do post-publication review, they mostly wouldn’t say “because I’m not sure of the best place to do it.”
  • You could point out that the number of commented articles on PubMed Commons is growing exponentially, which it is. To which I’d respond that at this rate it’d take many years at best for the majority of PubMed articles to be commented on, depending on the growth rate of the entire PubMed database and on whether the growth rate in the number of commented articles ever slows down (see the back-of-envelope sketch after this list). But we’ll see, I guess.
  • You could claim that the vast majority of published papers don’t receive serious pre-publication peer review either. That claim suffers from having no evidence to support it.**
  • You could acknowledge the issue and try to fix it by urging everybody to please publicly share all their private discussions about published papers (all your journal club meetings, email exchanges, Facebook exchanges, post-seminar questions, private face-to-face conversations…). To which: good luck with that, you’ll need it. Plus, what makes you think that most papers get thoroughly discussed in private? People’s journal club meetings, emails, etc. are mostly about the same high-profile papers that get the bulk of the attention by other metrics.
  • You could argue that the issue isn’t important enough to be worth worrying about. I disagree, of course, but beyond that I’m not sure how to respond because that argument implies a totally different vision of what peer review is for.
  • You could grant the point and use it to argue for some post-publication system of invited reviews.
  • Or you could take what I think is the reasonable view that post-publication review can be a useful supplement to pre-publication review, but mostly for high profile papers.***
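Here’s the back-of-envelope sketch promised in the second bullet above. Only the 3,082 count comes from the search; everything else is an assumption: commented articles keep doubling annually (generous), PubMed holds roughly 25 million records, and it adds about a million per year.

```python
# Assumptions: annual doubling of commented articles; ~25 million PubMed
# records, growing by ~1 million per year. Only the 3,082 count is observed.
commented = 3082
total = 25_000_000
years = 0
while commented < total / 2:
    commented *= 2          # commented articles double each year
    total += 1_000_000      # the database keeps growing too
    years += 1
print(f"~{years} years until a majority of articles have a comment")
# prints ~13 years, and that assumes implausibly sustained doubling
```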

But quite possibly, there are other responses I haven’t thought of. Which is what the comments are for. Looking forward to them, as always.

p.s. I freely admit that this post focuses on one narrow issue with post-publication review. Please don’t read it as a blanket condemnation of any and all post-publication review, because it’s not.**** I’m just writing about one issue that I think is important enough to be worth discussing, that isn’t discussed as much as it should be*****, and on which I could easily obtain a bit of data.

p.p.s. Before anyone asks, I also tried to figure out how many papers have PubPeer comments. I couldn’t figure out how to extract those data, but if anyone knows, please tell me. Given that there are recent comments over there from PubPeer users, remarking on how most articles never get commented on PubPeer, I’m guessing that looking at PubPeer data would reinforce my argument. But happy to be proven wrong.

*So maybe we should be talking about post-publication review being for the scientific 0.01%. Note that the fraction of papers that get cited at least once, while infamously low, is over three orders of magnitude higher. So at least judging from PubMed Commons data, post-publication review attention is much more concentrated than post-publication citations.
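For concreteness, here is the arithmetic behind that footnote, under stated assumptions: roughly 25 million PubMed records in total, and a cited-at-least-once fraction that I’ll put at 50% purely for illustration (published estimates vary widely by field and time window).

```python
# Rough check of the "three orders of magnitude" comparison. Both the
# 25 million total and the 50% cited fraction are assumptions.
commented_fraction = 3082 / 25_000_000   # ~0.012%: the "scientific 0.01%"
cited_fraction = 0.50                    # assumed; plug in your own estimate
print(f"commented: {commented_fraction:.4%}")                 # 0.0123%
print(f"ratio: {cited_fraction / commented_fraction:,.0f}x")  # ~4,000x, 3+ orders
```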

**No, the fact that a tiny but growing proportion of papers get retracted does not show that the vast majority of papers receive only cursory pre-publication review.

***Especially if we develop agreed norms and practices to govern it. But I freely admit I have no idea how to do that.

****I’m not opposed to any and all post-publication review. Quoting myself: “[A]dvocates of post-publication review are rightly impressed with high-profile cases in which it works. It’s very rare for the scientific record to rapidly self-correct, so we should be glad when it happens.” I’m just not a utopian. I don’t like focusing exclusively on the upsides of any new thing while hoping that the downsides will somehow get addressed. Sorry. I’m aware that can make me a downer sometimes.

*****Well, as far as I know it’s not discussed all that much. But I don’t often lurk in the sorts of forums where those discussions would be most likely to happen. So maybe I’m unaware of tons of discussion of the fact that most papers get no post-publication review. FWIW, I did a bit of googling, and didn’t turn up much. Just a few posts (e.g., this, this, and this) raising the issue, often along with other issues, without much in the way of plausible solutions from advocates of post-publication review. I did find the suggestion that somebody (it’s not clear who) should increase scientists’ incentives to do post-publication review. Overlooking that there’s already one incentive to do it, but it’s not a very nice one.

38 thoughts on “Post-publication review is still only for the scientific 1%, PubMed Commons edition”

  1. Not surprising at all. Scientists don’t have much time to do reviews, and reviewing produces no reward. Only pathological papers will attract enough attention to get spontaneous peer review (and blog posts, by the way).

  2. I think the biggest impediment to an active post-pub review process is the presence of a perceived-decent pre-pub review process. Couple that with the fact that you get even less recognition for post-pub review than for pre-pub review, and the incentive is just not there.

    Here’s my (fuzzy) image of a world with good post-pub review. People post papers to something like ResearchGate where they are open access. Community members can comment and score papers in a forum style, like StackExchange, where other members can up/downvote or reply to comments. The authors have the opportunity to modify their manuscript in response.

    On the reviewer side, there is incentive to comment/score manuscripts because you have a reviewer score that can be cited on your CV (see StackExchange). Further incentives are necessary. First, it would be important to incentivize follow-up reviews, after manuscripts are revised — offer more reviewer points for follow-ups, or track follow-ups, which show follow-through as a reviewer. Second, to ensure a better distribution of reviews across papers, there would need to be an incentive to review under-reviewed papers — more points for a first and second review, or something.

    But which papers should we read? Papers could be ranked based on the reviews: a good review from an influential person (based on, e.g., impact score in RG) would increase a paper’s score more than a good review from a less influential person. Same idea for bad reviews. Paper scores can also be changed by reviewers in response to revision.
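    To make the ranking idea concrete, here is a minimal sketch of that influence-weighted scoring rule (every name, weight, and verdict below is hypothetical; none of this is a feature of ResearchGate or StackExchange):

```python
# Hypothetical influence-weighted paper scoring, as described above.
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    influence: float  # e.g., an RG-style impact score

@dataclass
class Paper:
    title: str
    reviews: list = field(default_factory=list)  # (Reviewer, verdict) pairs

    def score(self) -> float:
        # Good reviews add, bad reviews subtract, weighted by reviewer influence.
        return sum(r.influence * (1 if verdict == "good" else -1)
                   for r, verdict in self.reviews)

paper = Paper("Example preprint")
paper.reviews.append((Reviewer("A. Bigwig", influence=5.0), "good"))
paper.reviews.append((Reviewer("B. Newcomer", influence=1.0), "bad"))
print(paper.score())  # 4.0: the influential reviewer's verdict dominates
```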

    All this ranking and scoring, especially the use of personal impact scores, is bound to be a little cringe for the modest among us. But I think it’s worth keeping in mind that all this ranking exists, it’s just typically not enumerated on a CV. There’s a reason I don’t get asked to review for Nature or Science — it’s because the editors don’t think I’m high impact enough, which they know because of the number and impact of my publications.

    • “I think the biggest impediment to an active post-pub review process is the presence of a perceived-decent pre-pub review process. ”

      Hmm, not sure about that. I bet that a lot of people who do comment on papers post-publication are only prepared to do so because they know those papers have already been reviewed.

      “On the reviewer side, there is incentive to comment/score manuscripts because you have a reviewer score that can be cited on your CV ”

      And who cares about that enough to make a material difference, and why should they? I’ve sat on faculty position search committees. Nobody really cares how much reviewing you do. Certainly not relative to how much they care about things like publications, grants, awards, your research statement, your teaching philosophy… And to the extent that they do care, they like to see that you’re being *invited* to review by selective journals. Because that’s evidence of your standing in the field. It shows that other good people respect your opinions and want to hear them. None of this has anything to do with the lack of a quantitative metric of reviewing. You could say the same for other people tasked with evaluating scientists, such as granting bodies. Why should NSF or NERC or NSERC care one whit about how much pre- or post-publication reviewing I do, whether it’s quantified by a metric or not? Their job is to fund scientific research, not to ensure that people do reviewing. Same for, say, my employer. I’m explicitly obliged to do teaching, research, and service to keep my job. But service is considered the least important of the three, and doing reviews is only one small part of service.

      “Second, to ensure a better distribution of reviews across papers, there would need to be an incentive to review under-reviewed papers — more points for a first and second review, or something. ”

      That’s awfully easy to game. And to the extent that, somehow, people do start caring about their scores, people *will* game it.

      Sorry to be a downer. You’ve laid out what I take it you consider an ideal world. But I don’t think you’ve laid out a way to get to that world from our existing world. I wouldn’t say that the sort of radical shift in culture and practices that you’re suggesting is impossible. But I think if it does happen, it will take a *very* long time–more than a decade or two, I’d say.

      • “I don’t think you’ve laid out a way to get to that world from our existing world.”

        We’re already on our way, and two decades is not a long time IMO. Look at all the increased functionality on sites like ResearchGate, including Open Review, Comments, posting of unpublished work. Some journals are offering review publication. Publons is a thing now. I’ve recommended some ways to incentivize our community to adopt these features, but I think it will only take time, *assuming* people allow it — when I propose these ideas to colleagues, many become emotionally antagonistic, which is something I really don’t understand, and an attitude that I think stands in the way of progress. As though I’m attacking the institution of Science or something.

        Here’s a thought experiment: What would happen if a very influential scientist decided that, instead of submitting their next “high-impact” manuscript to Nature, they posted a final draft to ResearchGate? Would the manuscript get noticed? Might people feel compelled to comment/review it? Who would feel compelled to do so? Would that paper be doomed to perpetual obscurity? Would it never receive enough review for people to feel confident it’s solid science? I think somebody should try it, but I’m not influential enough — Jeremy, Megan, Brian, you’re up!

      • “What would happen if a very influential scientist decided that,…”

        Plenty of people already post to preprint servers, particularly in certain fields. I think their numbers will grow slowly in other fields, and I suppose that growth might be accelerated a little bit if famous people led the way. But that’s changing the question, surely. The question I asked in the post is not about where people publish scientific papers, it’s about how to ensure that most or all scientific papers get read carefully by at least a couple of qualified people. (well, unless you want to follow Jeff Houlahan’s tentative suggestion that maybe it’s fine if most papers never get read carefully by anyone, at any point).

        As Mike Fowler already noted, the vast majority of preprints attract no substantive attention. There’s no reason to think that would change automatically if more people posted more preprints. Indeed, if anything I’d expect it to work the other way–distributions of attention in many areas of life tend to get *more* inequitable and concentrated as the number of things that people could pay attention to increases. (e.g., the distribution of website traffic was less skewed back in the early-to-mid 90s, when there were many fewer websites). And while I agree with you that preprints by famous people tend to attract more attention than preprints by unknowns, that just illustrates my point, so I’m very unclear why you raised that example.

        I appreciate your comments, but I’m afraid that they nicely illustrate why I personally don’t post much on these issues. In my admittedly-anecdotal experience, advocates of post-publication review run together lots of issues that in my view are best kept separate.

        Of course, my personal preferences about what sorts of conversations to have don’t matter to anyone but me. The important question is whether this tendency to run together different issues holds back or accelerates the adoption of new publishing and reviewing models. I’m not sure of the answer. Could be some of both. On the one hand, if you run together different issues, you tend to misdiagnose problems and their solutions, which holds you back in various ways. You waste resources trying to solve the wrong problems, or trying to solve the right problems with solutions that are bound to fail. PubMed Commons, for instance, must’ve taken a fair bit of time, effort, and money to build–and in two years there are comments on a trivially small fraction of papers (and the bulk of commented papers just have a single, often non-substantive comment or two, btw). As another example, if like another commenter you misdiagnose the fundamental problem as “arbitrariness” or “lack of democracy” in review and publishing, you’re going to be sorely disappointed when you discover that alternative models are no less arbitrary and no more democratic. And by running together and misdiagnosing different issues, you fail to get buy-in from people like me, who have specific concerns that they’d like to see addressed before they adopt the publishing and reviewing model you’d like them to adopt.

        On the other hand, social change is rarely fast or easy. And you can argue that it only happens if some people are unreasonably confident that it will happen. Unreasonably willing to ignore or gloss over obstacles and criticisms. Unreasonably sure that, if you just “move fast and break stuff” (as the Silicon Valley slogan goes), other people will either find a way to figure out how to fix what you broke, or will stop caring that you broke it. So I dunno. Maybe if the whole world consisted of incrementalist, small-c conservatives like me, things wouldn’t improve as fast as they could. (and conversely, hopefully the existence of some small-c conservatives helps ensure that things don’t change too fast, so that we don’t end up breaking lots of things that aren’t broken.)

    • “I think the biggest impediment to an active post-pub review process is the presence of a perceived-decent pre-pub review process.”

      Repeat the analysis on arXiv, or bioRxiv. From memory, the (public) commenting rates on arXiv are not really any better than those for published articles. That’s from a community that happily and actively uses pre-print servers to post their work.

  3. Jeremy, I don’t think that my comment would fall under any of those you mentioned – if we had a system where scientists knew that the only review would be post-publication and still only 1% were reviewed it would suggest that the vast majority of publications get so little attention that they really don’t warrant review. I hear statistics thrown around that imply that the vast majority of papers have very few citations – does it make sense that each of those papers got several hours of attention from 2-3 scientists when almost nobody is using those papers to inform their work? I’m not sure I’m in favor of post-publication review but it does seem to allow for the possibility that many papers contribute so little that the effort invested in reviewing them outweighs their contribution. Jeff H

    • “it would suggest that the vast majority of publications get so little attention that they really don’t warrant review. ”

      I was waiting for somebody to suggest this argument. The argument that the concentrated distribution of attention itself is a form of “review”.

      I’m not sure how to respond to this myself. I think a lot depends on how attention would get concentrated in a hypothetical world with no pre-publication review. Hard to imagine how that would work, since it could be quite different than in our current world. In our current world, the distribution of post-publication attention has a lot to do with what happens pre-publication (e.g., a lot of post-publication attention accrues to papers that made it through pre-publication review at top journals).

      It’s puzzling to me that advocates of post-publication review often talk about how, in a world without pre-publication review, everybody will pay a lot of attention to whatever papers leading experts recommend on social media. Which is somehow supposed to be radically different and much better than…everybody paying attention to whatever papers the leading experts who sit on leading journal editorial boards “recommend” by accepting those papers for publication.

      • Actually, we do know what this will look like: Facebook. A minority of papers will get reviewed while all the rest will be buried in the noise.

      • Yes – it will remain concentrated as Yvan said.

        It will also probably be much more stochastic and random – just like which videos on YouTube go viral. Content certainly matters, but it depends on a lot of little chance events: timing, which person with lots of Twitter followers retweets it, etc. Not my idea of the best replacement for careful, top-down-managed peer review.

      • @Brian:

        “It will also probably be much *more* stochastic and random” (emphasis added)

        Yeah, that’s my guess too. Any self-reinforcing viral process with a stochastic component (like social media sharing) is going to be *really* stochastic. The positive feedback loops that make it viral also are noise-amplifying. Social media is one big Matthew Effect.
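        To see how noise-amplifying that feedback loop is, here is a quick Pólya-urn-style simulation of rich-get-richer attention. All parameters are illustrative, not estimates of anything real.

```python
# Preferential attachment: each new unit of attention goes to a paper with
# probability proportional to the attention that paper already has.
import random
from collections import Counter

def top_1pct_share(n_papers=1000, n_views=100_000, seed=None):
    rng = random.Random(seed)
    urn = list(range(n_papers))      # every paper starts with one "ticket"
    for _ in range(n_views):
        paper = rng.choice(urn)      # drawn in proportion to tickets held
        urn.append(paper)            # rich get richer
    counts = sorted(Counter(urn).values(), reverse=True)
    return sum(counts[: n_papers // 100]) / len(urn)

# All papers start equal, yet the top 1% end up with several times their
# "fair" share, and that share swings from run to run: early luck is
# amplified by the feedback loop, not averaged away.
for run in range(3):
    print(f"top 1% share: {top_1pct_share(seed=run):.1%}")
```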

        And to the extent that’s not true, I think that what goes viral may well be predictable but in unfortunate ways. My favorite, admittedly-anecdotal example, is the most-viewed and most-downloaded Plos One ecology paper. It’s about fellatio in bats. I think it’s fairly predictable that a paper with the word “fellatio” in the title would go viral and end up being viewed hundreds of thousands of times. Of course, advocates of post-publication review would no doubt counter that “glamour mags” like Nature publish “sexy” but insubstantial papers too.

      • I’m not convinced that eliminating pre-publication review would result in a better review process – in fact, it’s a bit difficult to imagine how it would given that we have both pre- and post- now and would be removing one of them. So, my point isn’t that eliminating pre-publication review would improve the process – it’s that the cost to the quality of the review process might be offset by the reduced workload. That is, that the time saved doing fewer reviews could be spent on things that (perhaps) make a greater contribution to scientific progress. I’m not sure this is a compelling argument for eliminating pre-publication review and it strikes me as difficult to test…but it seems, at least, plausible.

      • I would agree with Jeff Houlahan that the increase in the number of papers has created a crushing review burden, and a pretty large number of these papers are inconsequential and probably don’t merit review. Or maybe they are consequential and still don’t merit review. The main thing is that I think we need to find ways to reduce the time burden that scientists face, and reducing the amount of peer review has to be part of that, I think.

      • @Arjun:

        Well, at leading journals that’s effectively what happens already. Many papers get rejected without review.

        A long time ago, I proposed what’s effectively a rule to ensure that individuals’ reviewing effort matches the amount of submitting that they do. Which individuals can ensure either by doing a lot of reviewing, or by submitting less (and so presumably only bothering to submit their best stuff). I proposed the rule in part because I personally don’t like getting editorial rejects and would like to see a world where they’re not necessary. It didn’t go anywhere, though something like it is now part of a couple of reformist publication operations (a sketch of the implied bookkeeping follows these links):

        An Oikos editor, and a former editor, are fixing the peer review system

        Great minds think alike (when they're trying to fix peer review)
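        To make that rule concrete, here is a minimal sketch of the bookkeeping it implies. The exchange rate is an assumption on my part (the linked posts describe the actual proposal): each submission generates work for about three reviewers, so submitting costs three credits and each completed review earns one.

```python
# Hypothetical review-credit ledger for a "review as much as you submit" rule.
SUBMISSION_COST = 3  # assumed average number of reviewers per submission
REVIEW_CREDIT = 1

class ReviewLedger:
    def __init__(self):
        self.balance = 0

    def record_review(self):
        self.balance += REVIEW_CREDIT

    def record_submission(self):
        if self.balance < SUBMISSION_COST:
            raise ValueError("review debt: review more before submitting again")
        self.balance -= SUBMISSION_COST

ledger = ReviewLedger()
for _ in range(3):
    ledger.record_review()
ledger.record_submission()  # fine: three reviews pay for one submission
```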

        At least in ecology, I’m not sure that the burden on reviewers is yet at a crisis point (perhaps because of increasing use of desk rejections to hold things together). I used to think it was at a crisis point, but when I collected some data, I was somewhat pleasantly surprised:

        Do individual ecologists review in proportion to how much they submit? Here’s the data! (UPDATEDx4)

      • @Jeremy: In principle, I like the idea of everyone reviewing proportional to their output. In practice, this ends up exerting pressure down the academic food chain, and I think this is unlikely to change because, as you say, that’s how people are. In my field, there has been a huge increase in the number of papers, and even with desk rejections, *someone* is reviewing those papers.

      • ” In practice, this ends up exerting pressure down the academic food chain”

        I don’t quite follow (it’s early where I am…). Are you suggesting that, in practice, this rule won’t be followed, with better-established people failing to pull their weight?

        “even with desk rejections, *someone* is reviewing those papers.”

        Yes, it’s certainly possible that desk rejections are merely a stopgap and that at some point there will be too many submissions to keep up with even via desk rejections.

      • Ah, sorry, I was unclear, but you got my meaning: yes, I think the bigwigs are definitely not pulling their weight. I think there was some study putting numbers on that a while ago. Which is fine, sure, there’s a food chain, but at some point it becomes untenable.

  4. Although the post addresses some of the reasons why the currently practiced post-publication review process (PPRP) does not work, it misses the most fundamental one: the current system of giving away scientific credit sucks. Just imagine: if no one paid any heed to where you have published your research papers and to the impact factor(s) of the journals, it would be a completely different ball game, wouldn’t it? The current system of PPRP obviously does not and will not work unless we fix the underlying cause and get to a meritocratic and democratic way of judging scientific output. Once the underlying problem is fixed, everything, including the PPRP, will work. Of course, there are a lot of loose ends to tie, and the PPRP currently in place is far from perfect, but we must address the underlying cause and not try superficial solutions to a wound that is deep. The current system of giving away scientific credit is undemocratic, feudal, and unscientific (http://ow.ly/SlfMn), and that must change first, before anything else. Thank you.

    • Thanks, glad you liked the post and pictures.

      Afraid I can’t promise to engage in comment threads on other sites. But I had a quick look, and basically all I can say is that your experience with pre-publication review is quite different from mine. Pre-publication reviewers at the journals to which I submit, for which I review, and which I edit are almost always quite thorough. That may of course reflect the journals I tend to submit to, review for, and edit for. That’d be journals like Science, Nature, PNAS, Ecology Letters, Proceedings B, Ecology, Am Nat, Ecological Monographs, J Anim Ecol, J Ecol, Funct Ecol, Trends in Ecology and Evolution, Ann Rev Ecol Evol Syst, and Oikos.

    • One more brief response to your comment: as you’re probably aware, both non-anonymous reviewing and open reviewing make many people refuse to review. Not many people are prepared to write critical things for public consumption under their own names.

    • Thanks, had a quick look. It’s interesting. If I understand correctly, the problem they’re trying to solve is bringing the cost of traditional scientific publishing way down. So they’ll let arXiv host the articles, and then act as a filtering service. But the way they run that filtering service looks *totally* traditional to me. Any author of an arXiv preprint on discrete analysis can submit their preprint to the journal (heck, they’re not even searching arXiv for candidate articles themselves–they’re making authors submit them). The journal has the usual sort of editorial board, reviews the preprint in the usual way–invites referees and so on–and decides on it in the usual way. In particular, note that they only want to publish “interesting” work. So they are definitely *not* taking the view that we ought to just publish everything on arXiv and then let “the crowd” sort it out “democratically” via social media sharing or PubPeer comments or whatever. Indeed, they say that they’re trying to be as “conventional” as possible.

      This is the sort of reform or experiment that I tend to like. A narrowly-focused experiment designed to solve one well-specified problem, while changing or breaking other stuff as little as possible. That’s what evolution by natural selection does (macromutations, and mutations with lots of pleiotropic effects, are less likely to increase fitness). If it’s good enough for evolution it’s good enough for me. 🙂

      My one technical question: they say the papers will only be “published” in their journal. I’m not clear how that can be the case. They’re arXiv preprints; anyone can look at them or link to them. Why couldn’t somebody else start their own competing arXiv overlay journal on discrete analysis in which authors could also “publish” their papers again if they so chose? I’m not saying anyone should do that, I’m just unclear what would stop them.

    • The more I think about this, the more I like it. In part because I think it’s a great troll of people who want to tear down the whole scientific publishing and evaluation ecosystem and start over. I’m being a bit facetious here, of course–I know it’s not intended as trolling. But in a way, it kind of is trolling. On the one hand, this arXiv overlay journal gives the revolutionaries something they really want–free publishing for both authors and readers–and does so by building on something they love–arXiv. But on the other hand, it accomplishes that by using and thus entrenching practices they hate, namely peer review by a couple of anonymous invited experts, and publishing decisions by an elite editorial board.

      It’s always an interesting dynamic to watch when revolutionaries are offered some but not all of what they want, in a way that makes it less likely they’ll eventually get all of what they want. So I’m rooting for this to really take off, just so I can watch people’s reactions. 🙂

      • What I like about the new journal there is that it unbundles things that no longer need go together in the modern world. Before the Internet, “publishing” in a journal was both: 1) a certification system by editors and peers who were attesting that your work was worthy, and 2) a way to get your work printed up and distributed so that other people could read it.

        Now that we have the Internet and various repositories/archives, there is not as much (if any) need for “journals” to fulfill function 2 anymore. Anyone can “publish” their work at any time for the entire world to read. The only thing lacking is a reason for anyone else to pay attention.

        So this new journal quite emphatically says that it is only about function 1: you as the author have to typeset your article and make it available at arXiv, while the “journal” will consist of nothing more than the certification by a few experts that this article is actually worthwhile.

      • Well put.

        Minor aside: I personally would worry a little about how much typesetting a journal of this sort would make me do as an author. I really don’t like having extensive typesetting downloaded onto me…

      • And to be clear, I think that this is an incredibly valuable function. It is inefficient for us all to wade through every article in our field, let alone other fields (or journalism, etc.). There will always need to be curators who certify that something is worth reading so that other people don’t have to discover it for themselves by wading through thousands of papers. We all need proxies for quality at some point. So the real questions are 1) how to make those proxies/curators work more accurately more of the time (i.e., not publishing fake stem cell papers or fake gay marriage papers just because they’re exciting); and 2) how to get the proxy/curation function to work more cheaply and efficiently by no longer paying for printing up paper magazines to ship to libraries.

      • Again, well said.

        “We all need proxies for quality at some point.”

        One sometimes gets the sense that people who think the current system is somehow undemocratic or fundamentally unfair either don’t understand this or don’t agree with it. Although perhaps I’m misinterpreting people who really hate certain proxies that I think are mostly ok.

        Minor aside: not sure that fake stem cell papers and fake gay marriage papers are the best examples of the sort of thing we want pre-publication review to weed out. Pre-publication review isn’t well equipped to detect many kinds of fraud.

  5. The argument that any review system, including the post-publication one, will be a faulty one may be true, but that derails what is essentially a much stronger argument: that “the existing system of peer review fails in promoting scientific excellence in a transparent and democratic way”.
    About the comment “One more brief response to your comment: as you’re probably aware, both non-anonymous reviewing and open reviewing make many people refuse to review. Not many people are prepared to write critical things for public consumption under their own names”:
    Why? Because “you scratch my back and I scratch yours” has long been practiced by the elitists to stay relevant, and they fear that by making the review process non-anonymous, they have a lot to lose. Why can’t scientists be scientists and be critical of others’ work based on data? Why can’t the existing peer-review process be non-anonymous? I am yet to hear a strong argument.

    • “why?”

      Because that’s the way people are. Especially people who are not in positions of power, or who worry about retaliation.

      And I’m just describing the way people are. I don’t have to give you an argument or explain why people are that way. That’s the way they are, and so if you want to reform peer review, you have two choices: change how people are, or take people as they are and propose a reform that will work anyway. With respect, I don’t think you have much shot at either if your plan is to insist on open, non-anonymous review and then berate the many people who won’t do it as corrupt. That’s not how you win hearts and minds.

      We’ll clearly have to agree to disagree. Thanks for your comments.

  6. Pingback: Link Round-Up: Academic Journals and the Publishing Industry

  7. Pingback: Friday links: consequences of stopping the tenure clock?, faculty trends, and more | Dynamic Ecology

  8. Pingback: Please don’t “make science transparent” by publishing your reviews | Scientist Sees Squirrel
