Controversial ideas about scientific publishing and peer review: poll results and commentary

Note from Jeremy: I’m traveling today so comment moderation may be slow. Sorry.

************************

Recently we polled y’all on your views on various possibly-controversial changes to the scientific publishing and peer review system. Here are the results!

tl;dr: the “controversial” ideas included in the poll mostly aren’t all that controversial–or all that popular.

Demographics

We got 408 respondents (thanks everybody!), including a good mix of grad students (22%), postdocs (32%), faculty (31%), government/NGO scientists (11%), and others (4%).

Respondents are mostly based in N. America (61%) or Europe (25%).

As always with our polls, this isn’t a random sample of all ecologists, or even of our readers, though it’s probably pretty representative of our regular readers. But it’s a large and diverse enough mix of people to be worth talking about.

Results first, then some commentary.

Responses

There’s a wide range of opinion on the statement “peer review and scientific publishing are broken and need massive changes”, with the overall distribution leaning towards agreement:

Just over half of respondents prefer double-blind peer review to other forms, with a pretty even split of the remaining respondents among the other forms:

Those who don’t want reviewers paid outnumber those who do by almost 3:1 (32% to 12%), but they’re both outnumbered by the 45% who say “it depends”:

Only 17% of respondents want all journals to evaluate mss solely on technical soundness, vs. 63% who don’t:

Only 7% of respondents want to abolish pre-publication review entirely, vs. 74% who don’t:

Only 12% want all journals to be author-pays open access, vs. 70% who don’t:

Finally, when asked what information they’d like to see used to evaluate scientists and their work in contexts in which the evaluators lack the time and/or expertise to do a thorough evaluation themselves, the most popular choices were “how many times the work has been cited” (chosen by 67% of respondents), letters from experts (62%), and “journal in which the work was published” (50%). “Altmetrics” only got 22% support, and only 26% of respondents included “other” among their choices. Only 17% chose “mentions of the work in the media”.

Crosstabs

There wasn’t any association between career stage and opinion as to whether “everything is broken”.

Respondents who strongly agree that peer review and scientific publishing are broken and in need of major change weren’t more likely than others to support the specific changes included in the poll. The few respondents who strongly disagree that “everything is broken” are less likely than others to prefer double-blind review, and more likely than others to oppose reviewing only for technical soundness. But those could be blips due to small sample size.

Preference for double-blind review decreases with increasing seniority: it’s preferred by 67% of grad student respondents, 61% of postdoc respondents, and 41% of faculty/NGO/government scientist respondents. Conversely, single-blind review is more popular among profs than among people in other positions, although even among profs it only got 25% support.

There wasn’t any obvious association between opinions on different proposed reforms. For instance, it’s not the case that people who want all journals to review only for technical soundness also want all journals to be author-pays open access.

In a result that may surprise Brian, Europe-based respondents weren’t more likely than others to support moving all journals to author-pays open access. Brian had thought they might be because many European countries provide funding to cover open access publication fees. But it’s not a massive sample of European opinion, plus one can imagine many reasons why someone might not favor universal author-pays open access even if their national government pays the fees.

Respondents who reported knowing a lot about or having thought a lot about an issue didn’t have opinions appreciably different than those of other respondents, save that they were less likely to indicate “maybe/not sure/no opinion”.

Faculty and government/NGO scientists weren’t any more likely than others to choose “journal in which the work was published” as a way of evaluating scientists and their work.

If there are other crosstabs you’re curious about, let me know in the comments and I can check.
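For readers who want to run this kind of check on their own survey data: a crosstab like the ones above is just a count over pairs of answers. Here’s a minimal sketch in Python using made-up illustrative responses (the category names and counts are hypothetical, not the actual poll data):

```python
from collections import Counter

# Hypothetical responses: (career stage, preferred review format) per respondent.
# These values are illustrative only, not the actual poll data.
responses = [
    ("grad student", "double-blind"),
    ("grad student", "double-blind"),
    ("grad student", "single-blind"),
    ("postdoc", "double-blind"),
    ("postdoc", "open"),
    ("faculty", "single-blind"),
    ("faculty", "double-blind"),
    ("faculty", "open"),
]

# Cross-tabulate: count each (career stage, preference) pair.
crosstab = Counter(responses)

# Percent preferring double-blind within each career stage.
stages = sorted({stage for stage, _ in responses})
for stage in stages:
    total = sum(n for (s, _), n in crosstab.items() if s == stage)
    double_blind = crosstab[(stage, "double-blind")]
    print(f"{stage}: {double_blind / total:.0%} prefer double-blind")
```

For a formal test of association one would follow this with a chi-square test (e.g., scipy.stats.chi2_contingency), though with cells this small that’s more illustration than inference.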

Commentary

  • Turns out many of these “controversial” ideas aren’t very controversial–or very popular. The overarching claim that peer review and scientific publishing are “broken” is quite controversial (though as an aside I wonder a little if people who think everything is “broken” were especially likely to take the poll…). But that dissatisfaction didn’t translate into much support for most of the radical reforms included in the poll. Except for the issue of whether to pay reviewers, and to a lesser extent the issue of author blinding, there just isn’t much disagreement here. Basically, there’s just not much of a constituency for turning every journal into Plos One, or replacing all journals with arXiv. And old-fashioned indirect methods for evaluating scientists and their work–citation counts, reference letters, and publication venue–remain much more popular than the newfangled method of altmetrics. Maybe all this will change in future, of course–or maybe it won’t. But that’s the state of play right now. My admittedly-anecdotal impression is that people who want major changes to the status quo tend to dominate online conversation about peer review and scientific publishing. But these data suggest that the online conversation doesn’t reflect the overall balance of opinion in ecology. Whether it should or could are different questions, of course…
  • People who strongly agree that peer review and scientific publishing are broken weren’t much more likely than anyone else to favor the specific reforms included in the poll. That surprises me. It might indicate that the poll failed to include other radical reforms that would garner wider support among those who strongly agree that things are broken (any suggestions as to what those might be?). Or, maybe the reforms I polled on are just too radical, but respondents who agree “everything is broken” would be more likely than others to support somewhat less radical reforms. Or, perhaps people who feel strongly that things are “broken” just feel worse about the status quo than other people, without differing from others as to the nature or magnitude of changes needed. That is, maybe my question “is everything broken?” basically just picks out a gradient from “glass half full” people to “glass half empty” people.
  • I’m slightly surprised that more junior respondents weren’t more likely to say “everything is broken”. Perhaps that’s another point in favor of the hypothesis that the “is everything broken?” question just picks out a “glass half full” to “glass half empty” gradient. There are optimists and pessimists at every career stage.
  • In general, the bigger the proposed change, the less support for it. It’s easy for journals to switch to double-blind review, and some leading ecology journals have already done so. Conversely, doing away with pre-publication review was the most radical change polled, and drew the least support. That makes sense, I think. It’s easier for people to get behind a reform that is already being implemented, or might plausibly be implemented. And it’s easier to make the case that a more modest reform will improve on the status quo without having undesirable side effects that make it worse than the status quo on balance. Even many people who don’t like the status quo will worry–often quite sensibly!–that some radical alternative would be even worse.
  • I suspect that another big driver of these results is that people like having choices. I bet many people like having the option of choosing an author-pays open access journal–but few want that to be their only option. I bet many people like having the option of choosing a journal that evaluates mss only on technical soundness–but few want that to be their only option. Etc. The exception–the majority preference for double-blind review–is one where the other options are seen by many as possibly-unfair or otherwise undesirable, and where there’s no obvious benefit to any individual author to having a mix of options.
  • I’m slightly surprised unblinded review got even 14% support. I had thought hardly anyone supported it.
  • I’m not surprised double-blind review was the most popular choice. It was the most popular choice in a big global survey of scholarly opinion a few years ago.
  • I’m not surprised that support for double-blind review decreases with increasing seniority, but I wouldn’t make much of that, because double-blind review is the most popular choice even among faculty. It sure looks like double-blind review is the future, unless lots of junior people change their minds about it as they progress in their careers (and why would they?).
  • I suspect that a decent number of people would support paying reviewers if it could somehow be guaranteed that the payments would come out of Wiley’s or Springer’s profit margins…
  • This poll is just a snapshot. It would be interesting to know if opinion on any of these issues is changing, and if so how fast. It sure looks like opinion is shifting towards double-blind review. I suspect that support for universal author-pays OA, for abolishing pre-publication peer review, and for universal review only for technical soundness isn’t growing, at least not much, but that’s pure speculation and could be wrong.

And finally

Tell me: are you surprised there wasn’t more support for the various reforms included in this poll?

11 thoughts on “Controversial ideas about scientific publishing and peer review: poll results and commentary”

  1. Really interesting results, and I’d agree about the glass-half-empty-everything-is-broken personality tending to dominate online discussions. I try very hard not to engage with them; it saps a phenomenal amount of energy. I was kind of surprised that such a large proportion did not want to be paid for their reviews. Perhaps I’m getting more mercenary, but I’m much more inclined to do grant reviews, for instance, if I get paid: £90 here, £50 there, it soon adds up, and I squirrel it away to pay for conference and field work travel, etc. Likewise doing talks for gardening and natural history groups (who usually have a speakers’ budget), PhD examinations, etc. In an age when it’s getting harder and harder to fund these things, all of this provides an important income stream for me. Of course I still do a lot of unpaid work, but offer me some cash and I’m more likely to agree.

    • My lack of support was mainly driven by feeling like there’s no way that cost would actually come out of the pockets of the publishers. I’d love to be paid for reviews (and I’d love for people who review my manuscripts to be paid, too!), but it seems like it would end up increasing the cost for authors or libraries.

      • Before Owen and I came up with PubCreds, our idea was that authors could pay for their submissions with the money they were paid for reviews. In the aggregate, that works. But at an individual level, you might either make or spend a decent amount of money on balance.

