Ask us anything: do current trends in scientific publishing accelerate scientific progress, or create chaos?

There is now a race to publish more papers, faster. There are now people who will publish anything, and journals that oblige them. Does this accelerate scientific progress, or create chaos? (from Lila Nath Sharma)

Jeremy: I don’t really know, and I’m not sure how anyone could know with much confidence–it’s an unreplicated, uncontrolled experiment.

One possibility is that it doesn’t have much effect on scientific progress at all. We can now publish a lot more stuff, but the time people allocate to reading hasn’t gone up. So maybe all this stuff gets published and then mostly just sits there unread, having more or less the same effect on scientific progress as if it had never been published at all.

More broadly, many changes in the scientific publishing ecosystem are just different ways of doing things scientists have wanted to do for centuries. It’s often hard to say whether different ways of doing things are better or merely different. The answer probably varies from person to person.

Personally, I like having the option to just get some minor piece of work reviewed and published, on the off chance that I’ll need to cite it in future or that it might prove of interest to someone. But it’s not always or even usually worth the cost (monetary or otherwise) to publish everything I do. There’s just not enough payoff, to me or to science as a whole. So I publish at more or less the same rate that I would if Plos One, Ecology and Evolution, Ecosphere, etc. didn’t exist. Which is not that high a rate–2-4 papers/year for me. I’ve always aimed for quality over quantity (that’s the goal, anyway!).

Brian: Absolutely, firmly in the camp of being detrimental to science. The number of papers being published is growing exponentially! The number of universities, the number of undergraduate students, and hence the number of faculty is not growing anything like exponentially (if it’s growing at all – tenure track faculty positions are actually on the decline in many countries). This explosion is breaking peer review (more papers to review than reviewers). It is making it impossible to follow the literature. It is wasting a lot of time. And it is not advancing science. 50% of papers are never cited once. I have a few of these myself – it’s not shameful to be wrong occasionally about what is important and useful to other people. But 50%!? Do we really need all those papers? It seems to me certain that that time could have been better spent on teaching, on service, on building long term datasets, on risky research. And unlike many problems we like to blame on for-profit publishers, we have nobody to blame but ourselves on this one. I’m a pretty big fan of slowing down and focusing on fewer, better papers.

29 thoughts on “Ask us anything: do current trends in scientific publishing accelerate scientific progress, or create chaos?”

  1. A point I forgot to make: one way in which unselective open access journals could be a good thing is by reducing publication bias. But in practice, I’m not sure how much they do; I don’t know of any data on this.

  2. I think you are both ignoring the amazing and wonderful growth of new ways to find, filter, and discuss papers and ideas. Sure, I don’t read more papers, but I certainly discover more directly relevant papers, encounter out-of-my-area ideas on Twitter that I can explore at leisure in their writeups, and have a much wider reach and “circle” than I would have 10 years ago. (In my twitter feed I see daily discussions of environmental metagenomics, pathogen detection, developmental biology, evolution and ecology, software engineering research, teaching research, and lots of other subjects.)

    I’m completely unsympathetic to the notion that we should have any form of prior restraint on publication just because rampant publication makes the world a more confusing place. Let’s use technology to do a better job of filtering that confusion and find relevant ideas (as we are) and stop wishing for a return to the good old days when a handful of senior people could more or less decide what was discussed. Sure, it was a simpler time, but I bet we’re going to get more creativity out of the current approach.

    • “I think you are both ignoring the amazing and wonderful growth of new ways to find, filter, and discuss papers and ideas.”

      Different strokes for different folks. I find Google Scholar’s recommendations somewhat helpful. And of course, when I’m searching for papers on a specific topic, I’m glad to have search engines and Web of Science. But other than that, sorry, new ways of filtering the literature don’t work at all for me, particularly when it comes to keeping up with the broader literature, and discovering papers I wouldn’t have known I wanted to read until I found them. And yes, of course would welcome new technologies that work better for me (e.g., Google Scholar somehow gets better at predicting what I will want to read). But it hasn’t happened yet, and in the meantime I can’t sit around waiting for it to happen. So I’m just going to keep filtering the literature the best way I know how, recognizing that my way of doing it wouldn’t work for everyone. I try to work in ways that work for me, where what works for me is in part, though not entirely, a complex function of what works for other people.

      More broadly, people vary in how they filter the literature–but old school methods are still predominant, at least among the ecologists who read this blog (though most people supplement old-school methods with some sort of newer method).

      As I’ve said in old posts and comment threads, at some point it’s quite possible that my ways of filtering the literature will just stop working for me. Too many other people will be publishing differently than they have in the past for my filtering methods to work anymore. At which point I’ll be obliged to change my filtering methods. But until that day comes, I’m not doing it wrong.

      “In my twitter feed I see daily discussions of environmental metagenomics, pathogen detection, developmental biology, evolution and ecology, software engineering research, teaching research, and lots of other subjects.”

      I’m glad that works for you, and I’m well aware it works for a lot of people. But you’re overgeneralizing from your own example if you think it would work for most or all people. There are *lots* of people for whom it doesn’t work. And if you look back through some of our old comment threads, you’ll find other people who use Twitter to filter the literature agreeing with me–they freely admit that they’re only able to do it because they happen to be interested in topics for which there’s a critical mass of people on Twitter who tweet about new papers.

      With respect, please don’t put words into my mouth. I didn’t call for, and don’t want, a return to some imagined Eden when a few senior people controlled what was discussed. At the risk of being accused of concern trolling, I just recognize that changes in technology and professional practice are rarely unmitigated goods. Rather, they’re good for some people and less good, or bad, for others. Everything has upsides and downsides, in part because what’s an upside for one person often is a downside for someone else. That’s not an argument for stasis. If I was in favor of stasis I wouldn’t be blogging! (and I wouldn’t have started to regularly check Google Scholar’s recommendations)

      • I certainly didn’t intend to put words in your mouth. But I’d be very interested in practical ideas to “tame the flood” that don’t involve prior restraint where someone, somewhere (presumably the more senior people or journal editors) makes decisions on what is worth publishing.

        I don’t know of a way to quantify it, but my impression is that a huge amount of scientific communication happens via backchannel networks, in-person meetings, and other informality. If more of that gets exposed via more casual publications, Twitter, and blogging, I think it’s a net win.

      • @Titus Brown:

        Re: “prior restraint”, see those old posts of mine I linked to in another comment. In practice, I’m not sure that “prior restraint” is all that different than post publication filtering. All filtering methods or combinations of filtering methods that I know of have the effect of concentrating most of the attention on a very small fraction of stuff. (This is true outside of the scientific literature as well. A very small fraction of books garner most of the sales and other measures of attention. Same for films. Same for websites and blog posts. Etc.)

        Ok, I’m sure newer methods of filtering concentrate attention on a somewhat *different* small fraction of stuff than would more traditional methods. (For instance, the most viewed Plos One ecology paper of all time is on fellatio in bats. It’s the most viewed Plos One ecology paper of all time thanks to going viral on social media.) But newer methods concentrate attention all the same. And I don’t think they concentrate attention on a “better” fraction of stuff than do more traditional methods. Nor do I think newer filtering methods are any more “objective” or “fair” (see those old posts for discussion on that last point).

      • @Titus:

        And re: Brian’s comments in the original post, I think you’ve missed that what Brian’s calling for (unless I misunderstood him) is more *self* filtering pre-publication. Does it really help you, or science as a whole, to publish *everything* you do, thereby putting a burden on other people’s filtering systems? Particularly given opportunity costs. Time spent publishing one project is time not spent doing and publishing another. And as Brian notes, it’s also time not spent on other valuable things.

    • A couple of old posts on filtering the literature, of which my comments in the original post are just condensed summaries:

      Those posts both have good comment threads. Including some comments from noted nostalgic old stick in the mud Carl Boettiger mostly agreeing with me.🙂

    • Who proposed prior restraint on publication? I just said we should all stop trying to Red Queen each other to death with publications we wouldn’t have thought worth publishing 20 years ago.

  3. I don’t have a problem with a large number of scientists publishing in a large number of journals. It is easy to pick out the “important” ideas in retrospect, but much more difficult early on. The predatory journals/conference publications with no effective peer review or standards might be a serious problem, particularly if people have a hard time identifying the culprits.

  4. I have just reviewed a single study that was published three times–not quite word for word, but it could have been one paper. This trend in scientific publishing of tidbits is very bad for ecology and I am really with Brian on this one. Charles Krebs

    • Eww, yuck. I’m curious whether the previous versions were in reputable journals?

      Yes, one downside of publishing so much stuff is that it makes it easier to self-plagiarize. Though it’s my impression that plagiarism (including self-plagiarism) remains rare and mostly confined to little-cited papers in little-read venues. (And that insofar as it seems like it’s becoming a bigger problem, it’s because we now also have better tools for detecting it.)

      I think we linked to some data on this from physics in an old linkfest, based on a text-mining study of arXiv preprints, but I can’t find it just now.

  5. I think there are two different issues going on here:
    (1) the tendency for it to be easier to publish now, and so the effort bar is lower, and more information is being published; and
    (2) the increasing competition and therefore the increasing emphasis on metrics pertaining to publishing — i.e. you need to publish *more* to get to the same place career-wise now than you did 10, 20, or 30 years ago

    I’m all for #1 and completely against #2.

    • I suspect you’re far from alone in liking #1 and disliking #2, Margaret. But don’t they go hand in hand? I mean, if it is now easier to get papers published (both in the sense of “requiring less time and effort on the author’s part”, and in the sense of “you can now publish any paper that’s technically sound, even if no reviewer finds it novel/important/interesting”), isn’t it completely reasonable for employers and funding agencies to expect authors to have published more stuff, all else being equal*? Indeed, it’s a little hard to see how it could be otherwise. If the average of anything goes up, then what’s sufficiently above-average to obtain some reward (a job, a grant, etc.) also is going to go up.

      I mean, think back to the 1960s, when only a minority of US academics had any publications (we linked to data on this in an old linkfest, but I can’t find it now). The bar has been getting raised for decades (at least) in academic science. It didn’t start getting raised with the advent of the internet or Plos One.

      I guess to break the correlation between #1 and #2, employers and funding agencies would need to stop caring *at all* about how many papers you’ve published. Which I suppose they might. Indeed, now that I think about it, that might already be happening to some extent. It’s often noted, correctly, that having lots of papers in Plos One or other unselective journals tends not to do all *that* much for your job or funding prospects, all else being equal. That’s because employers and funding agencies recognize that it’s not that difficult to publish a lot of papers in unselective venues these days. That’s not at all a criticism of publishing in those venues–I’ve published in Plos One myself and I’m glad it exists. I just don’t think that having that Plos One paper makes even a slight difference at the margin to my career prospects, and I don’t think it should.

      *”all else being equal” is a key caveat, of course. Because it never is; employers and funding agencies of course look at *lots* of other things besides how many papers you’ve published. I suspect that the bar is getting raised in other respects besides just “how many papers are on your cv”, though I lack the data to say for sure.

      • Two comments:

        > isn’t it completely reasonable for employers and funding agencies to expect
        > authors to have published more stuff, all else being equal*?
        No, I don’t think so. I think people should be judged on the quality, not the quantity of their work. You sorta address this at the end of your comment.

        2. I think we have very different views of PLoS ONE. There are many possible reasons to publish there — not just “no one thinks this is impactful.” My single PLoS pub is highly cited. My advisor (first author) decided to publish there because what he had to say was (1) urgent; and (2) needed to be heard by managers outside academia who don’t have institutional library access. After my 2-year experience with my first first-author pub, I am finding myself very interested in journals that don’t require “fit”. My paper bounced from journal to journal because the methods journals saw it as a conservation paper and the conservation journals saw it as a methods paper. I was constantly re-writing intros, discussions, and cover letters to try to make it “fit”. Sorry, but a real waste of time. I’m starting to see from others’ experiences that the more you do novel *interdisciplinary* work, the harder it is to find a journal that “fits”. And it should be noted that what may not seem “impactful” now might be very important in fifty years — lots of historical examples of this.

        So yes, I think papers (and the people who write them) should be judged on their merits — not just on where they’re published. And I think it’s great that people can publish sound research without having to show how it’s oh-so-important to global-issue-X or until-now-intractable-basic-science-question-Y.

      • @Margaret:

        “So yes, I think papers (and the people who write them) should be judged on their merits — not just on where they’re published.”

        Totally fair comment, with which I agree. I’d only add that, when others use heuristics (including, but definitely not limited to, looking at where you publish), that’s what they’re trying to do: judge your work on its merits. The use of *some* heuristics or other is necessary and inevitable:

        Good point re: people publishing Plos One for various reasons. Sorry if I made it sound like my own reasons for publishing in Plos One are the only ones.

        Re: historical examples of papers that don’t seem important now turning out to be important far down the line: yes, absolutely, that happens. You could’ve added that many papers that seem important now will turn out to be unimportant far down the line. But then again, the vast majority of papers that seem unimportant now will also turn out to be unimportant down the line. Absolutely, judgments of how important some bit of work is are very difficult, and there’s plenty of scope for reasonable disagreement on those judgments, which is why we *do* disagree on them a fair bit. And I agree that we may be particularly bad at judging certain sorts of work, such as interdisciplinary work. But one way or another, those judgments will (and should) get made, not just down the road but today. And so the only question is how best to do those judgments. Afraid I don’t have any specific ideas for how to do our evaluations much differently than they’re currently done, but I’d be interested to hear ideas.

      • Well said. (All of it.)

        Agreed that some heuristics are necessary. I tend to favor the use of metrics like number-of-times-cited as a measure of an individual paper’s worth. But of course, on average, that measure is correlated with impact factor, which is closely related to journal prestige… (But I still just want to publish my work quickly and with minimal hassle and not feel like I’m trying to play some sort of see-how-great-my-work-is game, which is how it feels to me now…)

  6. I’d like to point out that especially at the postdoc (and maybe assistant professor, but just speculation here) level, publishing more than needed/expected helps in dealing with the “perspective” problems experienced by many postdocs, and it provides an “at least I am doing something tangible” answer.

  7. Another perspective on this: many of the low-novelty, low-impact, basic, observational short papers that repeat previous work on a different species, and don’t seem to move a field on at all, are often the ones being used to fuel meta-analyses or synthetic reviews that DO move the field on. Many of us have benefitted from the basic papers that provide such useful data. Only history will say which papers are or are not useful in the future.

  8. I think that the sheer volume of research papers creates a challenge for new scientists (ones that are actually focused on progress and not simply attention). For established scientists I don’t think it is too much of a problem. I’m volunteering for the whale biologists up at Glacier Bay Nat. Park and it seems to me that the scientists here read research papers that are published by established scientists and papers that have good referrals. That’s not to say the science community can’t be quite small and tight knit.

    The larger problem that comes with large volumes of papers being published is, again, for independent, unknown scientists to share their work. It also presents a challenge for non-scientists trying to find good, informative research papers to read (not that many people seek them out anyway).
