Friday links: no one reads your preprints, Rbitrary standards, and more

Also this week: the recent history of ecology, mission creep in scientific publishing, work vs. you, meeting past science greats, how to pitch your paper, the real origin of dogs, and more.

From Meg:

Zen Faulkes did an experiment where he posted a preprint to bioRxiv to see what sort of response it generated. The answer: not much. His take-home: “I learned that for rank and file biologists, posting work on pre-prints is probably just another task to do whose tangible rewards compared to a journal article are ‘few to none.’ Like Kim Kardashian posting a selfie, pre-prints will probably only get attention if a person who is already famous does it.”

It’s baby bird season! Here’s an excellent poster from Bird and Moon showing what to do if you find one. I’ll keep my eye out for ones with a large claw on the second toe!

I just saw this older post by Ambika Kamath outlining a workshop she planned and ran with two other grad students on how to make science more welcoming for underrepresented groups. This seems like it would be really useful for lots of departments.

potnia theron had a post on trying to find a balance between taking care of yourself and getting enough work done. As she says, finding the balance is hard, and we’ll never really know if we have it right. But it’s important to try.

Perhaps it’s because I have an infant, but I think this SMBC on meeting past science greats is very sweet. (ht: Jacob Tennessen)

The ESA Student Section is doing a survey of current students on challenges and opportunities for ecologists in the 21st century.

Been there:

And before you say LaTeX is the answer, remember this recent Friday Link from Jeremy:

And, while I’m linking to tweets, watching the oxbow form in this is very cool!

From Jeremy:

I’m years late to this, to my embarrassment: Earth Days looks like a must-read history of ecology from the 1950s through the 1970s, a crucial period in the field’s development. Sounds like it’s good gossipy fun too. Anyone read it? Care to provide a capsule review in the comments? (ht Small Pond Science)

I’m on time for this book, though: writing in Science, Meg Lowman reviews Hope Jahren’s memoir, Lab Girl.

Via guest poster Isla Myers-Smith, Cahill et al. (2011) on how to “pitch” your next ecology paper. I might do a post on this myself at some point, because it’s so important and is often done badly (including by me).

Related to Meg’s preprint link: Zen Faulkes on mission creep in scientific publishing.

Science (well, mathematics) on screen: a review of the new Ramanujan biopic from a scientific perspective. Reviews from the perspective of film critics here and here.

Explaining Rbitrary standards (ht Andrew Gelman). A taster:

As a general rule of thumb, if you encounter something truly ludicrous [in R], don’t know where it comes from, and don’t see it listed here, randomly select from one of the following explanations:

  1. Backwards-compatibility.
  2. Nobody thought it was important to get right at the time.
  3. That still exists?! I thought we’d removed tha- oh, wait, backwards-compatibility.
  4. Scheme did that.
  5. S did that.
  6. APL did that.
  7. Lisp did that.
  8. That’s the only use case late-20th century pure statisticians have, and if it’s good enough for us it should be good enough for you.
  9. Are you kidding?! If we’d done it that way it wouldn’t work on Solaris 8!

The origin of dogs. 😉 (ht @dsquareddigest)

And finally, an April Fool’s Day link especially for Meg, who hates April Fool’s Day:


26 thoughts on “Friday links: no one reads your preprints, Rbitrary standards, and more”

  1. On Zen Faulkes’s preprint experiment: of course his sample size is one. But the comment thread there is pretty instructive – people generally don’t want to hear his point, so they talk past it. His point is that nobody read his preprint because he’s not famous; in this way, the preprint system advantages those who are already famous/privileged and disadvantages those who are not (Zen acknowledges he’s well known on social media, but deliberately did NOT actively leverage that to get attention to his experimental preprint). I had a post about this last year (https://scientistseessquirrel.wordpress.com/2015/12/14/post-publication-peer-review-and-the-problem-of-privilege/) – although I don’t expect it to change anyone’s mind 🙂

    • I had the exact same thoughts. The reactions to Zen’s preprint experiment were very revealing. Apparently, all the open access evangelists care about is making your work freely accessible. Even if nobody actually *wants* to access it.

      I wonder if part of what’s also going on here is a persistent misunderstanding about whether the internet is democratizing. A lot of people seem to have the mistaken impression that the highly skewed distribution of attention in science–many papers get little attention, only a few get a lot–is somehow a product of Nature and Science, or Ivy League universities, or something. So that it would go away if only everyone would publish in Plos One, or everyone would put preprints on arXiv. Democracy! No filters! Fairness! Of course, people can’t do without *any* filters, and so preprints still get filtered and still attract a highly skewed distribution of attention. And frankly, I don’t see what’s so fair about, say, social media as a filter, as compared to, say, Nature and Science as filters. I have various old posts on this. Which yeah, I don’t expect to change anyone’s mind.

      You’re right that this same issue comes up in the context of post-publication review. Pre-publication review is among other things an attention-equalizing mechanism. It ensures that a bit of close attention–and no more than a bit–is paid to every single paper (well, at least the ones that get sent out for review).

      I’ve had the same thought in another context. Occasionally, readers thank Meg, Brian, and me for writing the advice posts we do, because they think it’s democratizing. In the minds of those readers, Meg, Brian, and I are revealing to all the “rules of the academic game” that would otherwise only be known by some privileged elite of insiders. Leaving aside the fact that most of the advice we give is familiar to every prof, I highly doubt our advice posts are actually democratizing. We’re not giving advice “to all”; we’re mostly giving it to the readers of this blog and whoever they happen to interact with on social media. Which I bet is a *very* non-random sample of all ecologists, or all academics. For instance, I bet graduate students at top research universities are more likely to read our blog than graduate students at Obscure University. And we know from reader surveys that our readership skews male, though that’s slowly changing. To be clear, I don’t have any problem with that, if only because I don’t think there’s anything that can be done about it. Our readership is going to self-select, end of story. I just find it odd that a few readers seem to think of DE as somehow leveling the playing field just because *in principle* anyone could read us.

    • Just had a little discussion with Ethan White, Tim Poisot, and a few other folks about this on Twitter (an experimental foray onto Twitter on my part). Their response to Zen’s post was basically:

      (i) No, preprints aren’t all that democratizing, but they remove one kind of reputation barrier (journal reputation)

      (ii) Your experience with preprints is likely to vary a lot by field. Zen Faulkes isn’t in physics, or bioinformatics/genomics. Not that most preprints in physics or bioinformatics get lots of attention–they don’t–but *some* do.

      (iii) N=1, it’s just an anecdote, nobody should draw any conclusions from it. (though anecdotally, it seems like this point mostly gets made only in response to anecdotes people don’t like. 🙂 )

      • Well, on (i), that seems to concede a lot of the ground that is often claimed. I’d much rather see journal reputation as a “barrier” (=“source of information”) because you mostly have to earn it anew with each paper. (I’m suppressing an urge to make a “joke” about who gets into Nature and Science….) Fame as a reputational barrier in a social context I’m much less happy about. So why would we want to move from the former to the latter?

        On (ii), true, but dubiously relevant – Zen’s point and mine are both about variance in attention, not mean.

        On (iii), I’m jealous that you, rather than me, thought of saying that. Fortunately, it’s n=1 and so doesn’t mean you’re actually smarter than me. #MetaEnoughForYou?

      • “I’d much rather see journal reputation as a “barrier” (=”source of information”) because you mostly have to earn it anew with each paper.”

        Yes, in an old linkfest (can’t find it now, sorry), I linked to a piece by Adam Eyre-Walker, who pointed out that one advantage of pre-publication review over post-publication review is that an unpublished paper is, well, unpublished. So pre-publication reviewers can’t use “reputation of the journal in which the paper was published” as a shortcut to evaluating a paper, or even as a way of deciding which papers might be worth evaluating.

      • Another (very belated!) thought I just had, about the notion that preprints are democratizing because they remove one reputational barrier (journal identity):

        Why is journal identity a “barrier”? After all, if previously-unknown grad student Jane Doe gets a paper in Ecology Letters, well, now she’s no longer unknown! Conversely, if already-well-known academic Ethan White puts a preprint on arXiv, it’s going to get a lot more readers than a preprint by unknown grad student Jane Doe. Especially if Ethan also announces his preprint on Twitter to his thousands of Twitter followers. The point is, because people can’t filter arXiv preprints by journal identity, they’re going to filter them in other ways. Such as by “prominence of author”. So removing one reputational “barrier” may well just increase the height of some other reputational “barrier”. Perhaps with the net effect of making the distribution of attention more concentrated rather than less, and making it harder rather than easier for unknowns to become known at the expense of the already-known.

        The point is that “barrier” seems to me to be the wrong word here. The right word is “filter”.

        I am of course implicitly assuming here that the peer review process is fair–it’s not that famous people get papers in Ecology Letters *because* they’re famous, or that non-famous people get rejected from Ecology Letters *because* they’re not famous. My experience as an editor and reviewer is that the peer review process is fair in that sense.

  2. Andrew – I like your framing of the 3 hypotheses. Like Jeremy I think it almost has to be a mix.

    If you go back to my William Shockley post https://dynamicecology.wordpress.com/2014/01/23/william-shockley-on-what-makes-a-person-write-a-lot-of-papers-and-the-superstar-researcher-system/ and apply the hurdle model to papers being read instead of papers being written, it still leads you to a lognormal distribution of citations (a toy simulation below illustrates why), which is in fact a very robust empirical regularity. Some of the hurdles that have to be cleared for a paper to be read are:
    1) This paper is on an interesting topic
    2) This paper advances the field
    3) This paper communicates well
    4) I know and like this author’s work
    5) Random network multipliers (got tweeted early in the day by somebody with lots of followers)

    Some of those (#1-#3) are definitely fair criteria. #5 is definitely not. And #4 – whether that is meritocratic or unfair is partly a question of time-scale: it is meritocratic over a career, but not over any one paper. What seems clear is that readers will always use #4, and despite the fact that #4 creates a bit of lock-in on reading famous people’s work, people regularly break in and become perceived as the new famous people, so it’s hardly the kiss of death for early-career people.
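
    A minimal R sketch of the multiplicative-hurdle intuition (purely illustrative numbers, not Shockley’s actual model): if each hurdle scales a paper’s readership by an independent random factor, then log(readership) is a sum of independent terms, so readership comes out roughly lognormal.

    ```r
    # Toy simulation: multiplicative hurdles -> roughly lognormal attention.
    # The number of hurdles and the factor range are arbitrary choices.
    set.seed(1)
    n_papers  <- 10000
    n_hurdles <- 5  # topic, advance, communication, author reputation, network luck

    # Each hurdle scales a paper's attention by an independent random factor
    factors <- matrix(runif(n_papers * n_hurdles, min = 0.1, max = 2),
                      ncol = n_hurdles)
    reads <- 100 * apply(factors, 1, prod)

    hist(reads, breaks = 50)       # heavily right-skewed
    hist(log(reads), breaks = 50)  # approximately normal, i.e. reads ~ lognormal
    ```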

  3. Jeremy – re Gelman’s link on R. That is pretty on target. I may have mentioned this before, but I have programmed in over 20 programming languages in my lifetime and R is my absolute least favorite language. Its behaviors are very often counterintuitive. The fraction of time I spend tracking down a bug that turns out to be caused by R doing something stupid is way higher than in other languages. R often seems to act as if it thinks it’s smarter than you are. But it rarely is. Computer languages should remain dumb and do what they say and nothing more.

    • In a weird way, this makes me glad I don’t know any other programming languages. I have nothing to compare R to, so none of its behavior seems weird or wrong to me.

      • Loading a data file that has 1000 numbers and one “NA” and deciding to turn it into a factor with 1001 levels doesn’t seem weird or wrong?!
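
        For newcomers, here’s a minimal sketch of that kind of gotcha (hypothetical file; this assumes the stray token is one read.csv doesn’t recognize as missing, plus the old stringsAsFactors = TRUE default that R used at the time):

        ```r
        # A numeric column with one stray non-numeric token ("n/a"):
        # read.csv can't parse the column as numbers, so under the old
        # stringsAsFactors = TRUE default it silently becomes a factor.
        writeLines(c("x", as.character(1:5), "n/a"), "demo.csv")

        d <- read.csv("demo.csv", stringsAsFactors = TRUE)
        str(d$x)   # Factor w/ 6 levels -- not numeric!

        # The fix: tell read.csv which tokens mean "missing"
        d2 <- read.csv("demo.csv", na.strings = c("NA", "n/a"))
        str(d2$x)  # integer, with one NA
        ```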

      • I also only use R for a limited range of mostly-simple purposes, the majority of which it was originally designed to handle. So I’m also much less likely than someone like you to encounter cases like the one you describe.

    • Brian, spot on. R is the most awful language ever. (Okay, maybe not *ever*, but it’s up there.) It’s also, unfortunately, useful, so I use it (but as little as possible). Whenever I want to throttle the people who developed R, I read “aRrgh: a newcomer’s (angry) guide to R” and feel better. http://arrgh.tim-smith.us/ if you haven’t seen it.

  4. Re: the preprint stuff, I think people sometimes miss the forest for the trees. If there were someday a scientific culture of open access or preprints or whatever flavor of un-paywalling scientific content you like to think about, then you *do* open up science for many who don’t have access right now. Those include people in developing countries (e.g. I have collaborators overseas who don’t have access to the paper we recently published) and those without good institutional library access (including ECRs between jobs). Of course any one individual paper doesn’t matter. That’s not the point. The point is to move the culture towards more openness so that scientific information is more generally available.

    • Sure. But it’s a collective action problem. You’re trying to get a whole bunch of people to all do something that, individually, they apparently have little reason to do *unless everybody else is already doing it*.

      One way (among others) to try to overcome that collective action problem is to argue that people *actually* have much stronger incentives to post preprints than they think. It’ll publicize your work! You’ll get more attention/readers/citations/influence/ponies than you would otherwise! Etc. The problem with that argument is that in most individual cases, it’s not true.

      • Totally agree with this assessment. Personally, I’m also in favor of casting my ‘vote’ by doing things that won’t have any immediate, tangible, or even necessarily eventual payoff — and encouraging others to do the same. (I’d consider it ‘service’, perhaps, in a perverse way…)

  5. Within a couple of hours of posting a preprint to bioRxiv, I was contacted by New Scientist and other media outlets. And it was tweeted a bunch, etc. So your mileage may vary.

    I did not get any comments on the manuscript from scientists, but I didn’t get many comments on the manuscript during review either. It was mostly ready to go.

  6. Hey Jeremy et al.,

    Just a quick note to say that I did read Earth Days several years ago–before kids–on the advice of friend and colleague Nate Sanders. Though I’m afraid the majority of my brain cells have since atrophied, and an actual synopsis or review is beyond me. But I do have a few recollections…

    I love reading about the history of our field, and so I really liked this aspect of the book. It was a mostly fun read, and it was cool to read about folks like scientist-turned-politician Barry Commoner from someone with firsthand knowledge. It was fun to read about colleagues we all know and love, like Bob Ricklefs, when he was just starting as a prof at Penn during the first Earth Day. And to put some real background and life behind some of the ideas and people that we tend to just make into caricatures of what they really said/thought.

    That said, I also recall a rather heavy load of ego and/or insecurity in the writing style that often detracted from the overall story. The author was clearly in the midst of a bunch of great minds and exciting science (and activism), but some bitterness seems to permeate aspects of the book.

    Nevertheless, on the whole, I recommend it for anyone interested in ecology. In general, I think we have a fair bit of ignorance of our history in the field of ecology, flitting about from one bandwagon to the next, from one tool to the next, and books like this are a good way to get ‘caught up’ on the past.

  7. Just a comment on the preprint thing: as a PhD student with not much published yet, preprints are valuable for making contacts. My first preprint helped me introduce myself to experts in my field, even though I had not published anything yet. I do agree, however, that preprints in biology are not used as much as they could be.

  8. Re: LaTeX, I hate to say it, but I do think it’s the approximate answer to the problem described. However, in my field it is very common for publication venues to provide LaTeX templates (and Word templates, but frankly I’ve found it easier to make everything work with the LaTeX ones). I have no idea if this is done in other fields. But it makes using LaTeX a hell of a lot easier than trying to write a LaTeX document from scratch would be. If you have to write one from scratch I can see that being too much pain to be worth it.

