Friday links: over-cited papers, #overlyhonestemail, and more

Also this week: how to increase graduation rates for students in financial need, PLOS ONE’s surprisingly (?) high rejection rate, and more.

From Jeremy:

The fascinating history of research on “canals” on Mars, and how it all began with a translation error.

Andrew Hendry on his most over-cited papers. He’s spot-on re: “fill in the box” citation inflation.

No detectable association between whether a randomized controlled trial was preregistered and whether it detected a positive treatment effect. (ht Retraction Watch)

An argument that you shouldn’t use funnel plots to diagnose publication bias, because funnel plots assume that study sample sizes are uncorrelated with the (true) effect size being studied. To which, yes, that’s an assumption of funnel plots. But unlike the authors of the linked post, I think that assumption usually is fine: in my experience, sample sizes usually are determined by considerations independent of the true effect size (in particular, the investigator’s resources). (ht Stephen Heard, via Twitter)
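If you want to see what that assumption does and doesn’t buy you, here’s a minimal simulation sketch (mine, not from the linked post; all numbers are invented). When sample sizes are set independently of the true effects, the estimates form the classic symmetric funnel; when studies of larger true effects get smaller samples, the funnel skews even with zero publication bias:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    n_studies = 200

    def simulate(correlated):
        """Return (observed effect, standard error) for a set of studies."""
        true_effects = rng.normal(0.3, 0.1, n_studies)  # heterogeneous true effects
        if correlated:
            # Violation of the funnel-plot assumption: studies of larger
            # true effects get smaller sample sizes.
            n = np.round(20 + 400 * (0.6 - true_effects).clip(0.05)).astype(int)
        else:
            # The usual assumption: sample size reflects resources,
            # independent of the true effect.
            n = rng.integers(20, 400, n_studies)
        se = 1 / np.sqrt(n)                      # SE shrinks with sample size
        observed = rng.normal(true_effects, se)  # each study's estimate
        return observed, se

    fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 4))
    for ax, corr in zip(axes, (False, True)):
        est, se = simulate(corr)
        ax.scatter(est, se, s=8)
        ax.set_title("n correlated with effect" if corr else "n independent of effect")
        ax.set_xlabel("observed effect size")
    axes[0].set_ylabel("standard error")
    axes[0].invert_yaxis()  # funnel convention: most precise studies at top
    plt.tight_layout()
    plt.show()

The right-hand panel looks “biased” even though every simulated study is reported, which is exactly the linked post’s worry; the disagreement is over how often real sample sizes are actually set that way.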

Wait, PLOS ONE has a 50% rejection rate?! Even though they only reject on grounds of technical unsoundness? Anyone else surprised by this? Man, I sure hope that most other journals receive a lower rate of technically-unsound submissions than that (and I suspect they do…). Because the thought that 50% of all submissions are so technically flawed as to be unpublishable is pretty depressing.

From Meg:

“Sorry for the delayed response,” which includes various reasons for not replying, including: “Sorry for the delay! I put off answering your e-mail until I had an even more tedious task that I wanted to avoid. Thanks!”

The NYTimes had a piece on a potential way to increase student graduation rates: giving small grants to needy students who are close to graduation. The piece focuses on Georgia State, which has done an impressive job of increasing graduation rates and erasing differences in graduation rates between white and minority students. The small grant program at Georgia State sounds really interesting, and I’m glad there are studies underway to see how much of an impact the small grants are having.

9 thoughts on “Friday links: over-cited papers, #overlyhonestemail, and more”

  1. I have seen some pretty bad English in some PLOS ONE submissions (non-native speakers who didn’t find a native speaker or a good editor before submission), which could contribute to such a high rejection rate.

  2. An alternative model of the 50% rejection rate at PLOS ONE is that reviewers are being human and recommending rejection on criteria other than technical errors.

  3. On the PLOS ONE link, I’ll go back to my comments on the post about gatekeeping vs. peer review: “technically unsound” is hardly an objective or black-and-white property. Most papers at most journals are asked to add a new analysis, report additional numbers, or recalibrate the claims made against the evidence. Are these technically unsound? If so, does PLOS ONE reject them instead of requesting revisions? And if they request revisions, why reject other papers up front as “technically unsound”? And given that reviewers regularly disagree about whether, e.g., a certain statistical choice was appropriate or not, how objective can this be?

    I personally would respect PLOS ONE more if they just said “we only reject really bad papers.” The notion that there is some objective, obvious, absolute measure of “technically unsound” is a canard.

    • I remember people making a big deal over the stats on the number of papers rejected by PLoS ONE that go on to be published elsewhere. I can’t find an official stat right now, but this blog post says it’s about half:
      http://occamstypewriter.org/scurry/2012/04/01/plos1-public-library-of-sloppiness/

      If you believe that PLoS ONE only rejects papers that are truly unpublishable, that number is alarming because it suggests a lot of shoddy papers are being published. But, of course, it could also be that PLoS ONE is (erroneously) rejecting papers that are valuable contributions. I had a paper rejected by them that we went on to publish elsewhere that is doing just fine, citation-wise. So, I’ll let you guess which camp I fall in (though, of course, it’s probably a bit of both).

      • “I had a paper rejected by them that we went on to publish elsewhere that is doing just fine, citation-wise. So, I’ll let you guess which camp I fall in”

        Add me to this list.

  4. On the 50% rejection rate: another way to achieve this is that, instead of accepting a paper with major revisions, you reject it and encourage resubmission. The revised paper is then counted as a new submission, and the rejection rate is inflated.

    • You are absolutely right that “reject with invitation to resubmit” is a game some editors play. Aside from manipulating rejection rates, “time from first submission to publication” is a key statistic for journals. Rather deceptively, “first submission” is retained when a paper gets a major revision, but resets when a paper is rejected with an invitation to resubmit. (Toy numbers below show how far this accounting can move the headline rejection rate.)
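      To put toy numbers on the accounting (hypothetical figures, purely to show the mechanism):

          # Same editorial outcomes, two bookkeeping schemes for
          # "reject with invitation to resubmit". All numbers are made up.
          accepted_first_pass = 50   # accepted, possibly after major revision
          rejected_for_good = 30
          reject_then_resubmit = 20  # "rejected", invited back, accepted on round 2

          papers = accepted_first_pass + rejected_for_good + reject_then_resubmit  # 100

          # Scheme A: the revision counts as the same submission.
          rate_a = rejected_for_good / papers  # 30/100 = 30%

          # Scheme B: each resubmission counts as a brand-new submission, so
          # those 20 papers each log one rejection plus one extra submission.
          logged_subs = papers + reject_then_resubmit                   # 120
          logged_rejections = rejected_for_good + reject_then_resubmit  # 50
          rate_b = logged_rejections / logged_subs                      # ~42%

          print(f"major-revision accounting:  {rate_a:.0%}")  # 30%
          print(f"reject-and-resubmit:        {rate_b:.0%}")  # 42%

      The same editorial decisions, but the headline rejection rate climbs from 30% to about 42% just from relabeling major revisions as rejections.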
