Lots of good stuff this week! C’mon, click all the links! Like you have anything better to do on a Friday afternoon.🙂
John Taylor discusses the need for replication in economics. He notes that way back in 1986, Dewald et al. published a devastating article in which they found that serious mistakes were common in empirical economics articles. In response, leading journals changed their policies to oblige authors to share data and code–but the policies were never enforced, and today a few lonely economists are still campaigning for change. Wonder what someone would find if they did a study like Dewald et al.'s today. I'll probably have to keep wondering for a while, since even if data and code are shared, nobody has much incentive to try to replicate published analyses. Hence my proposal that grad students be assigned replication projects as part of their statistics coursework.
Along the same lines, Retraction Watch suggests a Reproducibility Index for scientific journals. The idea is to grade journals on the fraction of their papers that stand up to scrutiny–for instance, by calculating, among citations to a journal's papers, the percentage reporting successful replications vs. failures to reproduce the results. Failures to replicate are infamous for having little impact on the literature, even when they take the form of corrections or retractions. But perhaps people would pay more attention if failures to replicate were summarized in the form of an index? See the comment thread over there for much discussion of the feasibility and advisability of such an index.
And a few years ago, John Ioannidis and Thomas Trikalinos suggested a way to statistically test for an excess of statistically significant findings in research on any given topic. Lots of different factors–data dredging, selective reporting, data falsification, and many more–will inflate the number of statistically significant results reported, compared to the number expected given the study designs used and the true effect size being studied. So if you have a bunch of studies of the same question (i.e. studies that could all be combined in the same meta-analysis), you can calculate the fraction expected to give statistically significant results under the null hypothesis of no bias towards statistically significant findings, and compare that to the observed fraction. Of course, since many sources of bias in significance tests will also bias the estimated effect sizes, you can repeat the analysis over a range of possible true effect sizes, to get a sense of how large the true effect would have to be to justify the observed frequency of statistically significant results. The idea is related to, but distinct from, familiar tests for publication bias. The authors found numerous examples of significant bias towards significant results in the medical literature. There are lots of ways you could take this in the future. For instance, you could ask whether the excess of significant results tends to be greater for observational studies than for randomized controlled experiments. Note, however, that the approach is basically exploratory, and that there are many reasons why you might find an excess of statistically significant results, some of them completely innocent (e.g., among-study heterogeneity in the true effect size).
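If you're curious what such a test looks like in practice, here's a minimal sketch. This is my own illustrative code, not Ioannidis and Trikalinos's implementation: the function names are hypothetical, and it assumes every study uses a two-sided z-test at alpha = 0.05 and that you supply an assumed "true" effect size and each study's standard error.

```python
import math

Z_CRIT = 1.959964  # two-sided critical value for alpha = 0.05

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def study_power(effect, se, z_crit=Z_CRIT):
    """Power of a two-sided z-test when the true effect size is `effect`
    and the study's standard error is `se`."""
    mu = effect / se  # expected z-score under the assumed true effect
    return (1.0 - normal_cdf(z_crit - mu)) + normal_cdf(-z_crit - mu)

def excess_significance_test(n_significant, std_errors, assumed_effect):
    """Compare the observed number of significant studies (O) to the
    number expected (E), where E sums each study's power at the assumed
    true effect size. Returns (E, A, p): A is a chi-square statistic
    with 1 df; a large A (small p) suggests an excess of significant
    results relative to the studies' power."""
    n = len(std_errors)
    expected = sum(study_power(assumed_effect, se) for se in std_errors)
    observed = n_significant
    a_stat = ((observed - expected) ** 2 / expected
              + (observed - expected) ** 2 / (n - expected))
    # Tail probability of a chi-square(1 df) variate, via the normal CDF
    p_value = 2.0 * (1.0 - normal_cdf(math.sqrt(a_stat)))
    return expected, a_stat, p_value
```

For example, if 8 of 10 studies (each with standard error 0.2) report significance but the assumed true effect is only 0.3, each study's power is roughly a third, so only about 3 significant results are expected, and the test flags a clear excess. Repeating this over a grid of assumed effect sizes shows how big the true effect would have to be to make the observed count unremarkable.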
Britt Koskella, who has guest-posted for us in the past, has now posted the reviews her latest paper received over at her own blog. She posted her responses to the reviews as well. I would think students in particular would be interested in reading this. Especially when you're just starting out, you don't have much first-hand experience with what peer review is like. Plus, it's always challenging to be objective about one's own work, and thus about the reviews of one's own work. Reading the reviews someone else's work has received increases your "sample size" for what peer review is like, and lets you experience the process as an objective outsider. Terry McGlynn at Small Pond Science also posts the reviews he receives. Terry suggests numerous reasons for doing it. I've previously expressed mild skepticism about some of those reasons. But I don't see that publishing reviews will do any harm (Terry rightly downplays the risks, I think), so why not? (And see below for Meg's comments.)
Morgan Ernest on how to create a diverse speaker series. Short version: cast a wide net, be systematic and thoughtful rather than just inviting the first people who pop into your head, and be prepared to put in a lot of work (which will be well worth it).
A rare retraction in ecology.
How do you tell the difference between research that’s too specialized, and research that’s not specialized enough (“just following the crowd”)? Terry McGlynn, Zen Faulkes, and I all have posts on this. Mine is presented as a joke, but in fact I’m mostly not joking.
Amusing article on scientists eating their study organisms. I have an old post on this, with a very entertaining comment thread. Judging by that thread, most ecologists choose their study organisms by asking themselves “What system should I work in in case I get hungry in the field and want to eat one of my samples?”🙂 (HT Ed Yong)
Awesome gifs of science demonstrations. It’s a wonder any kids ever go into anything besides chemistry after watching stuff like this.🙂 (HT Ed Yong)
This kind of thing is why I’m a lab ecologist.🙂 (HT Ed Yong)
And finally, sex tips from nature, courtesy of The Onion. “Be the boss and use your pincers to drag your mate into a nitrogen-rich log.”🙂
For those with NSF grants, this post at DEBrief regarding the new annual reports format will be really helpful. Seriously. Read it. I wish it had been written before I battled through the new format, and it sounds like even more changes (excuse me, 'improvements') are in store. Filling out my reports this year was really frustrating and time-consuming. Hopefully that will improve as I gain experience with the new system.
Also related to NSF: NSF will be moving to Alexandria, VA in 2017. (ht: Morgan Ernest)
Britt Koskella has posted reviews of her new paper in Current Biology, along with her responses. She was inspired by Rosie Redfield, and this is something others (including Terry McGlynn) have called for. Interestingly, on twitter, Terry said this is something he would only do post-tenure. Timothée Poisot has an interesting post in reply to Britt's. Britt and Carl Boettiger have interesting comments on that post, too. Lots to think about!
Two entries in the “less-is-more for science writing” competition: Is this the best scientific abstract ever? It’s certainly to the point! (ht @JenLucPiquant via twitter) And, on a similar theme, this is a fantastic taxonomic note. As Morgan Jackson said on twitter, “The title is the data, the discussion is 1 sentence, and the acknowledgements are hilarious.”
Over at Tenure, She Wrote, there's a post by drmsscientist on how enjoyable it's been to be a first-year faculty member. As she points out, there's so much negative info out there about challenges for women in academia. (It does seem to be a bit of a theme for my Friday links!) But, as I've said in an earlier post, sometimes you need to ignore that and focus on the people who have done it (or are doing it).
That old post of mine that I just linked to (I'll link to it again!) was motivated by some dispiriting posts on ecolog. Sadly, there's been a bit of a renewal of that. This started after a post announcing Career-Life Balance supplements from NSF to support someone to keep a project moving along while a postdoc is on family leave. On twitter, there were lots of folks (myself included) saying that this could have really helped out X months/years ago. I think this is huge, and that it's great that NSF is doing this. Unfortunately, the initial post to ecolog led to a reply saying this is "institutionalized discrimination" against singles. I've been wanting to write a post in reply but haven't had time, so I'll link to this post by CackleofRad instead. In my opinion, the responses have been very different this time: compared to the Clara B. Jones kerfuffle that motivated my earlier post, they've been much more supportive of women and parents in science. A nice change!
And, finally, here's an article on why there are still so few women who are public intellectuals. It talks about women being more likely to decline requests to comment on something because it's outside their area (which made me cringe a bit, because I've done this) and also talks about the potential for blogs to be platforms for women scholars. (ht: Jacquelyn Gill via twitter)
Hoisted from the comments
Meg’s post earlier this week on system envy and experimental failures has a very nice comment thread, kicked off by this wonderful comment from artist and lab tech Nancy Lowe on how scientists and artists present their results in different ways. And I make fun of Meg’s study organism, so there’s that.🙂
In case you missed it, the comment thread on Britt Koskella’s old guest post on microcosm experiments is really great. Includes discussion of the different reasons one might do microcosm experiments (Britt and I do them for different reasons, for instance), why microcosm experiments seem to be more controversial in ecology than in evolution, what it means to “rig” an experiment and why that’s not always a bad thing, and more.