Also this week: Ammonite reviews are in, 0.61>0.61, Slate Star Codex update, modeling p-hacking, and more.
The talk of the social science online-o-sphere this week is this long meaty polemic from Alvaro de Menand. Alvaro was a participant in Replication Markets, a DARPA program to estimate the replicability of social science research. “Replication” here refers to getting a statistically significant result of the same sign as the original, using either the same data collection process and analysis on a different sample, or the same analysis on a similar but independent dataset. Participants in the replication market were volunteers who wagered on the replicability of 3000 studies from across the social sciences. A sample of those studies will actually be replicated, to see who was right. But in the meantime, previous prediction markets have been shown to predict replicability in the social sciences, and so in the linked post Alvaro treats the replication market odds as accurate estimates of replicability.
And he’s appalled by the implications, because the estimates are very low on average. The mean estimate is just a 54% replication probability. The distribution of estimates is bimodal, with one of the modes centered on 30%. And when you break the results down by field (Gordon et al. 2020), there are entire fields that do quite badly. Psychology, marketing, management, and criminology are the worst. (Economics does the best, with sociology not too far behind.)
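If you want a feel for what such a bimodal distribution of replication estimates might look like, here’s a toy simulation. Note that the mixture weights and Beta parameters below are my own illustrative guesses, chosen only so that the two modes and the overall mean roughly match the numbers quoted above; they are not the actual Replication Markets data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-component mixture of Beta distributions:
# a "low replicability" mode near 0.30 and a "high" mode near 0.80.
n = 3000
low = rng.beta(6, 14, size=n)    # component mean = 6/20 = 0.30
high = rng.beta(16, 4, size=n)   # component mean = 16/20 = 0.80
is_low = rng.random(n) < 0.5     # assumed 50/50 split between modes
estimates = np.where(is_low, low, high)

# Overall mean is roughly 0.55 with these made-up parameters,
# in the ballpark of the 54% figure quoted in the post.
print(round(estimates.mean(), 2))
```

Plotting a histogram of `estimates` would show the two humps; the point is just that a middling overall mean can hide a distribution with very few studies actually near the mean.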
The hypothesized reasons for this are pretty interesting (turns out you can learn a lot by reading a bunch of papers from a bunch of fields…). Alvaro argues that lack of replicability is mostly not down to lack of statistical power, except perhaps when it comes to interaction effects. Nor does he think the main problem is political hackery masquerading as real research, except in a few narrow subfields. And he has interesting discussions of the typical research practices in various fields. As sociologist Kieran Healy pointed out on Twitter, the replication market participants basically seem to have identified a methodological gradient across fields. The more your field relies on small-sample experiments on undergrads to test hypotheses that are pulled out of thin air, the less replicable your field’s work is estimated to be. Alvaro also has interesting discussions of variation within fields.
At the end, he has some proposals to address matters, some of them quite radical (e.g., earmarking 60% of US federal research funding for preregistered studies).
I’m curious whether all this applies to ecology. What do you think? How replicable are ecological studies in your view, and what do you think are the sources of non-replication? Take the short poll below! I’ll summarize the answers in a future post.
Also this week: lessons for science communication from 1918, syllabi vs. terms of service, and more.
I am seeking three new graduate students (M.Sc. and/or Ph.D.) to start sometime in 2021. Preferably Sept. 2021, but other start dates are possible. My lab is a good fit for students looking to do fundamental research in population and community ecology, particularly research combining theoretical modeling with experiments in model systems. Right now I’m doing a lot of work on spatial synchrony of population fluctuations, and on higher order interactions and species coexistence, but I have other irons in the fire too. See my lab website for more on what’s going on in my lab these days. And see here to learn more about my department and its graduate program.
2021 hopefully will be an exciting time to join my lab. My recent NSERC Discovery Grant renewal was very successful. An influx of new students will bring a lot of new ideas and energy into the lab. And Canada’s been doing a decent job of responding to the coronavirus pandemic, so hopefully by 2021 my lab will have safely returned to something resembling pre-pandemic normalcy.
If you’re interested in joining my lab, please have a look at my letter to prospective grad students. It talks about my approaches to science and mentoring, and includes some questions I ask of all prospective students. If it seems like my lab might be a good fit for you, send me an email (firstname.lastname@example.org) with an introductory note, transcripts (unofficial is fine), and a cv. Looking forward to hearing from you!
In light of recent discussions on Twitter and elsewhere about how journals, and individual scientists, should or shouldn’t respond to PubPeer comments (particularly those alleging data anomalies), it seems timely to re-up this old post. It’s from 2014, but I think it holds up pretty well. Verging on prescient, actually!
That old post is also a useful reminder that post-publication reviewers who allege or hint at misconduct sometimes are wrong (and sometimes, aren’t clearly right or clearly wrong). Wrong and debatable allegations can do real damage, especially when there’s no agreed formal procedure for handling them. Anecdotally, it seems to me that a lot of public discussion about how to discover and address cases of potential scientific misconduct is motivated by cases in which misconduct was eventually shown beyond any reasonable doubt. I think it’s worth also keeping other sorts of cases in mind. As discussed in the linked post, I don’t think there are any easy answers here.
Also this week: another retraction for Jonathan Pruitt, PubPeer vs. journals, and more.
Allison Barner and others have been recalling Tony Ives’ 2013 MacArthur Award lecture, which sadly was never written up as an Ecology paper:
Totally with Allison and Chris on this–that was the best talk I’ve ever seen. So much thought-provoking content, and so unconventionally and compellingly presented. Whether you were there or not, you might enjoy the post I wrote about it at the time.
Anecdotally, it seems like the tradition of MacArthur Award winners writing up their talks as papers has fallen by the wayside. I live in hope that it’ll be revived (come on, do it Jon!). Several MacArthur Award papers are classics–they influenced many young researchers and fully deserved to.
…so I might as well turn my office into a movie studio. 🙂
Some opening bids:
- 1859, the publication year of Darwin’s Origin of Species. Much of ecology can’t be understood unless you know about evolution. And the biogeography in the Origin is pretty much on the money.
- 1972 saw the publication of MacArthur’s Geographical Ecology, synthesizing much of his massively influential work. 1972 also was the year Bob May published his famous result on stability and complexity in model ecosystems.
- 1976. May’s Nature paper on chaos in the discrete time logistic equation. Charnov’s marginal value theorem of optimal foraging. Stearns’ review of life history theory, arguably the most influential review paper in ecological history.
- 1977. Holt 1977 (apparent competition). Grubb 1977 (regeneration niche). Grime 1977 (CSR hypothesis). Brown & Kodric-Brown 1977 (rescue effect). Connell & Slatyer 1977 (alternative modes of succession).
This should be a fun comment thread!
**Seriously, it was 1967.***
***No, YOU’RE wrong. It was 1967, dammit!****
****[puts fingers in ears, sings “Are You Experienced?” at the top of his lungs]