Also this week: Ammonite reviews are in, 0.61>0.61, Slate Star Codex update, modeling p-hacking, and more.
Using big data and computer models to sway elections and predict riots–in the 1960s. Fascinating.
A little while back I linked to a preprint (now in press) showing that, in economics, the frequency of p-hacking (and/or selective reporting of p-values) varies depending on study methodology and is lowest in randomized controlled experiments. Uri Simonsohn was one of the reviewers; here’s his blog post explaining why he doesn’t buy the paper’s core conclusion. It’s a speculative but interesting discussion, arguing that only some forms of p-hacking will leave a signature in the distribution of published p-values. Scroll down for the authors’ replies, and Uri’s replies to their replies. I come down more on the side of the authors on all this, but without any great confidence. It might be helpful to have similar studies in other fields, so that we’d have a larger empirical evidence base to go on.
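If you want intuition for why some forms of p-hacking leave a distributional signature, here’s a minimal simulation (my own rough sketch, not the paper’s or Uri’s method) of one classic form, optional stopping: collecting data, peeking at the p-value, and adding more data until you hit significance or run out of budget. Published significant p-values from this process pile up just below .05, whereas honest tests of a true null give uniformly distributed p-values:

```python
import math
import random

random.seed(1)

def p_value(xs):
    """Two-sided z-test that the mean of unit-normal data is zero."""
    n = len(xs)
    z = sum(xs) / math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def honest(n=30):
    """One fixed-n study of a true null effect."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    return p_value(xs)

def optional_stopping(n_start=10, n_max=30, step=5):
    """P-hack by peeking: add data until p < .05 or n_max is reached."""
    xs = [random.gauss(0, 1) for _ in range(n_start)]
    while p_value(xs) >= 0.05 and len(xs) < n_max:
        xs.extend(random.gauss(0, 1) for _ in range(step))
    return p_value(xs)

trials = 20000
honest_ps = [honest() for _ in range(trials)]
hacked_ps = [optional_stopping() for _ in range(trials)]

def frac(ps, lo, hi):
    """Fraction of p-values landing in [lo, hi)."""
    return sum(lo <= p < hi for p in ps) / len(ps)

# Under the null, honest p-values are uniform, so roughly 1% land in
# each .01-wide bin. Optional stopping concentrates extra mass in the
# bin just below the .05 threshold.
print("bin [.04,.05): honest %.3f  hacked %.3f"
      % (frac(honest_ps, .04, .05), frac(hacked_ps, .04, .05)))
print("significant:   honest %.3f  hacked %.3f"
      % (frac(honest_ps, 0, .05), frac(hacked_ps, 0, .05)))
```

The point of the debate, as I read it, is that other p-hacking strategies (e.g. trying many specifications and reporting the best) can spread published p-values around rather than stacking them at the threshold, so the absence of a bump near .05 doesn’t rule out p-hacking.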
A contrarian, data-based case for opening college campuses as normal during the pandemic. From an academic sociologist who broadly shares the political leanings and values of the median US academic, and so meant seriously rather than as trolling. I don’t entirely agree with it, but I don’t think it can be dismissed out of hand, either.
The reviews are in: Ammonite, inspired by the life of paleontologist Mary Anning, is really good.
This seems bad:
Comedy Wildlife Photography Awards finalists. #17 is great.
Slate Star Codex situation update.
In a recent post, I showed that the worst serial scientific fraudsters usually lose their jobs, and many suffer other serious penalties. But what happens to more garden-variety scientific fraudsters? Turns out there’s a paper on that. Galbraith (2017) looked up the fates of 284 researchers who were found guilty of misconduct by the US ORI. 47% of them continued to do some sort of research after their penalties, and a substantial fraction even got post-misconduct federal research grants. Question: do you think this shows that penalties for “garden variety” scientific misconduct are too light? And if so, how severe do you think they should be? I’m not sure myself how I’d answer. Part of me would like to see automatic lifetime public funding bans, automatic lifetime publication bans, and job loss with no hope of ever being rehired by any research institution, even for a first offense. But then another part of me thinks, “wait, hang on, Jeremy: outside the context of scientific misconduct, you don’t think that most other misdeeds of comparable seriousness should carry draconian penalties for a first offense.” So I dunno; what do you think?