Also this week: history is pseudoreplicated, RIP social psychology, scientific short stories, Darwin vs. snow, Jeremy shares his Ghostbusters: Afterlife anecdote, and MOAR! Grab a coffee and settle in; lots of good stuff this week!
Joscha Legewie, first author of a widely-publicized study of the effect of police violence on the health of black infants, is asking for the paper to be retracted because of data coding errors that completely invalidate the findings. The coding errors were first pointed out by readers who looked closely at the openly-shared data. Mistakes happen in science (see also), and Joscha Legewie deserves a lot of praise for doing the right thing so quickly and openly once the errors were pointed out. This episode illustrates one value of data sharing, at least for high-profile papers that attract a lot of attention from other scientists and the general public. Although, on the other hand, it obviously would’ve been much better if the errors had been caught before the paper was published. Extensive research shows that retracted studies continue to be cited for many years (for instance). Case in point: Legewie’s tweet announcing the error has been retweeted much less often than the tweet announcing the original paper, especially after you account for all the tweets and retweets by others about the original paper. (And no, the difference isn’t going to go away as time passes.) Which is why pre-publication peer review remains so important, even though it can’t possibly catch all errors in all papers.
Very interesting piece on how archaeologists and historians are marshaling comparative data to test whether the “Axial Age” was a thing. From the perspective of an ecologist, the data might sound quite limited and crude. But all you can do is the best you can with what you’ve got. The approach described in the linked article certainly sounds like an improvement over previous approaches, though I know nothing beyond what’s in the article.
Sticking with the topic of “data and history”: you know all those cool papers showing that some long-ago historical variable statistically predicts some feature of modern life (e.g., historical genetic diversity predicts modern income)? They’re all very pseudoreplicated, as you can demonstrate by replacing either the dependent or the independent variable with spatially autocorrelated noise and finding that the statistical prediction remains just as significant (or even gets more significant!). Brian has an old post asking whether spatial autocorrelation is “friend or foe”; this is a case where it’s clearly the latter. Nice examples here for use in teaching.
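If you want to see the problem for yourself, here’s a minimal Python sketch (my own toy version, not the linked paper’s actual analysis): generate two spatially autocorrelated noise fields that have nothing to do with each other, regress one on the other while naively treating every grid cell as an independent observation, and watch the false positive rate blow up.

```python
import numpy as np
from scipy import ndimage, stats

def smooth_field(rng, n=40, sigma=6):
    # Spatially autocorrelated noise: white noise on an n x n grid,
    # blurred with a Gaussian kernel so nearby cells are similar.
    return ndimage.gaussian_filter(rng.normal(size=(n, n)), sigma).ravel()

def false_positive_rate(n_sims=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = smooth_field(rng)  # x and y are generated independently,
        y = smooth_field(rng)  # so any "relationship" is spurious
        # Naive OLS treats all 1600 grid cells as independent data points.
        if stats.linregress(x, y).pvalue < alpha:
            hits += 1
    return hits / n_sims

# At alpha = 0.05 this should be about 0.05 -- instead it's enormous.
print(false_positive_rate())
```

The smoother the fields (bigger `sigma`), the fewer effectively independent observations you really have, and the worse the naive p-values get.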
Check out this very interesting and amusing commentary from Matt Levine on green bonds. A green bond is a government bond that you pay a bit more for than you would for a regular ol’ government bond. In exchange, the government promises to use the money to fund green projects (e.g., renewable energy infrastructure) rather than for any ol’ governmental purpose. But there are some liquidity-related technical problems with green bonds, which the Danish government will solve by selling the bond and the promise to spend the money on green projects as two separate things. This makes sense, apparently. If you enjoyed trying to puzzle out the logic of rhino bonds, you will also enjoy puzzling out the logic of green bonds being sold with some assembly required, like Lego kits.
Sticking with “spending money to achieve green goals”, I wasn’t aware that it’s often illegal in the US to buy the rights to natural resources and then not exploit them (e.g., buying the right to drill for oil on a parcel of land for the purpose of leaving the oil in the ground).
So maybe instead of creatively spending money to achieve green goals, what about creative use of the legal system? Matt Levine comments on the New York Attorney General’s failed lawsuit against Exxon for purported securities fraud related to how Exxon accounts for carbon regulations that don’t yet exist but might in future. I confess this feels like the right outcome to me. I know you probably think Exxon does some bad stuff, and I agree. But it was always odd to think that Exxon’s shareholders were among the victims of that bad stuff.
This has long since gone from surprising to expected, I think: according to this as-yet-unreviewed preprint, yet another classic result from social psychology bites the dust in a massive pre-registered replication. The Data Colada folks just reported a pre-registered replication of another social psychology experiment that failed to find even a hint of the originally reported effect. And I think I missed this at the time, but early this year a big preregistered stereotype threat experiment failed to find any effect. Those are just three of many recent examples that could be given. For that reason, it’s not just me who now suspects that social psychology was wrong about, well, everything. Nature has a big news feature this week asking whether social psychologists can salvage anything of lasting value from the wreckage. Well, besides a cautionary tale about how entire subfields of science can go completely off the rails. That Nature piece quotes a couple of social psychologists who think that some priming effects will prove replicable in some subgroups of people under some conditions, but doesn’t note that a bunch of preregistered replications have already thrown cold water on that possibility (see also). As for the notion, also from the Nature piece, that we’ll find replicable priming effects if only we put a small sample size of people in fMRI machines while priming them, I’ll just leave this here. That Nature piece mentions that many fewer new social psychology studies are being done, as opposed to replications of old ones. Which sounds like the right choice to me. So now I’m curious how, and how fast, psychology textbooks and the content of undergrad psychology curricula are being updated, as an entire subfield starts to wither away.
Controlling for journal, subfield, and other variables, social science papers by men and women are cited equally often. But note that the linked paper doesn’t address the systemic forces that shape choice of subfield to work in, and choice of journal to submit to (not that it was intended to, of course). Other research addresses those systemic forces.
The most recent Nobel Prize in Economics went to development economists who pioneered the use of randomized controlled field experiments in that field. So, has this approach advanced the field or set it back? I was interested in this debate because it split some economists who usually agree with one another. Even if you don’t care about development economics, the issue here is a big one that comes up in many fields. Does progress come from low-risk, incremental work on small, tractable questions? Or does it come from risky moonshots on big, intractable questions? I think it’s very hard to answer that question in general. It’s easy to cite cases in which each approach has succeeded, and failed. But it’s hard to identify general principles that distinguish each approach’s successes and failures.
Ditching the SAT and ACT likely would reduce the fairness of US college and university admissions. Good thread from Susan Dynarski. Here’s our old related discussion in the context of graduate admissions.
My former honors student Geoff Osgood has started a very creative blog: he turns scientific papers into short stories. Check it out!
Sam Perrin asked British Ecological Society meeting attendees for their thoughts on Brexit.
I know nobody cares about this but me, but the trailer for the new Ghostbusters movie is full of shots of Drumheller and the surrounding badlands, just east of Calgary. My family stumbled across the filming back in the summer, on a day trip to the Royal Tyrrell Museum. The film crew wasn’t allowed to tell anyone what movie it was, but the car that they were filming tearing through the center of town was, uh, kind of a giveaway. I hope there’s a scene in WHIFS Flapjack House. And if the climax of this movie doesn’t involve eldritch forces bringing the World’s Largest Dinosaur to life, I’m going to be disappointed.
My son asked me the origin of the word “oops”, and Google led me to this informed speculation that it might be related to…[wait for it!]…equine disease ecology. I want Stephen Heard to read this and tell me if it sounds plausible enough to take seriously. Because it’s my offhand impression that the world is filled with speculation about word origins.
Using multivariate statistics to determine the difference between muffins and cupcakes. I wish I’d seen this two weeks ago, when I was teaching PCA vs. linear discriminant analysis. My chosen example was much less delicious. 🙂 (ht an excellent correspondent)
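For anyone who wants to recreate the comparison with less delicious data, here’s a toy numpy sketch (the “recipes” below are entirely made up, not the linked post’s data) of the PCA-vs.-LDA contrast: PCA finds the axis of maximum total variance, which may have nothing to do with your classes, while LDA finds the axis that best separates the classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-in for recipe data: two classes ("muffins" and "cupcakes")
# differ modestly in two ingredients, but a shared batch-size axis
# dominates the total variance.
n = 60
muffins = rng.normal(0, 3, size=(n, 1)) + rng.normal([1.0, 0.2, 0.5], 0.4, size=(n, 3))
cupcakes = rng.normal(0, 3, size=(n, 1)) + rng.normal([0.2, 1.0, 0.5], 0.4, size=(n, 3))
X = np.vstack([muffins, cupcakes])
y = np.array([0] * n + [1] * n)

# PC1: direction of maximum total variance (here, the batch-size axis).
Xc = X - X.mean(axis=0)
pc1_axis = np.linalg.svd(Xc, full_matrices=False)[2][0]
pc1 = Xc @ pc1_axis

# LD1 (two-class Fisher discriminant): w = S_w^{-1} (mu_1 - mu_0).
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
ld1 = X @ np.linalg.solve(Sw, mu1 - mu0)

def separation(scores):
    # standardized distance between class means along a 1-D projection
    a, b = scores[y == 0], scores[y == 1]
    return abs(a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)

print(f"class separation along PC1: {separation(pc1):.2f}")
print(f"class separation along LD1: {separation(ld1):.2f}")
```

The classes barely separate along PC1 (the batch-size axis swamps everything), but separate cleanly along LD1, which is the whole muffin-vs.-cupcake point.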
Sticking with the same topic: the phylogeny of baked goods. The comment thread is [chef’s kiss gif]. 🙂 (ht the same excellent correspondent)
And finally, Pachelbel’s canon on train horns. 🙂