Also this week: the most embarrassing thing that can happen to a scholarly book author, reproducing the evolution of cooperation, and more.
Nobel Prize-winning physicist and Santa Fe Institute co-founder Murray Gell-Mann has passed away. He was 89.
Trying–and failing–to reproduce Robert Axelrod’s famous Prisoner’s Dilemma tournament, which found that tit-for-tat was the winning strategy. It’s not clear precisely why Axelrod’s result doesn’t reproduce, but there are many reasonable possibilities and there’s no reason to think Axelrod did anything unethical. Note that Axelrod’s main qualitative result–cooperation emerges in the long run–does reproduce. And the literature on evolution of cooperation has since demonstrated many results in ways that don’t depend on the correctness of Axelrod’s result. So even if Axelrod’s precise results don’t reproduce, it’s probably more a curious footnote than anything that would alter the current state of the field.
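For readers who haven't seen the setup: Axelrod's tournament pitted strategies against one another in a round-robin of iterated Prisoner's Dilemma matches, with tit-for-tat (cooperate first, then copy your opponent's previous move) famously coming out on top. Here's a minimal sketch of that kind of tournament; the particular strategies, payoff values, and round count below are illustrative choices, not Axelrod's actual tournament parameters:

```python
# Minimal sketch of an Axelrod-style iterated Prisoner's Dilemma
# round-robin tournament. Strategies, payoffs, and round count are
# illustrative, not Axelrod's actual setup.
from itertools import combinations_with_replacement

# Conventional payoff matrix: (my_payoff, their_payoff) keyed by (my_move, their_move)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play_match(strat_a, strat_b, rounds=200):
    """Play one iterated match and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    """Round-robin (including self-play) over a dict of name -> strategy."""
    totals = {name: 0 for name in strategies}
    for (na, sa), (nb, sb) in combinations_with_replacement(strategies.items(), 2):
        score_a, score_b = play_match(sa, sb, rounds)
        totals[na] += score_a
        totals[nb] += score_b
    return totals

strategies = {
    "tit_for_tat": tit_for_tat,
    "always_defect": always_defect,
    "always_cooperate": always_cooperate,
}
print(tournament(strategies))
```

Even with just these three strategies, tit-for-tat tops the tournament, which at least hints at why it's hard to beat: it never loses any single match by much, and it reaps the full benefit of cooperating with cooperators.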
Think back to your most embarrassing professional moment. Now imagine something much more embarrassing. You’re still not imagining something as embarrassing as what happened to Naomi Wolf, whose entire book was refuted during a live BBC radio interview, in which the presenter informed her that she’d misunderstood a historical legal term. Here’s some additional background information about the topic from the interviewer. I have to say, Naomi Wolf handled it well (which doesn’t somehow erase the original mistake, of course, but is all she can do at this point). (UPDATE: having learned more about Naomi Wolf’s career-long history of serious errors, I no longer think she handled this well. Wolf’s reaction to having her errors pointed out is the reaction of someone who doesn’t see any connection between the facts and whatever larger “truth” she thinks she’s conveying, and so doesn’t particularly care about the facts. Facts are just window dressing to Naomi Wolf.)
Preregistration is no panacea for the “replication crisis” in psychology, according to an unreviewed preprint. The first 27 studies preregistered with the field’s top journal all deviated from their preregistered plans. Twenty-six of them failed to fully disclose the deviations. And most of the deviations were not due to unforeseeable circumstances outside the authors’ control. I was interested to see which types of deviations from the preregistered plans were most common, and which were most commonly undisclosed. Deviations from the preregistered sample size, data exclusion criteria, and statistical model seem to be common. Deviations from the preregistered variables and direction of effect are much rarer–but are never disclosed when they occur. I’ll be curious to see if matters improve in future. Does handing out “badges” to preregistered studies that don’t, you know, actually do what they said they were going to do encourage better preregistration in future? Or is it counterproductive because it causes readers to trust those preregistered studies more than they should? Or both?
Is the declining population of rural areas in the US a statistical artifact? Or not? I may put this debate on my list of statistical vignettes for teaching intro biostats. Illustrates the importance of knowing exactly how the data were generated.
Should public intellectuals maintain a united front to achieve a political goal? This had me thinking back to Brian’s old post on whether scientists should maintain a united front to achieve political goals–even if it means suppressing data. (ht Marginal Revolution)
US public opinion on climate change and what should be done about it remains politically polarized–but it’s polarization around a shifting mean. Since at least the late ’90s, US public opinion has been moving steadily in the direction that climate change is a threat that requires action, save for a big reversal during the Great Recession.
“Godzilla, it seems, has been subject to a selective pressure 30 times greater than that of typical natural systems.” Wait, I thought Godzilla was a single individual, not a population of Godzillas.* Am I wrong? Because if I’m right, that link is describing growth or phenotypic plasticity rather than evolution. This actually bugs me a little. I’m all for using silly examples to illustrate real science; think for instance of John Lawton’s paper on the ecology of the Loch Ness monster. But even a silly example of evolution needs to be an example of evolution. Isn’t treating Godzilla’s changing appearance over the years as an example of evolution like treating Superman’s changing appearance over the years as an example of evolution?**
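For context on where a claim like “30 times greater selective pressure” comes from: evolutionary rates are often expressed in haldanes, the change in the natural log of a trait mean, in units of the pooled standard deviation of the ln-values, per generation. Here’s a minimal sketch of that calculation; the 50 m and 120 m “heights,” the standard deviation, and the generation count are all hypothetical numbers for illustration, not figures from the linked piece:

```python
import math

def rate_in_haldanes(mean_start, mean_end, sd_ln, generations):
    """Evolutionary rate in haldanes: the change in ln-transformed trait
    means, scaled by the pooled std dev of ln-values, per generation."""
    return (math.log(mean_end) - math.log(mean_start)) / (sd_ln * generations)

# Entirely hypothetical inputs, purely for illustration
h = rate_in_haldanes(mean_start=50.0, mean_end=120.0, sd_ln=0.1, generations=30)
print(f"{h:.3f} haldanes per generation")
```

Of course, this calculation only makes sense for a population of Godzillas with heritable variation in body size–which is exactly the quibble above.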
Settlers Journals of Catan. 🙂
*Yes, I am aware of this. Don’t @ me.
**Dynamic Ecology: come for the ecology, stay for the Superman references.