Also this week: the most embarrassing thing that can happen to a scholarly book author, reproducing the evolution of cooperation, and more.
Nobel Prize-winning physicist and Santa Fe Institute co-founder Murray Gell-Mann has passed away. He was 89.
Trying–and failing–to reproduce Robert Axelrod’s famous Prisoner’s Dilemma tournament, which found that tit-for-tat was the winning strategy. It’s not clear precisely why Axelrod’s result doesn’t reproduce, but there are many reasonable possibilities and there’s no reason to think Axelrod did anything unethical. Note that Axelrod’s main qualitative result–cooperation emerges in the long run–does reproduce. And the literature on evolution of cooperation has since demonstrated many results in ways that don’t depend on the correctness of Axelrod’s result. So even if Axelrod’s precise results don’t reproduce, it’s probably more a curious footnote than anything that would alter the current state of the field.
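For readers who haven't seen the mechanics of Axelrod-style tournaments: the flavor of the result is easy to sketch in a few lines of code. This is emphatically *not* a reproduction of Axelrod's tournament (his had a large, diverse field of submitted strategies, plus details like self-play); it's a minimal toy round-robin with a five-strategy field of my own choosing, just to show how "nice but retaliatory" strategies like tit-for-tat can come out on top while unconditional defection does not.

```python
import itertools

# Payoffs for the row player in a standard Prisoner's Dilemma:
# mutual cooperation 3, mutual defection 1, successful defection 5, being exploited 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_moves, their_moves):
    # Cooperate on the first move, then copy the opponent's last move.
    return their_moves[-1] if their_moves else "C"

def tit_for_two_tats(my_moves, their_moves):
    # More forgiving: defect only after two consecutive opponent defections.
    return "D" if their_moves[-2:] == ["D", "D"] else "C"

def grudger(my_moves, their_moves):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_moves else "C"

def always_cooperate(my_moves, their_moves):
    return "C"

def always_defect(my_moves, their_moves):
    return "D"

def play_match(s1, s2, rounds=200):
    # Iterated game: each strategy sees both players' full histories.
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return score1, score2

def round_robin(strategies, rounds=200):
    # Every strategy plays every other once; scores accumulate.
    totals = {name: 0 for name in strategies}
    for (n1, s1), (n2, s2) in itertools.combinations(strategies.items(), 2):
        sc1, sc2 = play_match(s1, s2, rounds)
        totals[n1] += sc1
        totals[n2] += sc2
    return totals

scores = round_robin({
    "tit_for_tat": tit_for_tat,
    "tit_for_two_tats": tit_for_two_tats,
    "grudger": grudger,
    "always_cooperate": always_cooperate,
    "always_defect": always_defect,
})
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

In this particular toy field, tit-for-tat ties grudger for first and always-defect finishes last; add or remove a couple of entrants and the ranking can change, which is itself a hint at why exact reproduction of a decades-old tournament is tricky.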
Think back to your most embarrassing professional moment. Now imagine something much more embarrassing. You’re still not imagining something as embarrassing as what happened to Naomi Wolf, whose entire book was refuted during a live BBC radio interview, in which the presenter informed her that she’d misunderstood a historical legal term. Here’s some additional background information about the topic from the interviewer. I have to say, Naomi Wolf handled it well (which doesn’t somehow erase the original mistake, of course, but is all she can do at this point). (UPDATE: having learned more about Naomi Wolf’s career-long history of serious errors, I no longer think she handled this well. Wolf’s reaction to having her errors pointed out is the reaction of someone who doesn’t see any connection between the facts and whatever larger “truth” she thinks she’s conveying, and so doesn’t particularly care about the facts. Facts are just window dressing to Naomi Wolf.)
Preregistration is no panacea for the “replication crisis” in psychology, according to an unreviewed preprint. The first 27 studies preregistered with the field’s top journal all deviated from their preregistered plans. Twenty-six of them failed to fully disclose the deviations. And most of the deviations were not because of unforeseeable circumstances out of the authors’ control. I was interested to see which types of deviations from the preregistered plans were most common, and which were most commonly undisclosed. Deviations from the preregistered sample size, data exclusion criteria, and statistical model seem to be common. Deviations from the preregistered variables and direction of effect are much rarer–but are never disclosed when they occur. I’ll be curious to see if matters improve in future. Does handing out “badges” to preregistered studies that don’t, you know, actually do what they said they were going to do encourage better preregistration in future? Or is it counterproductive because it causes readers to trust those preregistered studies more than they should? Or both?
Is the declining population of rural areas in the US a statistical artifact? Or not? I may put this debate on my list of statistical vignettes for teaching intro biostats. Illustrates the importance of knowing exactly how the data were generated.
Should public intellectuals maintain a united front to achieve a political goal? This had me thinking back to Brian’s old post on whether scientists should maintain a united front to achieve political goals–even if it means suppressing data. (ht Marginal Revolution)
US public opinion on climate change and what should be done about it remains politically polarized–but it’s polarization around a shifting mean. Since at least the late ’90s, US public opinion has been moving steadily in the direction that climate change is a threat that requires action, save for a big reversal during the Great Recession.
“Godzilla, it seems, has been subject to a selective pressure 30 times greater than that of typical natural systems.” Wait, I thought Godzilla was a single individual, not a population of Godzillas.* Am I wrong? Because if I’m right, that link is describing growth or phenotypic plasticity rather than evolution. This actually bugs me a little. I’m all for using silly examples to illustrate real science; think for instance of John Lawton’s paper on the ecology of the Loch Ness monster. But even a silly example of evolution needs to be an example of evolution. Isn’t treating Godzilla’s changing appearance over the years as an example of evolution like treating Superman’s changing appearance over the years as an example of evolution?**
Settlers Journals of Catan. 🙂
*Yes, I am aware of this. Don’t @ me.
**Dynamic Ecology: come for the ecology, stay for the Superman references.
“Is the declining population of rural areas in the US a statistical artifact?”
From the map in the post, it looks like >90% of “rural” areas that have become “urban” have been swept up in major urban areas like Seattle-Tacoma, Portland-Vancouver, SF, LA etc. Urban areas have to grow somewhere. They do so by subsuming adjacent rural areas. And yes, not surprisingly, the mining, logging and farming communities that once formed the backbone of “rural” America are emptying out – that is, rural America really is in decline. 🙂
Oddly enough I have done a paper (via a SESYNC working group) on EXACTLY this topic (Sparsely settled forests …) https://www.fs.fed.us/pnw/pubs/journals/pnw_2018_van_berkel001.pdf
Short answer – as Jim says the urban-facing edges are getting swallowed, but that is still a pretty small fraction of all rural areas, and the non-swallowed places are not changing (not becoming more or less dense).
So rural America is slowly shrinking at the edges, but it is not hollowing out.
I love blogging serendipity. I can just link to some random thing I found interesting, and then it turns out one of our readers–or one of my fellow bloggers–is an expert on it and can teach me about it! 🙂
As for using this in a stats class, I think it is a good example of how simple compare-the-means statistics can sometimes be misleading (especially when the categories are fungible) and a full spatiotemporal view can paint a much richer and more accurate picture.
It was an aha moment for us when we actually put the increasing population pixels on a map instead of just counting how many of each type of pixel there was.
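The "fungible categories" pitfall can be shown with a toy example (hypothetical numbers, not from the paper): if the densest "rural" pixels grow past the urban threshold and get reclassified, mean rural density falls even though no individual pixel lost population.

```python
# Hypothetical population densities along a transect running outward from a
# city edge, at two time points. Pixels at or above the threshold count as
# "urban"; the rest are "rural". Numbers are invented for illustration.
THRESHOLD = 50
t1 = [90, 80, 45, 40, 20, 10, 5, 5]
t2 = [95, 85, 55, 48, 20, 10, 5, 5]  # only the pixels nearest the city grew

def rural_mean(densities):
    # Mean density over pixels currently classified as rural.
    rural = [d for d in densities if d < THRESHOLD]
    return sum(rural) / len(rural)

# Every pixel held steady or grew...
assert all(after >= before for before, after in zip(t1, t2))
# ...yet mean "rural" density fell (~20.8 -> 17.6), because the densest
# rural pixel crossed the threshold and was reclassified as urban.
print(rural_mean(t1), rural_mean(t2))
```

Comparing the two means alone would suggest rural decline; mapping which pixels changed shows growth concentrated at the urban edge plus a category switch.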
That is odd! Very interesting paper, though! Going back to yesterday’s post, I wish I had time to read papers all day every day and still get everything else done!
Yes, it’s a good point early in your paper that there are recreational areas that are growing, offsetting other areas that are shrinking. Just the same, if you go by employment and income and you take out the university towns and the sprawl it seems unlikely that rural America is doing “just fine” as Mother Jones claims. Recreation jobs pay poorly. Mining and logging jobs pay much better 🙂
In light of this, I’m now less impressed with how Naomi Wolf handled having her entire book refuted live on the radio:
Seems like she handled it well because she doesn’t care about facts. She forms in advance an unshakeable belief in some larger truth, so that in her mind the facts always fit it.