Also this week: tell me again when to use fixed vs. random effects, why everything in higher education costs so much these days, why you should distrust conclusions based on many small studies, and more.
Science might save my daughter. Don’t kill it. This is such a powerful, important piece by Alan Townsend. Read it, with tissues handy. (ht: Terry McGlynn)
Nobel Prize-winning psychologist Dan Kahneman now thinks that the entire field of social priming research went off the rails and that he himself was far too quick to accept and publicize its conclusions. Here’s his mea culpa, and diagnosis of what went wrong. tl;dr: you should be skeptical of a conclusion to which lots of small studies point, even if they come from different labs. That is, the high number of studies together with their unanimity is a reason to be suspicious of their conclusions. That’s because unanimity of published underpowered studies points to some combination of a severe file-drawer problem and severe p-hacking. This may have some implications for stereotype threat, a form of social priming on which we’ve posted in the past. Click through to Kahneman’s comments even if you don’t care about social psychology, because the issue is much more general than that. In many fields, including ecology, lots of conventional wisdom is based on many low-powered studies that all seem to point in the same general direction. Kahneman also provides a model example of a scientist saying “I was wrong”. He’s basically retracting an entire chapter of his influential bestseller Thinking, Fast and Slow–not an easy thing to do.
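The arithmetic behind that suspicion is easy to check for yourself. Here's a minimal sketch (all the numbers are illustrative, not taken from Kahneman's post): if each study in a literature is underpowered, the chance that every one of them comes up significant without a file drawer is vanishingly small.

```python
from statistics import NormalDist

Z = NormalDist()

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sample test of effect size d."""
    z_crit = Z.inv_cdf(1 - alpha / 2)              # ~1.96 for alpha = 0.05
    noncentrality = d * (n_per_group / 2) ** 0.5   # expected z under the alternative
    return 1 - Z.cdf(z_crit - noncentrality)

# A small true effect (d = 0.3) studied with small samples (n = 20 per group):
power = approx_power(d=0.3, n_per_group=20)
print(f"per-study power: {power:.2f}")

# If 12 such independent studies were all published and all significant,
# the probability of that happening without selection is power**12:
n_studies = 12
print(f"P(all {n_studies} studies significant): {power ** n_studies:.1e}")
```

With power around 15-20%, a dozen unanimous significant results is astronomically unlikely unless non-significant results went in the file drawer or the significant ones were p-hacked into existence.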
Long but interesting and very accessible read on “cost disease”: why the cost of certain things grows much faster than inflation for long periods. US examples of such things include higher education (which is why I’m linking to this), health care, and public transportation infrastructure. And here’s the thing: obvious candidate explanations like “those things are rising in quality”, the Baumol effect, and the sorts of explanations favored by political partisans of any stripe, don’t really fit the data. No doubt one could quibble with the discussion of this or that particular case, but the overall picture made me stop and think. (ht @noahpinion)
Unlearning descriptive statistics. Cogent argument that your choice of summary statistic should depend on whether you’re planning to do statistical inference about population parameters, vs. just trying to summarize the sample in an easily interpretable way. (ht Small Pond Science)
Semi-relatedly, Margaret Kosmala on dipping her toes into political activism.
An econometrician on fixed vs. random effects. Apparently, econometricians teach their students that:
You should use random effects when your variable of interest is orthogonal to the error term; if there is any doubt and you think your variable of interest is not orthogonal to the error term, use fixed effects…Random effects should really only be used when the variable of interest is (as good as) randomly assigned.
Brian’s contrasting view on fixed vs. random effects is here. Discuss.
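A toy simulation shows what the quoted rule is guarding against. This is just an illustrative sketch, not code from either post; pooled OLS stands in for a random-effects-style estimator that ignores unit effects (the actual RE estimator is a GLS compromise, but it suffers the same bias when the regressor is correlated with the unit effects).

```python
import random
from collections import defaultdict

random.seed(1)
beta = 1.0
units, periods = 500, 5
ids, xs, ys = [], [], []
for i in range(units):
    a = random.gauss(0, 1)             # unobserved unit effect
    for t in range(periods):
        x = a + random.gauss(0, 1)     # x is NOT orthogonal to the unit effect
        y = beta * x + a + random.gauss(0, 1)
        ids.append(i); xs.append(x); ys.append(y)

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

# Pooled estimate ignoring unit effects: biased away from the true beta = 1
pooled = ols_slope(xs, ys)

# Fixed effects: demean within each unit, which sweeps the unit effects out
gx, gy = defaultdict(list), defaultdict(list)
for i, x, y in zip(ids, xs, ys):
    gx[i].append(x); gy[i].append(y)
xd, yd = [], []
for i in gx:
    mx, my = sum(gx[i]) / len(gx[i]), sum(gy[i]) / len(gy[i])
    xd += [x - mx for x in gx[i]]
    yd += [y - my for y in gy[i]]
within = ols_slope(xd, yd)

print(f"pooled (ignores unit effects): {pooled:.2f}")  # well above 1
print(f"within / fixed effects:        {within:.2f}")  # close to 1
```

When the regressor is orthogonal to the unit effects (delete the `a +` in the line defining `x`), both estimators recover the true slope, and random effects is then the more efficient choice; that's the econometricians' rule in a nutshell.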
I’m from rural Pennsylvania, so trust me when I tell you that this is the most rural Pennsylvania story ever. 🙂 What seems like a highly improbable confluence of events actually is a highly probable confluence of events once you condition on the fact that the confluence of events happened in rural Pennsylvania. I say this fondly, by the way.
And finally, I am going to stop taking writing advice from Brian and start taking it from the kid who made this poster:
(ht @kjhealy) 🙂