Because Friday is reserved for a special seasonal link. This week: the wisdom of the researcher crowd (or lack thereof), should we give up on psychology, end of a blogging era, fruitcake microbiology, and more.
Wide-ranging interview with Nobel Prize-winning economist Esther Duflo, a pioneer of randomized controlled field experiments in development economics. It touches on issues of broad interest to non-economists, such as whether a field’s progress is accelerated or decelerated by centrally-directed experiments conducted by large coordinated international teams. Related old post of mine, discussing this issue in an ecological context. One very interesting hybrid model is distributed experiments like NutNet: a centrally-coordinated experiment on which individuals are free to piggyback their own related studies. Indeed, it sounds like one of the big field experiments Esther Duflo and her colleagues founded has now grown into this hybrid model.
The golden age of economics blogging ends with the retirement of one of its greatest bloggers. For many years, Mark Thoma’s Economist’s View was the daily curated compendium of the huge and active economics blogosphere. I don’t know of any equivalent in any other field. Economist’s View filled a need and filled it well, and so built a huge audience. And that audience got pointed not just to the latest posts from Paul Krugman, but to the latest posts from many other economists who weren’t yet well-known and didn’t already have audiences of their own.
Gender differences in the use of positive valence words by medical and biomedical authors. Men use certain positive words (e.g., “novel”) in their titles and abstracts more often than women do, and the gender gap is largest in the highest-impact clinical journals.
The latest on researcher degrees of freedom. Previously, Erik Uhlmann and colleagues gave a bunch of research teams the same dataset and asked them to use it to answer the same question. The answers varied appreciably, an illustration of “researcher degrees of freedom”, but at least they all supported the same qualitative conclusion. Now Uhlmann and colleagues have gone one step further, giving a bunch of psychology research teams the same hypotheses to test, with each team free to design and conduct whatever experiments it wanted on the same pool of subjects. The answers varied radically: for any given hypothesis, about half the teams found statistically-significant effects supporting it, while the other half found statistically-significant effects contradicting it.
Which makes me inclined to listen to Tal Yarkoni’s argument that psychologists should give up on psychology as a quantitative science. If the sequence “hypothesis development → study design → statistical analysis → inference about the truth of the hypothesis” is broken at every step, then there’s not much point to massive preregistered replications of the last two steps. At least, not unless you can also fix the earlier steps. Tal Yarkoni argues that psychologists don’t really know what they’re even measuring, so they should quit trying to measure stuff and test hypotheses quantitatively, and go back to being a qualitative field. I’m not sure I’d go quite that far. But there might be an argument for psychologists in at least some subfields to (temporarily?) give up on doing anything besides descriptive research that involves as few abstract unmeasurable concepts as possible. Or not, I dunno? I only know what I read on blogs, so you probably shouldn’t take my opinion on this too seriously.
The Far Side is back, sort of. A new website will rerun old strips, but Gary Larson will occasionally slip in something new.
The microbiology (or lack thereof) of a 141-year-old fruitcake.