Also this week: how to generalize from case studies, how theoretical particle physics is like social psychology, and more. Not a ton of links this week, but the average thought-provokingness is very high.
Is the dilution effect a zombie idea? Or if not a zombie idea, an idea that’s been overconfidently generalized way beyond the few cases for which we have good evidence? And wherever you stand on that, shouldn’t you be worried that there’s such vociferous disagreement about what the empirical evidence on the dilution effect means? Related: our old post on questioning the value of biodiversity. See also this book chapter (UPDATE: link fixed) showing that authors of experimental studies of the effects of habitat fragmentation on biodiversity systematically spin their abstracts so as to emphasize the results the authors hoped to find rather than what they actually found. See also Brian’s old post on why scientists should not be expected to present a united front when it comes to evaluating the empirical evidence relevant to political issues.
Just been alerted that some of the ecology graduate students at UC Davis put out an interesting quarterly newsletter. Well, sort of a newsletter–it’s actually mostly not Davis-specific, so maybe “mini-magazine” would be a better term? Both the content and presentation are pretty impressive; a lot of work clearly goes into it. It covers a range of topics in ecology and academia, overlapping a lot with the sort of stuff we cover here. Indeed, there’s a part of me that wonders if it wouldn’t work as well or better as a blog that allowed comments rather than as a typeset pdf that doesn’t. But if you’ve ever thought “I wish there were something like Dynamic Ecology, but written by grad students, and with nature photography instead of a comments section”, well, here it is. 🙂
Statistically inferring the locations of lost Bronze Age cities from a gravity model of trading records. The basic idea is to relate the amount of trade between cities to their geographic distances from one another, and then infer where the lost cities must have been given how much they traded with known cities. The inferences match (and put confidence intervals on) what historians have inferred using qualitative methods. And in cases where historians disagree, the statistical inference favors one historical conjecture over the alternatives. I’ve only read the abstract, but it sounds very cool. Good fodder for an intermediate or advanced stats course. (ht @noahpinion)
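To make the basic idea concrete, here’s a minimal sketch of gravity-model-style inference. Everything here is made up for illustration (the city names, coordinates, trade volumes, the inverse-square distance decay, and the brute-force grid search); the paper’s actual model and estimation method will differ.

```python
import math

# Hypothetical known cities: (x, y) coordinates, plus a "lost" city whose
# location we want to recover. All names and numbers are invented.
known = {
    "A": (0.0, 0.0),
    "B": (10.0, 0.0),
    "C": (0.0, 8.0),
}
true_lost = (4.0, 3.0)  # ground truth, used only to generate fake trade data


def gravity_trade(p, q, alpha=2.0, k=100.0):
    """Toy gravity model: trade volume falls off with distance^alpha."""
    d = math.dist(p, q)
    return k / d**alpha if d > 0 else float("inf")


# "Observed" trade between the lost city and each known city
observed = {name: gravity_trade(true_lost, xy) for name, xy in known.items()}


def sse(candidate):
    """Squared error between predicted and observed trade volumes."""
    return sum(
        (gravity_trade(candidate, xy) - observed[name]) ** 2
        for name, xy in known.items()
    )


# Brute-force grid search over candidate locations (a crude stand-in for
# the paper's statistical estimation, which also yields confidence regions)
best = min(
    ((x / 10, y / 10) for x in range(0, 101) for y in range(0, 81)),
    key=sse,
)
print(best)  # lands on the true location (4.0, 3.0)
```

With three or more known trading partners, the distances implied by the trade volumes pin down the lost city’s location much as trilateration does; the statistical version replaces this exact-fit toy with noisy data and confidence intervals.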
A case study of faculty hiring committees at one US research university finds that search committees tended to assume that women whose partners held academic or high-status jobs were not “moveable”, and so tended not to offer them positions. Committees rarely worried about the relationship status of male candidates and assumed they were moveable. Note: I’ve only read the abstract. I have little relevant experience on this, so can’t really comment regarding, e.g., how typical the results are. I’d be particularly interested to hear comments on this from folks who’ve sat on search committees, or who have direct experience with spousal hires, or with moving from one faculty position to another.
How can we use case studies of the history of science to learn how science in general is done or should be done? It’s not obvious that we can, because “enumerative induction”–just tallying up how many cases support one model of scientific practice vs. another–seems like a non-starter. But how else are you supposed to generalize from individual cases to a range of cases? Click through for the very interesting answer. I’m thinking about how the same answer might apply to generalization in ecology.
Sticking with interesting, accessible philosophy of science: how (a distorted version of early) Karl Popper killed particle physics. Or, why “researcher degrees of freedom” is a problem for theoretical physicists, not just social psychologists. That your theory is consistent with existing data and makes a testable prediction does not make it worth testing. Great post, accessible to non-physicists. Worth thinking about if/how it applies to ecology (I’m not sure that it does, but I’m mulling it over). Some choice lines, to give you the flavor and encourage you to click through:
If the only argument that speaks for your idea is that it’s compatible with present data and makes a testable prediction, that’s not enough. My idea that Trump will get shot is totally compatible with all we presently know. And it does make a testable prediction. But it will not enter the annals of science, and why is that? Because you can effortlessly produce some million similar prophecies…
…All you have to do then is twiddle the details so that your predictions are just about to become measurable in the next, say, 5 years. And if the predictions don’t work out, you’ll fiddle again.
There are so many of these made-up theories now that the chances any one of them is correct are basically zero…The quality criteria are incredibly low, getting lower by the day. It’s a race to the bottom…
This overproduction of worthless predictions is the theoreticians’ version of p-value hacking. To get away with it, you just never tell anyone how many models you tried that didn’t work as desired.
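The p-hacking analogy in that last excerpt is easy to see in a toy simulation. This sketch (my illustration, not from the linked post) tests many pure-noise “models” against a 5%-style false-positive threshold: any single try rarely “works”, but with 100 tries in the drawer, something almost always does.

```python
import random

random.seed(0)


def one_try(n=30, threshold=0.36):
    """One pure-noise 'model': does the mean of n standard-normal draws
    drift past a cutoff? threshold ≈ 1.96/sqrt(30), i.e. roughly a 5%
    false-positive rate when there is no real effect."""
    sample_mean = sum(random.gauss(0, 1) for _ in range(n)) / n
    return abs(sample_mean) > threshold


# A single try "succeeds" (spuriously) about 5% of the time...
single_rate = sum(one_try() for _ in range(2000)) / 2000

# ...but the chance that at least one of 100 tries succeeds is near 1,
# which is what you get if you only ever report the try that worked.
batch_rate = sum(any(one_try() for _ in range(100)) for _ in range(200)) / 200

print(round(single_rate, 2), round(batch_rate, 2))
```

The point of the analogy: the problem isn’t any single prediction, it’s the unreported denominator of tries.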