Also this week: a blogging anniversary, betting on replication, Shakespeare vs. dead animals, Brian and Jeremy have a link fight, and more. Also terrible philosophy puns.
From Brian (!):
Does the nationality of the researchers you coauthor papers with affect the impact factor of the journal you get into? Apparently yes: see this piece from Emilio Bruna.
In the always entertaining and provoking Ecological Rants blog, there is a quote from Thomas Piketty’s book on income inequality (which is setting the economic world on fire with its careful empirical compilation of historical data). The quote is pretty harsh about economists’ obsession with little toy mathematical models that don’t inform us about the real world. Krebs argues this critique applies to ecology as well (and cites no less than Joel Cohen, one of the great theoretical ecologists, who regularly chides ecologists for their physics envy). While I am an advocate for more math education in biology, I have to confess a certain sympathy with the quote. We’re so busy obsessing over equilibrium math models and small-scale manipulative experiments that we’re missing a lot of the story sitting right in front of us in the massive amounts of data that have been and could be assembled. (There’s a controversial statement to make you sit up on a Friday.)
Following up on my post about NSF’s declining acceptance rates, there is a well-argued blog post by coastalpathogens suggesting we should just revert to a lottery system (one of my suggestions, though not one that received a lot of votes in the poll).
The Chronicle of Higher Education had an article on increasing scrutiny of some NSF grants by Congressional Republicans (subscription required).
Link war! Brian, I’ll see your Thomas Piketty quote, and raise you Paul Krugman. Krugman has long advocated deliberately simplified toy models as essential for explaining important real-world data, making predictions, and guiding policy. See this wonderful essay on “accidental theorists” (and why it’s better to be a non-accidental theorist), this equally wonderful essay on how badly both economists and evolutionary biologists go wrong when they ignore “simple” mathematical models, and this one in which Krugman explains his favorite toy model and how it let him make several non-obvious and very successful predictions about the Great Recession. Oh, and as important as Piketty’s empirical work is, it’s worth noting that even very smart and sympathetic readers have had a hard time figuring out what his implicit model is. If your model’s not explicit (and if you don’t care much for doing experiments), then your big data might as well be pig data. While I’m at it, I’ll raise you R. A. Fisher too.*
Statistician Andrew Gelman has been blogging for 10 years. I was interested to read his comments that there used to be more back-and-forth among blogs 10 years ago, and that these days that only happens in economics. I share the impression that economics is the only field that has a blogosphere. I also share Andrew’s view that Twitter is no substitute for blogs. Twitter has its uses. But “in depth conversation and open-ended exploration of ideas” is not one of them.
Speaking of Andrew Gelman, he passes on a link to a new preprint on the distribution of 50,000 published p-values in three top economics journals from 2005–2011. I’ve skimmed it; it seems like a pretty careful study, which avoids at least some of the problems of similar studies I’ve linked to in the past. The distribution has an obvious trough for marginally non-significant p-values, and an obvious bump for just barely-significant p-values. The authors argue that’s evidence not just of publication bias, but of p-hacking (e.g., choosing whichever of a set of alternative plausible model specifications gives you a significant result). They estimate that 10–20% of marginally non-significant tests are p-hacked into significance. The shape of the distribution is invariant to all sorts of factors: the average age of the authors, whether any of the authors were very senior, whether a research assistant was involved in the research, whether the result was a “main” result, whether the authors were testing a theoretical model, whether the data and/or code were publicly available, whether the data came from lab experiments, and more.
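The specification-searching mechanism the authors describe is easy to simulate, if you want intuition for why it produces a bump just below 0.05. The sketch below is purely illustrative and is not the preprint’s actual analysis: it draws many “studies” with no true effect, lets each one try several specifications (crudely modeled as independent datasets; real specifications would be correlated, which weakens but doesn’t eliminate the effect), and reports any significant one.

```python
# Illustrative p-hacking simulation (NOT the preprint's method).
# Many "studies" with zero true effect each try several specifications
# and report a significant result whenever any specification clears p < 0.05.
import math
import random

random.seed(42)

def one_sample_p(data):
    """Two-sided p-value for a one-sample t-test against mean 0,
    using a normal approximation (fine for n = 50)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    t = mean / math.sqrt(var / n)
    return math.erfc(abs(t) / math.sqrt(2))  # 2 * (1 - Phi(|t|))

def run_study(n=50, n_specs=5):
    """Return (honest_p, hacked_p) for one simulated study."""
    ps = []
    for _ in range(n_specs):
        data = [random.gauss(0, 1) for _ in range(n)]  # no true effect
        ps.append(one_sample_p(data))
    honest = ps[0]                         # report the first (pre-chosen) spec
    sig = [p for p in ps if p < 0.05]
    hacked = min(sig) if sig else ps[0]    # report any spec that "worked"
    return honest, hacked

results = [run_study() for _ in range(5000)]
honest_rate = sum(h < 0.05 for h, _ in results) / len(results)
hacked_rate = sum(p < 0.05 for _, p in results) / len(results)
print(f"significant under honest reporting: {honest_rate:.3f}")
print(f"significant with spec-searching:    {hacked_rate:.3f}")
```

With five tries per study, the honest rate stays near the nominal 5% while the spec-searching rate climbs toward 1 − 0.95⁵ ≈ 23%, and the “rescued” p-values pile up just under the 0.05 threshold, exactly where the preprint finds its bump.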
One more from Gelman: You can now bet real money on whether a bunch of replication attempts in psychology will pan out. I think it would be really fun, and very useful, to have something like this in ecology.
Most tenure-track jobs do not have 300+ applicants (and even the few that do tend to have an unusually-high proportion of obviously-uncompetitive applicants).
Speaking of tenure-track job searches: soil ecologist Thea Whitman with a long post on what it was like to interview (successfully!) for a tenure-track job. Go read it, it’s full of win.
*I’m guessing that Brian saw this response from me coming from 10 miles away, but I figure he (and y’all) would have been disappointed if I didn’t actually follow through and provide it. My clockwork reliability (or, if you prefer, boring predictability) is one of my most endearing features. That, and my refusal to take second place to any ecologist when it comes to making half-baked analogies with economics. [looks over at Meg, sees her rolling her eyes, coughs awkwardly] 🙂 In seriousness, I actually do see what Brian means and probably don’t disagree with him that much here. And for what it’s worth, I think current trends in ecology are mostly running in the direction Brian would like to see them run (e.g., away from MacArthur-style toy models of a single process).