Also this week: citations as a game of “telephone”, a paper by Vincent Van Gogh (sort of), the false consensus effect, and much more. Lots of great stuff! Including a man who has either too much or too little time on his hands, depending on your point of view (no, not me).
Jeff Ollerton has a clever post on whether G. Aad – a lead author on the Large Hadron Collider papers, with an h-index over 20 – really exists (or is an elaborate pun). Closer to our own field, has anybody ever met Dr M.V. Van? Because his name very conveniently turns the author list of this paper on ESS (Evolutionarily Stable Strategies) into Vincent, Van, Goh. I knew Tom Vincent personally (he has sadly passed away), and Dr Goh has made numerous contributions in mathematical ecology, but somehow, despite being on the same campus for 8 years, I never managed to bump into Dr M.V. Van…🙂
With a bit of searching here, you can find “story behind the paper”-type commentaries on many classic ecology papers from before 1990. I linked to this in an update on a recent post, but wanted to highlight it again because most readers likely missed the update. (ht Ric Charnov, via the comments)
Interesting news piece in Science this week on ongoing attempts in social psychology to replicate previously-published experiments. I found this most interesting as an illustration of changing norms and culture clashes (see this old post). Some of the researchers whose work wasn’t replicated feel that they’re being singled out and persecuted. And it’s not hard to see why, even if you don’t agree. The replicators are by their own admission not focusing their replication efforts on randomly-chosen studies, but rather on prominent, easily-replicated studies on specific topics (apparently, just as post-publication review is mostly for the scientific 1%, so are formal replication attempts). The replicators seem not to have made much effort to involve the authors of the original studies (UPDATE #2: As a commenter points out, my phrasing here is unclear. The original authors got to review the replication proposals, but weren’t further involved). And one of the replicators wrote a blog post in which he described the failed replications as “an epic fail, as my 10 year old would say.” I agree with Daniel Kahneman’s comments, in which he calls for a “replication etiquette” that includes good-faith efforts to involve the original authors. I think we’re going to need such an etiquette to develop if these sorts of replication attempts are to come to be universally seen as a normal part of science. (Of course, just because you feel with some justification that your own work is being singled out for scrutiny doesn’t mean your work is right. And if other people decide not to fund your work or publish your papers because they think your results are wrong, well, there’s nothing unfair about that, and it’s not persecution. That’s just how science works.)
UPDATE: Via Small Pond Science, news that the Association of Tropical Biology wants people to try to reproduce classic tropical ecology experiments. Apparently their EiC, Emilio Bruna, was inspired by the reproducibility efforts in social psychology. Wow! Definitely worth keeping an eye on. FWIW, I suspect that Emilio’s not the only EiC who would be happy, indeed eager, to publish a paper trying to reproduce some classic ecological experiment–no matter how the results came out. Especially if the sample size was large.
I’m a bit late to this, sorry: evolutionary biologist and blogger Pleuni Pennings just got a faculty position at San Francisco State University (congratulations!). She’s excited; here’s why. Perhaps of particular interest to those of you who want a job involving lots of research, and who mistakenly think that such jobs exist only at big research universities.
Continuing the theme of “old posts from Pleuni Pennings that I failed to notice until just now,” here are her 11 things to look for when choosing a postdoc. One quibble: she suggests you should start trying to carve out your own “niche” a couple of years into your postdoc, whereas I’d say it’s never too early to start doing that. Even grad students can start doing it.
I need to look at Charles Goodnight’s blog more; he’s really good at thinking out loud about interesting scientific topics. Here’s a great post on the challenges of making Sewall Wright’s famous “adaptive landscape” metaphor more concrete. And I love the final line: “[I]f this essay sounds a bit confused, it is because I am also confused by this topic.” (even though the essay didn’t sound at all confused to me!) And here’s a slightly older one digging into serious technical issues in how to interpret models of multi-level selection, that manages to work in a reference to the movie Clueless. Great stuff, even if you don’t agree with all of it (I’m still on the fence on that…)
Following on from Brian’s link last week, here’s another short story from Nancy Kress that should appeal to many of you: “Explanations, Inc.” I imagine Jeff Houlahan reading this and going “See, this is why scientists should only worry about making predictions.” (just teasing, Jeff)🙂 (ht @gruntleme)
Ecoroulette calls out a really bad practice that lots of people are guilty of (including me, to my discredit): fast and loose citations. You know, where you cite something you read a long time ago, and you’re pretty sure you remember roughly what it said, so you don’t go back and check. Or where you cite something based on just having read the abstract, or because you recently read a paper that cited it for a similar claim. Or where you cite something for what you took to be the take-home message, even though that message was more something you read into the paper than something the authors intended. This is how mistakes (up to and including zombie ideas) start and get propagated, people! It’s like playing a game of “telephone” with the scientific literature.
Evolution of chess: the opening moves used by good chess players have become more diverse over time. This is in part an artifact of increasing sample sizes over time, but not entirely. (ht Marginal Revolution)
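If you’re wondering how sample size alone could inflate apparent diversity, here’s a toy simulation (entirely my own invention, not from the linked analysis; the opening frequencies are made up) showing that the number of distinct openings you observe climbs with the number of games sampled, even when the underlying repertoire never changes. Ecologists will recognize the species-accumulation logic.

```python
# Toy simulation: observed "opening diversity" rises with sample size
# even though the true distribution of openings is fixed.
# All numbers here are invented for illustration.
import random

random.seed(1)

openings = list(range(50))                       # 50 hypothetical openings
weights = [1 / (rank + 1) for rank in openings]  # Zipf-like popularity skew

for n_games in (100, 1_000, 10_000):
    sample = random.choices(openings, weights=weights, k=n_games)
    print(f"{n_games:>6} games -> {len(set(sample))} distinct openings observed")
```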
Simply Statistics with ten lessons from statistics for “big data” analysis. But really, it’s ten good rules to follow for any data analysis. Though some of them, like doing exploratory analysis on a random subset of your data, are easiest to implement if you have lots of data.
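In case it’s helpful, here’s a minimal sketch of what that “explore a random subset” rule looks like in practice (my example, not Simply Statistics’; the file name and sample size are hypothetical placeholders, and I’m assuming pandas):

```python
# Minimal sketch of "do exploratory analysis on a random subset":
# explore a fixed random sample, run the final analysis on everything.
# "big_dataset.csv" and the 10,000-row cap are placeholders.
import pandas as pd

df = pd.read_csv("big_dataset.csv")

# Fixed seed so the exploratory subset is reproducible.
subset = df.sample(n=min(10_000, len(df)), random_state=42)
print(subset.describe())

# ...explore and plot using `subset`; fit the final, pre-specified
# analysis on the full `df`.
```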
Data Colada on the false consensus effect (basically, we all think our own experiences are more typical than they really are). Also how the false consensus effect need not prevent the actual consensus from being accurate. A nice little metaphor for how science is supposed to work, at least ideally–individual investigators may be biased in all sorts of ways, but the consensus can still be accurate.
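To make that metaphor concrete, here’s a toy simulation (mine, not Data Colada’s) in which every individual investigator’s estimate is biased, yet the consensus lands near the truth. The admittedly strong assumption doing the work: the biases are idiosyncratic rather than shared, so they average out.

```python
# Toy simulation: individually biased estimates, accurate consensus.
# Key (strong) assumption: biases are idiosyncratic and average to zero.
import random

random.seed(0)
TRUE_VALUE = 50.0

estimates = []
for _ in range(200):
    bias = random.gauss(0, 10)                     # investigator's own slant
    estimate = random.gauss(TRUE_VALUE + bias, 5)  # noisy, biased estimate
    estimates.append(estimate)

consensus = sum(estimates) / len(estimates)
print(f"true value = {TRUE_VALUE}, consensus = {consensus:.1f}")
```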
Do college graduates earn more than non-graduates? Yes. Is it because they went to college? Maybe, at least in part–but most of the evidence you see cited on this topic is pretty weak.
This has nothing to do with ecology, but it has to do with baseball, so I’m linking to it. Even though it’s really old. Contrary to the urban legend among New York emergency room doctors, baseball bat injuries in New York do not jump after “Bat Day” at Yankee Stadium.
And finally, this is totally to do with ecology, since it involves a plant: an Englishman has just spent 13 years carving his hedge into a 45-meter-long dragon.🙂