Friday links: does Gaad exist, stories behind classic ecology papers, evolution of chess, and more (UPDATEDx2)

Also this week: citations as a game of “telephone”, a paper by Vincent Van Gogh (sort of), the false consensus effect, and much more. Lots of great stuff this week! Including a man who has either too much or too little time on his hands depending on your point of view (no, not me).

From Brian:

Jeff Ollerton has a clever post on whether G. Aad – a lead author on Large Hadron Collider papers with an h-index over 20 – really exists (or is a clever pun). Closer to our own field, has anybody ever met Dr. M. V. Van? Because his name very conveniently makes the author line of this paper on ESS (evolutionarily stable strategies) read “Vincent, Van, Goh.” I happened to know Tom Vincent personally (he has sadly passed away), and Dr. Goh has made numerous contributions in mathematical ecology, but somehow, despite being on the same campus for 8 years, I never managed to bump into Dr. M. V. Van…🙂

From Jeremy:

With a bit of searching here, you can find “story behind the paper”-type commentaries on many classic ecology papers from before 1990. I linked to this in an update on a recent post, but wanted to highlight it again because most readers likely missed the update. (ht Ric Charnov, via the comments)

Interesting news piece in Science this week on ongoing attempts in social psychology to replicate previously-published experiments. I found this most interesting as an illustration of changing norms and culture clashes (see this old post). Some of the researchers whose work wasn’t replicated feel that they’re being singled out and persecuted. And it’s not hard to see why, even if you don’t agree. The replicators are by their own admission not focusing their replication efforts on randomly-chosen studies, but rather on prominent, easily-replicated studies on specific topics (apparently, just as post-publication review is mostly for the scientific 1%, so are formal replication attempts). The replicators seem not to have made much effort to involve the authors of the original studies (UPDATE #2: As a commenter points out, my phrasing here is unclear. The original authors got to review the replication proposals, but weren’t further involved). And one of the replicators wrote a blog post in which he described the failed replications as “an epic fail as my 10 year old would say.” I agree with Daniel Kahneman’s comments in which he calls for a “replication etiquette” that includes good-faith efforts to involve the original authors. I think we’re going to need such an etiquette to develop if these sorts of replication attempts are to come to be universally seen as a normal part of science. (Of course, just because you feel with some justification that your own work is being singled out for scrutiny doesn’t mean your work is right. And if other people decide not to fund your work or publish your papers because they think your results are wrong, well, there’s nothing unfair about that, and it’s not persecution. That’s just the way science works.)

UPDATE: Via Small Pond Science, news that the Association for Tropical Biology and Conservation wants people to try to reproduce classic tropical ecology experiments. Apparently their EiC, Emilio Bruna, was inspired by the reproducibility efforts in social psychology. Wow! Definitely worth keeping an eye on. FWIW, I suspect that Emilio’s not the only EiC who would be happy, indeed eager, to publish a paper trying to reproduce some classic ecological experiment–no matter how the results came out. Especially if the sample size were large.

I’m a bit late to this, sorry: evolutionary biologist and blogger Pleuni Pennings just got a faculty position at San Francisco State University (congratulations!). She’s excited; here’s why. Perhaps of particular interest to those of you who want a job involving lots of research, and who mistakenly think that such jobs exist only at big research universities.

Continuing the theme of “old posts from Pleuni Pennings that I failed to notice until just now,” here are her 11 things to look for when choosing a postdoc. One quibble: she suggests you should start trying to carve out your own “niche” a couple of years into your postdoc, whereas I’d say it’s never too early to start doing that. Even grad students can.

I need to look at Charles Goodnight’s blog more; he’s really good at thinking out loud about interesting scientific topics. Here’s a great post on the challenges of making Sewall Wright’s famous “adaptive landscape” metaphor more concrete. And I love the final line: “[I]f this essay sounds a bit confused, it is because I am also confused by this topic.” (even though the essay didn’t sound at all confused to me!) And here’s a slightly older one digging into serious technical issues in how to interpret models of multi-level selection, that manages to work in a reference to the movie Clueless. Great stuff, even if you don’t agree with all of it (I’m still on the fence about that…)

Following on from Brian’s link last week, here’s another short story from Nancy Kress that should appeal to many of you: “Explanations, Inc.” I imagine Jeff Houlahan reading this and going “See, this is why scientists should only worry about making predictions.” (just teasing, Jeff)🙂 (ht @gruntleme)

Ecoroulette calls out a really bad practice that lots of people are guilty of (including me, to my discredit): fast and loose citations. You know, where you cite something you read a long time ago, but you’re pretty sure you remember roughly what it said so you don’t go back and check. Or where you cite something based on just having read the abstract, or because you recently read a paper that cited it for a similar claim. Or where you cite something for what you took to be the take home message, even though that message was more something you read into the paper rather than something the authors intended. This is how mistakes (up to and including zombie ideas) start and get propagated, people! It’s like playing a game of “telephone” with the scientific literature.

Evolution of chess: the opening moves used by good chess players have become more diverse over time. This is in part an artifact of increasing sample sizes over time (the more recorded games, the more distinct openings you’ll observe, even if players’ habits never changed), but not entirely. (ht Marginal Revolution)
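To see the sample-size artifact concretely, here’s a minimal sketch (hypothetical opening frequencies of my own invention, not the linked analysis’s actual method): even when the true distribution of openings never changes, the number of distinct openings you observe grows with the number of recorded games, which is why you’d want to rarefy (subsample each era down to a common number of games) before comparing diversity across time.

```python
import random

random.seed(42)

# Hypothetical opening pool with a FIXED frequency distribution:
# four popular openings plus a long tail of rare ones.
openings = ["e4", "d4", "Nf3", "c4"] + [f"rare{i}" for i in range(50)]
weights = [40, 30, 15, 10] + [0.2] * 50

def observed_richness(n_games):
    """Distinct openings seen in a random sample of n_games."""
    sample = random.choices(openings, weights=weights, k=n_games)
    return len(set(sample))

# The true distribution never changes, yet observed diversity climbs
# with sample size -- the artifact the chess analysis has to control for.
for n in (100, 1000, 10000):
    print(f"{n:>6} games -> {observed_richness(n)} distinct openings")

def rarefied_richness(games, k, reps=200):
    """Average richness after subsampling down to k games (rarefaction),
    so eras with more recorded games can't look diverse by sampling alone."""
    return sum(len(set(random.sample(games, k))) for _ in range(reps)) / reps
```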

Simply Statistics with ten lessons from statistics for “big data” analysis. But really, it’s ten good rules to follow for any data analysis. Though some of them, like doing exploratory analysis on a random subset of your data, are easiest to implement if you have lots of data.
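For what it’s worth, here’s what that “explore a random subset” lesson looks like as a minimal pandas sketch (the file and column names are made up for illustration):

```python
import pandas as pd

# Hypothetical dataset -- substitute your own file and columns.
df = pd.read_csv("big_dataset.csv")

# Do the exploratory plotting and summarizing on a random 1% subset:
# fast to iterate on, and reproducible thanks to the fixed seed.
subset = df.sample(frac=0.01, random_state=1)
print(subset.describe())
subset.hist(column="measurement")  # hypothetical column name

# Then fit your final models to the full dataset, not the subset.
```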

Data Colada on the false consensus effect (basically, we all think our own experiences are more typical than they really are). Also how the false consensus effect need not prevent the actual consensus from being accurate. A nice little metaphor for how science is supposed to work, at least ideally–individual investigators may be biased in all sorts of ways, but the consensus can still be accurate.
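Here’s a toy simulation of that last point (my illustration, not Data Colada’s; it leans on the strong assumption that investigators’ biases are independent with mean zero):

```python
import random

random.seed(0)
truth = 10.0

# Each investigator has a persistent idiosyncratic bias, plus ordinary
# measurement noise. Crucially, the biases are independent across
# investigators and center on zero -- there is no shared, field-wide bias.
estimates = [truth + random.gauss(0, 2) + random.gauss(0, 1)
             for _ in range(500)]

consensus = sum(estimates) / len(estimates)
print(f"truth = {truth}, consensus estimate = {consensus:.2f}")
# Individual estimates range widely, but the consensus lands near the
# truth. If the biases were correlated (shared), this would break down.
```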

Do college graduates earn more than non-graduates? Yes. Is it because they went to college? Maybe, at least in part–but most of the evidence you see cited on this topic is pretty weak.

This has nothing to do with ecology, but it has to do with baseball, so I’m linking to it. Even though it’s really old. Contrary to the urban legend among New York emergency room doctors, baseball bat injuries in New York do not jump after “Bat Day” at Yankee Stadium.

And finally, this is totally to do with ecology, since it involves a plant: an Englishman has just spent 13 years carving his hedge into a 45-meter-long dragon.🙂

14 thoughts on “Friday links: does Gaad exist, stories behind classic ecology papers, evolution of chess, and more (UPDATEDx2)”

  1. I have a quick clarification about the replication efforts in social psychology. The proposals for the study were reviewed by the original authors whenever possible (e.g., original author needed to be alive) and 1 other reviewer.

    The whole issue is open access, and the Editorial describes the process.

    Some of the controversy relates to how much input original authors should have once the data were collected and analyzed. This is a conversation worth having, given some of the reactions to the effort. I think there are reasonable arguments on all sides about this procedure. Other fields might want to contact the Editors for their insights about future efforts. I think norms and procedures for these efforts are evolving, so they could have good insights.

    • “The proposals for the study were reviewed by the original authors whenever possible (e.g., original author needed to be alive) and 1 other reviewer.”

      Thanks, my phrasing on this isn’t great, will update the post.

      Agree that how much input the original authors should have once the data are collected and analyzed is indeed a key issue. My own view is that the appropriate time for their input is at the study design stage, where they should have substantial opportunity for input, though I’m unsure how best to do that. But after the study and analysis are pre-registered, I’m not sure that there’s the same need for input of the original authors. Indeed, part of the point of involving them at the design stage (hopefully, but not necessarily, to the point where they sign off on the pre-registered study design) would be to prevent or minimize the scope for them to complain, demand alternative analyses, etc. after the data are in.

      Yes, I’m sure it would be a good idea for editors in other fields who were thinking of encouraging similar efforts to draw on the experiences of social psychologists with this new way of working.

  2. Thanks for the link to my post on replicating classic experiments, Jeremy. I should note that it’s not an official ATBC activity yet – just an idea I floated as EiC to see what people thought. I only wish people would add to the list of “classic experiments”! Maybe your readers could chime in.

    • You’re welcome, Emilio.

      Re: adding to the list of classic experiments in tropical ecology, I’m not the best person to ask, obviously, not being a tropical ecologist. I do wonder if/how you count similar experiments as “replications”. I’m thinking back to Jeff and Angela’s recent guest post on “zombie ideas” about species interactions in the tropics here. I don’t know if there’s any one “classic” experiment on, say, strength of herbivory or plant defenses in tropical vs. temperate systems that anyone’s ever tried to replicate. But whether there is or not, there have been enough similar experiments on this topic for people to do meta-analyses of the results. So that seems like a case where we already have a lot of “replication” and don’t need more. So I guess I’d say that prime candidates for this sort of effort are classic experiments that are fairly unique–not many sufficiently-similar experiments have ever been conducted. Does that make sense?

      Or maybe what you want to do is not try to replicate classic experiments as closely as possible (but with a bigger sample size, presumably), but rather repeat them at as many sites as possible so as to get a sense of among-site variability in the results? NutNet is the model I’m thinking of here. Maybe what you should look for is questions in tropical ecology that are ripe for attack via a NutNet-type approach? A simple, cheap version of a classic experiment, repeated at as many different sites around the world as one can manage?

      • Similar? Nope, not good enough. What I meant was true replication – the same design, the same species (ok, you can up sample size if you like) – with the goal of reproducing the original results. This is the standard in biomedical and other lab sciences and what the psychologists are attempting. The difference is that I have no expectation that we can actually reproduce those original results, since the locations and conditions will differ, and this is where the learning will come in. In some ways this is a bit like NutNet and other distributed experiments, and some of the large distributed experiments like NutNet do include tropical locations. BTW I don’t think it’s a ‘tropical thing’ – the questions are the same, the theory is the same – I just edit a tropical journal, so that’s why I pitched it that way. I’d be glad to see this kind of replication/reproduction attempted broadly.

      • Interesting – but I think the comparison of NutNet-type experiments with the goals of projects like ManyLabs confounds their objectives. One is an attempt to reproduce a previous result to determine if it is ‘real’. The other is an experiment on a huge scale, and variation is an expected outcome from which one gains insights. I think they’re different beasts.

  3. Good-looking, varied bunch of links as usual, Jeremy – thanks.

    I think most of us are guilty of some degree of citation fuzziness or excess. I favor a sort of ruthless paring down to only those papers which are definitely germane to the principal thesis/theses of a paper. Otherwise you lead folks on a wild goose chase if they are trying to actually learn about the topic at hand. Countless are the times I’ve tracked down some reference, only to find that it doesn’t really support the argument being made by the citing paper.

    It would also be interesting to explore the relationship between citation excess and pseudo-consensi. I think there’s a relationship there, but it wouldn’t be an easy topic to tackle, methodologically.

    • “It would also be interesting to explore the relationship between citation excess and pseudo-consensi. I think there’s a relationship there, but it wouldn’t be an easy topic to tackle, methodologically.”

      Hmm, interesting possibility, never thought of that. But yes, very tough to tackle.

  4. Pingback: Does GAad exist? UPDATE – turns out that he does…. | Jeff Ollerton's Biodiversity Blog

  5. Along the same lines, Charles Goodnight suggests at his blog that you compile the first letters of each paragraph in Goodnight and Wade (2000).

    Which I dutifully did. They spell:

    • Ha!

      That suggests we need a post on “Easter eggs” hidden in scientific papers (“Easter eggs” being the term for hidden messages, jokes, or bonuses in video games).
