Also featuring this week: the changing landscape of blogging, Darwin’s pet tortoise, Richard Feynman’s comments from beyond the grave on post-publication peer review, and more…
This video is a really excellent introduction to community ecology, trophic cascades, ecosystem engineers, and indirect effects. I added it to my list of videos that can be used while teaching ecology. I need to do a more thorough update of that list. If you have other suggestions for videos to add to that list, please let me know!
The Royal Society is hosting a Wikipedia edit-a-thon on March 4th, with the goal of increasing the number and quality of entries for women scientists on Wikipedia. The article indicates that a secondary goal is to get more women editing Wikipedia (it reports that currently just around 9% of editors are women). I have very slowly been working on some of the pages for women ecologists (emphasis on “very slowly”), but perhaps I can set aside some time on March 4th to make more progress on them – and hopefully some of you will, too!
And, just for fun, as my videos for teaching ecology post indicates, I’m a big fan of David Attenborough, so I knew I had to watch him narrating a curling event from the Sochi Olympics!
I don’t ordinarily plug individual papers in ecology journals, but this one promotes a cause near and dear to my heart so I’m giving it a shout-out. Writing in American Naturalist, Mark McPeek discusses a neglected classic in the theoretical literature on species coexistence: Levin 1970. Modern coexistence theory is way more general than the Lotka-Volterra model or MacArthurian resource overlap models, and way more precise and useful than verbal niche models like Hutchinson’s. And arguably, modern coexistence theory starts with Levin 1970. And not only is Levin’s paper a classic, it has an unusual and amusing epilogue.
The scientific publishing system will never be “fixed”, for the simple reason that people disagree on what’s valuable science (or good science, or interesting science, or important science, or novel science, or high-impact science, or any other metric you care to name). And if you think the “fix” is to just publish everything technically sound and let readers decide what’s valuable, well, click through and think again. Echoes my own thoughts. (ht Denim and Tweed)
But publication bias is fixable. At least, so says this new preprint, which suggests a clever way to obtain an unbiased estimate of a true effect size, solely from knowledge of reported sample sizes and P values from published studies. I’ve only skimmed it, but unless I missed some big problem (did I?), it seems quite clever and useful. (ht a commenter on Andrew Gelman’s blog)
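I haven’t reproduced the preprint’s actual method here, but to make the basic idea concrete, here’s a toy sketch of one standard approach in this family (in the spirit of Hedges’ truncated-likelihood correction): simulate a literature where only significant results get published, and then recover the true effect by conditioning each study’s likelihood on having cleared the significance threshold. All the numbers are made up for illustration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)

# Simulate a literature with publication bias (illustrative numbers only).
d_true = 0.2                                 # true standardized effect
n = rng.integers(20, 200, 5000)              # per-study sample sizes
z = rng.normal(d_true * np.sqrt(n), 1.0)     # one-sample z statistics
published = z > 1.96                         # only "significant" results appear
z_pub, n_pub = z[published], n[published]

# Naive estimate: average the published effect sizes -> biased upward,
# because the literature only contains the lucky draws.
d_naive = np.mean(z_pub / np.sqrt(n_pub))

# Truncation-corrected maximum likelihood: divide each study's density
# by the probability that it would have been published at all.
def negloglik(d):
    mu = d * np.sqrt(n_pub)
    return -np.sum(stats.norm.logpdf(z_pub - mu) - stats.norm.logsf(1.96 - mu))

d_mle = optimize.minimize_scalar(negloglik, bounds=(-1, 1), method="bounded").x

print(f"true {d_true:.2f}  naive {d_naive:.2f}  corrected {d_mle:.2f}")
```

The key move is the `logsf` term: each published study’s likelihood is renormalized by its probability of clearing p < 0.05, which is exactly the information carried by the reported sample sizes and P values.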
Welcome to the era of “Big Replication” (ht Andrew Gelman). A bunch of psychologists (the “ManyLabs” project) haphazardly picked 13 key studies of a range of psychological phenomena, all of which could be tested with simple experiments that could be administered online. And they replicated all of them with a frickin’ massive sample (total of over 6000 subjects from all over the world). And just as important as the massive sample was the pre-planning: the group decided every aspect of their data processing and analysis protocol in advance, pre-registered it, and stuck to it, so there’s no possibility that the results reflect researcher degrees of freedom or p-hacking or whatever. Excellent, accessible discussions here, here, and here. Turns out some effects replicate while others don’t (as many psychologists expected, social priming effects, like “showing Americans a US flag makes them more patriotic”, are just statistical flukes at best; perhaps they should’ve bet on this!). Another interesting take-home for me was that there’s surprisingly little variation in effect size across countries or cultures. That to me is one of the best arguments for this sort of work. You’re not out to prove the original study wrong, you’re out to discover if the original result is robust to background variation. As one of the linked posts notes, there’s no reason why the same approach can’t be taken to test novel hypotheses:
Here is how this might go. (1) A group of researchers form a hypothesis (not by pulling it out of thin air but by deriving it from a theory, obviously). (2) They design—perhaps via crowd sourcing—the best possible experiment. (3) They preregister the experiment. (4) They post the protocol online. (5) They simultaneously carry out the experiment in multiple labs. (6) They analyze and meta-analyze the data. (7) They post the data online. (8) They write a kick-ass paper.
Ecologists were way ahead of psychologists on this, in the form of the NutNet project.
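This isn’t the ManyLabs code or data, but a toy sketch of the pooled multi-site design: every site runs the same two-group experiment under the same protocol, and a standard random-effects summary (DerSimonian–Laird) asks how much the effect varies across sites. The “surprisingly little variation across countries” finding corresponds to a small between-site variance. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-site replication: same pre-registered protocol at every site
# (illustrative numbers, not the actual ManyLabs design or data).
d_true, n_sites, n_per_arm = 0.4, 12, 250
d_hat = np.empty(n_sites)
for k in range(n_sites):
    treat = rng.normal(d_true, 1.0, n_per_arm)   # treatment arm
    ctrl = rng.normal(0.0, 1.0, n_per_arm)       # control arm
    d_hat[k] = treat.mean() - ctrl.mean()        # per-site effect estimate

var_k = 2.0 / n_per_arm                          # sampling variance per site
w = np.full(n_sites, 1.0 / var_k)                # inverse-variance weights

d_pooled = np.sum(w * d_hat) / np.sum(w)         # pooled effect estimate
Q = np.sum(w * (d_hat - d_pooled) ** 2)          # Cochran's Q heterogeneity test
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (n_sites - 1)) / C)         # DerSimonian-Laird tau^2

print(f"pooled d = {d_pooled:.2f}, between-site sd = {tau2 ** 0.5:.2f}")
```

Because the true effect is identical at every site in this simulation, the estimated between-site standard deviation comes out near zero; real cross-site variation would show up as a large Q and a nonzero tau².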
Chris Bertram of Crooked Timber (a very long-running and successful academic blog) on the changing landscape of blogging. Relates to issues we’ve discussed. Stick with it until the end for a funny analogy to the music business.
Ecologists don’t call themselves “ecological biologists”. So why don’t evolutionary biologists call themselves “evolutionists”?
Good advice for pre-tenure faculty members on how to be strategic about your “service” obligations.
Goodbye to all that: a journalist and editor explains why he left academia despite having long been sure that he wanted to be a history prof, and having been on track to achieve that goal. Good thoughtful essay. Not just about the academic job market, but also about risk-taking and the desire to write for a bigger audience. Resonates with one of our best guest posts ever.
Advocates of post-publication review over pre-publication review usually emphasize how only two or three people (whom you don’t know!) read a paper before it’s published, while lots of people read it after it’s published. Surely that large post-publication “sample” is better, right? Well, not if those post-publication readers are just skimming, or just reading the abstract, or just glancing at the figures (which they are, mostly). As Richard Feynman once wrote in a closely-related context:
This question of trying to figure out whether a book is good or bad by looking at it carefully or by taking the reports of a lot of people who looked at it carelessly is like this famous old problem: Nobody was permitted to see the Emperor of China, and the question was, What is the length of the Emperor of China’s nose? To find out, you go all over the country asking people what they think the length of the Emperor of China’s nose is, and you average it. And that would be very “accurate” because you averaged so many people. But it’s no way to find anything out; when you have a very wide range of people who contribute without looking carefully at it, you don’t improve your knowledge of the situation by averaging.
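Feynman’s point is about bias, not variance: averaging many careless reports shrinks the random noise but leaves the shared error untouched. A tiny simulation (with made-up numbers) makes it concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

true_length = 3.0    # the Emperor's actual nose length, in cm (made up)
shared_bias = 2.0    # the common error shared by people who never looked
n_people = 100_000

# Each careless guess = truth + shared bias + individual noise.
guesses = true_length + shared_bias + rng.normal(0.0, 5.0, n_people)

mean_guess = guesses.mean()
std_error = guesses.std() / n_people ** 0.5

# The average is extremely precise (tiny standard error)...
# ...but it converges to truth + bias, not to the truth.
print(f"average of {n_people} guesses: {mean_guess:.2f} +/- {std_error:.3f}")
```

The standard error is a few hundredths of a centimeter, yet the average sits a full two centimeters from the truth: precision without accuracy, which is exactly the worry about large numbers of careless post-publication readers.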
Charles Darwin’s pet Galapagos tortoise “James” was rediscovered earlier this month. Well, its shell was, in a storeroom at the Natural History Museum in London. Will have to file this tidbit away for next year’s Darwin Dinner quiz.🙂
And finally, this has nothing to do with ecology, but it’s really cool (and brings back fond memories for me): historical photos of London merged with modern-day photos of the same locations. The Museum of London has published an app that lets you create your own.