From Brian (UPDATE):
A very important opinion piece by Georgina Mace. This is her departing address after two years as president of the British Ecological Society. Unlike most of those addresses, this one is very substantive. She has been unusually successful as a scientist who does top-notch basic research while also being on the front lines of conservation in an impactful way (e.g. her involvement with IUCN). And it shows. She basically argues that we need to get serious about engaging ecology with global change, and not just in the shallow, get-a-grant way that Peter Adler called out so nicely. And, more novel, she argues that maybe ecological societies should be at the center of this. She’s got some good bits on grant trends, journal trends, what I called new-fangled ecology, and other stuff as well. I strongly recommend a read.
From Meg:
Joan Strassmann asks: Do women ever get the faculty achievement awards at your university? They don’t at hers.
Matt MacManes has a post giving a glimpse from the inside of an NSF panel.
Here’s a story on Andrew David Thaler, creator of the #drownyourtown hashtag on Twitter. He has created simulations of what different cities would look like if sea level rose 80 meters, as is predicted if the Greenland and Antarctic ice sheets melt. There are instructions on how to drown your town here, which seems like it could be a great activity for classes focused on global change. And you can also go to the Twitter hashtag to see images of towns other people have drowned.
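If you want the gist of how such a simulation works, here’s a toy sketch, assuming nothing about Thaler’s actual workflow (which uses mapping software, not a script): flag every cell of an elevation grid that would sit below the chosen sea level. The elevations here are invented.

```python
import numpy as np

RISE_M = 80  # hypothetical sea-level rise scenario, in meters

# Invented elevations (m) for a small patch of an imaginary town.
elevation = np.array([
    [2, 15, 120],
    [40, 75, 300],
    [90, 110, 6],
])

flooded = elevation < RISE_M  # True wherever the town would be underwater
print(f"{flooded.mean():.0%} of cells flooded at {RISE_M} m of rise")
```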
From Jeremy:
Via commenter Matt Spencer, a very interesting-looking 2010 paper from a history of science journal on the history of the International Biological Program (IBP). The IBP ran from 1964-74 and was an attempt to turn field ecology into what was then called “Big Science”. The Manhattan Project and the space program were seen as models to be emulated. The IBP was widely regarded as a failure (especially by population and community ecologists). But the paper argues that, while the IBP was indeed a failure in some ways, it succeeded in legitimizing “synoptic” data collection in ecology, and in paving the way for the Long Term Ecological Research (LTER) network. I haven’t read the paper yet but I’m looking forward to doing so. Like everyone, I have my own preferred approaches to science. But every approach has its own strengths and weaknesses, and every approach can be implemented more or less well. Looking forward to learning more about the history here, especially since it connects pretty directly to issues that are still being debated today.
As noted in a previous Friday linkfest, The Economist magazine recently ran a cover article on the unreliability of the published record of scientific research. Many of the issues raised are familiar and serious (e.g., publication bias). But Deborah Mayo catches a seriously confused argument that I missed in my skim. There’s a calculation in the article purporting to show that standard frequentist statistical tests of null hypotheses will mostly produce errors. The intended calculation is analogous to calculating the fraction of positive medical diagnostic tests that are false positives. If you’re trying to screen for some rare condition with a diagnostic test that has low but non-zero error rates, then most positive tests will be false positives, just because the condition is rare. For reasons Mayo explains, that argument doesn’t really work when you define the “rare condition” you’re trying to detect as “false null hypotheses” and your “diagnostic test” is a frequentist statistical test. This problematic calculation apparently traces back to a now-famous 2005 paper by John Ioannidis. (UPDATE #2: Actually, via Andrew Gelman I see that this calculation goes back to at least 1987). I’ve read that 2005 paper and Ioannidis’ subsequent work with interest, and plugged his work in Friday linkfests more than once. But I confess I haven’t thought about that particular calculation as critically as I should have. There certainly are good reasons to worry about whether we’re doing and reporting our statistics as well as we could–but this particular calculation isn’t among them, I don’t think. The analogy with diagnostic screening is just too loose.
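To see the disputed calculation concretely, here’s a minimal sketch in that screening spirit (the numbers are illustrative, not taken from the article or from Ioannidis): treat each tested hypothesis like a patient being screened for a rare condition, where the “condition” is a real effect.

```python
prior = 0.1   # assumed fraction of tested hypotheses that are real effects
power = 0.8   # chance a real effect yields a significant result
alpha = 0.05  # chance a true null yields a (false) positive

true_pos = prior * power          # real effects correctly flagged
false_pos = (1 - prior) * alpha   # true nulls incorrectly flagged

fdr = false_pos / (true_pos + false_pos)
print(f"Fraction of significant results that are false positives: {fdr:.2f}")
# ~0.36 with these numbers, and worse the rarer you assume real effects are.
# Mayo's objection is that this screening analogy is too loose to apply to
# frequentist tests of null hypotheses in the first place.
```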
Commenting on The Economist article, microbiologist and statistician Thomas Kepler said something I found interesting, connecting debates over statistical practices to the need for different sorts of scientists to appreciate the value of one another’s approaches:
Where statisticians see experimental biomedical researchers as corrupt strivers in need of policing, biologists see statisticians as uninterested in actual science and perfectly willing to hold up its progress indefinitely in the name of some imagined platonic ideal….Maybe they’re both right. But maybe raising the next generation to be just a little more appreciative and less defensive will contribute to the continued growth of the scientific worldview we all share.
Speaking of worries about lots of published findings being false, and so not replicable, two psychologists argue that conducting and publishing more attempts at replication will not make psychology better. Rather, they suggest that the ability to replicate published findings is a symptom rather than a cause of good science. Instead, what psychology needs is general theory, to give much better guidance as to what sort of experiments are worth conducting and what effects those experiments should find. Go read, then argue with your friends about its applicability to ecology. 🙂 (HT Ed Yong)
One more related link: Retraction Watch reports on a new paper examining the distribution of published P values in psychology papers from 1965 and 2005. In both years there’s an excess of P values close to 0.05, but the excess is larger in 2005. The frequency of incorrectly reporting or rounding down P values just over 0.05 has also increased since 1965. I confess that I have yet to read the paper myself, and note with worry that Deborah Mayo pops up in the comments on Retraction Watch questioning whether the authors understand P values. So read critically and judge for yourself (which you should always do anyway!).
Dave Abson argues that ecologists put too much value on asking overbroad, and therefore silly, questions. He’s particularly negative about the conceptual diagrams often used to illustrate papers asking such questions, which he calls simply “made up figures”. A point he drives home with a very funny made-up conceptual diagram of his own. 🙂 I agree with him that such figures do have heuristic use, but that that use is fairly limited (for instance, such figures rarely can be treated as testable models, though unfortunately they often are). I’m more skeptical of his larger claim that overbroad questions and new ideas are overvalued. I don’t think broad, perspectives-type papers are crowding out other sorts of papers. The large majority of papers in ecology journals, including leading journals, are based on newly-collected data, new analyses of previously-collected data, or formal mathematical models. If the complaint is not with a particular type of paper, but that academic ecologists are concerned with generality and novelty at the expense of what Abson calls “small, focused, incremental” studies, well, that ship sailed decades ago. Plus, the advent of unselective open access journals like PLOS ONE means that it’s now easier than ever to publish small, focused, incremental, non-novel studies, and for interested readers who know what they’re looking for to find and read them. If the complaint is with the rigor with which ecologists develop and pursue general, novel ideas, well, I guess all I’d say is that everybody worries about that and does the best they can. Ecology is hard, which makes ongoing discussion of our approaches necessary and healthy.
“At Berkeley” is a new documentary about how the flagship US public university dealt with savage budget cuts in the wake of the financial crisis. I hear it’s quite good, though it may feel a bit too close to home for many of you, and with a four-hour running time it might be too much of a good thing.
This “climate models” calendar Kickstarter project features photos of climate modelers in landscapes relevant to their work, but in dressy clothes rather than field gear, and in fashion-shoot-style poses. Not sure what I think of this. The goal is the expected and laudable one: to humanize scientists and thereby improve public understanding and appreciation of climate research. But I’m guessing that anybody who’s likely to buy this calendar already sees scientists as human beings and already understands and supports climate science. Plus, many of the scientists look like they’ve been photoshopped into the backgrounds, and not especially convincingly. (HT Ed Yong)
An interview with John Bohannon, the investigative journalist who did the recent sting operation asking which fee-charging open access journals would accept a fake paper with dead-obvious mistakes. The sting was much criticized by some prominent open access advocates. Personally, I find Bohannon’s defense completely compelling. I hope nobody still thinks he was out to “get” open access journals, or that his study is invalid because he didn’t have a “control” or didn’t choose journals randomly, or that he’s a racist (!), or whatever. (Oh, and for anyone inclined to jump to the unjustified conclusion that because I’m defending Bohannon I must be anti-open access: click this link and this one).
BioDiverse Perspectives has a wide-ranging interview with leading theoretical ecologist Simon Levin.
Science has a profile of experimental evolution pioneer Rich Lenski. One tidbit from the Lenski piece I wasn’t aware of: he almost stopped his long-term E. coli evolution experiment years ago, in favor of switching to work on “digital organisms” (a then-new method of simulating evolution on a computer). Over at his blog, Rich has a new post talking about the past, present, and future of the experiment. He’s thinking about who could take it over after he retires, and how to pay for it. I think the idea of endowing it is really cool–and while I don’t know much about fundraising, I’ll bet a combination of crowdfunding and phone calls to some key donors could make an endowment happen. 🙂
And finally, Fake Science is a very funny collection of spoofs on scientific infographics and informational posters. (HT NeuroDojo)
I would be interested to see a blog post from Rich Lenski on why he considered switching to digital organisms but decided not to. Such autobiographical “picking directions” posts are always interesting to read when you are trying to guide your own future.
Don’t tell us, tell Rich over on his blog! 🙂