From Jeremy:
What are you doing here? How come you’re not reading Small Pond Science instead? I know we just plugged this blog recently, but I wanted to do so again. Terry McGlynn continues to put the rest of the ecology blogosphere in the shade by cranking out great posts on everything from why having an “elevator pitch” about your research may actually be a bad idea, to writing papers with undergrad co-authors, to finding collaborators who can quickly help you finish off a project. Plus, he throws in fun (and sometimes amusingly cranky) asides about everything from how his dean thinks he’s taking tropical vacations (as opposed to doing research in the tropics), to the movie Pulp Fiction, to being star-struck when seeing Bert Hölldobler eating lunch. C’mon, click through!
I don’t usually just link to new papers from the ecological literature, figuring that people mostly already have their own ways of filtering the literature without help from me. But I’m making a couple of exceptions this week. Here’s the first one: David Warton has come up with what looks like a very important new result on MaxEnt (“maximum entropy”). He’s proven that a common application of MaxEnt (to modeling species distributions with presence-only data) is equivalent to a GLM (Poisson regression), with the apparent differences actually being due to differences in how the methods are traditionally applied. For instance, MaxEnt uses a “lasso” penalty that is rarely used in Poisson regression, but could be. See the Methods in Ecology and Evolution blog for a nice post from David, discussing these results and linking to key papers. As David notes, this proof undermines both standard defenses of, and standard criticisms of, MaxEnt! You can’t defend MaxEnt as making fewer assumptions than other approaches if it’s equivalent to a GLM. But nor can you criticize MaxEnt for being difficult to interpret, or lacking model checking procedures, if it’s equivalent to a GLM. Now I’m wondering if it’s possible to find similar proofs for other applications of MaxEnt, for instance to modeling the species abundance distribution and species-area curves.
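To make the equivalence a bit more concrete, here's a minimal sketch of what a lasso-penalized Poisson GLM looks like in practice. To be clear, this is not Warton's code or his data; the simulated covariates, coefficient values, and penalty weight are all invented for illustration. The only point is that the model class his proof points to is an ordinary, fit-and-checkable GLM, just with an L1 penalty added.

```python
# Illustrative sketch only (not Warton's method or data): a lasso-penalized
# Poisson regression, the kind of GLM that presence-only MaxEnt turns out to be.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Fake environmental covariates at 2000 sites (e.g. temperature, rainfall, elevation)
X = rng.normal(size=(2000, 3))
true_beta = np.array([0.8, -0.5, 0.0])   # third covariate is irrelevant on purpose

# Simulated counts: Poisson with a log link, as in an intensity model
mu = np.exp(-1.0 + X @ true_beta)
y = rng.poisson(mu)

# Poisson GLM with an L1 ("lasso") penalty -- the penalty MaxEnt uses by default
# but ordinary Poisson regression usually omits. L1_wt=1.0 means pure lasso.
design = sm.add_constant(X)
model = sm.GLM(y, design, family=sm.families.Poisson())
fit = model.fit_regularized(alpha=0.01, L1_wt=1.0)

print(fit.params)   # the lasso should shrink the irrelevant coefficient toward zero
```

Nothing here is special to species distribution modeling; that's the point. Once the problem is framed as a penalized GLM, all the usual GLM machinery (residuals, model checking, interpretation of coefficients) is available.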
Highly relevant to Brian’s recent posts on prediction: statistician Andrew Gelman asks what “explanation” is in a statistical sense, and why we might care about “explanation” as well as “prediction”, even if the explanations are only post-hoc and don’t improve our ability to make future predictions. And the illustrative example is about predicting vs. explaining Oscar winners, so that’s fun.
Phil Davis of The Scholarly Kitchen reports results of an informal straw poll on various new reforms to the peer review system. He finds that editors are mostly reluctant to use reviews from third party services like Rubriq, though of course that could change. And he finds that reviewers are open to incentives (including monetary ones), but many remain prepared to review for free, especially if they are confident that the ms will be good. Phil makes the interesting suggestion that the future might consist of two parallel, coexisting peer review systems. Selective journals with broad readerships, to which authors send what they regard as their most important and interesting work, will still be able to attract reviewers willing to work for free. But unselective journals and specialized journals with narrow readerships will need to offer financial incentives to reviewers, pay for the right to “bid” on pre-reviewed mss from services like Peerage of Science, or else take mss from services like Rubriq, in which authors provide the financial incentive to reviewers.
Joan Strassmann sits on the NSF DDIG (Doctoral Dissertation Improvement Grant) panel. She has posts on how to write a good DDIG, and common mistakes to avoid.
The untweeted conference: Jeremy Yoder points out something I wasn’t aware of: Gordon Conferences strongly discourage attendees from discussing the conference in public online forums–no tweeting, no blogging, etc. This is because Gordon Conferences are for discussing unpublished research in progress, and the fear is that people won’t attend or won’t be as open to discussing their results if they’re afraid of being scooped or of being quoted on the internet. Which I can understand, especially for conference attendees from industry, although I think Jeremy offers some sensible pushback against these fears. My question is, why are the Gordon Conferences special in this way? After all, lots of people present and discuss unpublished research in progress at any conference. Are there lots of people out there who worry about their unpublished work being tweeted or blogged about during, say, the ESA meeting? I’ve never been to a Gordon Conference myself. But I do know that they’re small, intimate, prestigious, and selective affairs (you have to apply to attend and present). So I’m wondering a little if one motivation for the “no public online discussion” policy is to maintain the exclusivity and prestige.
Steven Frank is writing an ongoing series of conceptual papers on the meaning of natural selection for the Journal of Evolutionary Biology. Challenging reading, and perhaps not everyone’s cup of tea, as the practical implications of the very deep conceptual points he’s trying to make aren’t always clear. But I continue to find them interesting, in part because his own views seem to be evolving. For instance, a few years ago he wrote a paper laying out formal analogies between the mathematics used to describe evolution by natural selection, and the mathematics used to describe information theory. At the time, he said it was an open question whether these analogies were merely abstract curiosities, or if they were a sign of some deep connection. Now, he’s come down on the side of “deep connection”, arguing that the information theory perspective is actually the more fundamental one. On this view, evolution by natural selection is about populations capturing “information” about their environment via changes in gene frequencies. The conventional mathematical description of evolution in terms of various sorts of variances and covariances–think the Price equation, the breeder’s equation, quantitative genetics, Fisher’s Fundamental Theorem, all that stuff–works, but it’s an epiphenomenon. It doesn’t tell you what’s really going on. Still mulling this over and thinking about what the implications might be (e.g., for how we teach evolutionary theory, and for my own applications of the Price equation outside the context of evolution). Would also be interested to see the journal invite some responses to the series once it’s complete. I’m curious to hear what other conceptually oriented evolutionary thinkers–folks like Ben Kerr, Peter Godfrey-Smith, Samir Okasha, Sean Rice, Alan Grafen, Allen Orr–think of Steven’s work.
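For anyone who wants the "variances and covariances" description in front of them while reading Frank, here's the standard textbook form of the Price equation (this is just the usual statement, nothing specific to Frank's new papers):

```latex
% The Price equation (standard form):
%   \bar{w}    = mean fitness of the population
%   w_i, z_i   = fitness and trait value of the i-th type
%   \Delta z_i = change in the trait between parent and offspring ("transmission")
\bar{w}\,\Delta\bar{z} = \underbrace{\mathrm{Cov}(w_i, z_i)}_{\text{selection}}
                       + \underbrace{\mathrm{E}\!\left(w_i\,\Delta z_i\right)}_{\text{transmission}}
```

As I read him, Frank isn't saying this bookkeeping is wrong, just that it's downstream of the more fundamental information-theoretic description.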
Jarrett Byrnes is developing a new preprint repository, OpenPub, with tools for discussion and interaction. As part of the development process, he’s soliciting videos of people’s experiences with the scholarly publication process. I wish him luck, as I would like to see ecologists take up preprint servers. Though I think he has an uphill struggle ahead of him, at least on the commenting/discussion/interaction side. Various journals have had online commenting systems for years; they’re mostly unused. Nobody has any incentive to use them, and people who want to engage in commentary/interaction/discussion online (including discussion of preprints and new papers) already do so via blogs, Twitter, Google groups, etc. The sets of people with whom we discuss science just aren’t “paper-centered”. But I could be wrong; maybe the main reason existing journal commenting systems aren’t used is that they have bad user interfaces, as Jarrett and his collaborators suspect. Anyway, if an ecology preprint server with a good interface for commenting and discussion is something you’d like to see happen, click through and give Jarrett your input and support.
Finally, here’s an overview of the bleak short- and long-term prospects for research & development spending in the US. The sequester isn’t the only issue. But it is one of them; here’s the official memo on the short-term impact of the sequester on NSF. 😦
From the archives:
Why do our null models “nullify” some effects and not others? In which I push back against some bad approaches to building “null” models, and some bad arguments for those bad approaches. If you’re one of the many people who think that what null models are for is to test whether “there’s a ‘non-random pattern’ in my data”, you really ought to read this post and the comments.
Jeremy, the check’s in the mail.
How dare you question my integrity! I’ll never link to you again!
Just kidding of course, you’re welcome! 😉