Friday links: rejected classic papers, great interview with Peter Kareiva, crowdfunding=bake sale, and more

From Jeremy:

Think you’re the only one who gets rejected? Think again: it happens to everyone, as illustrated by this paper on how even now-classic articles by Nobel Prize-winning economists were initially rejected. Similar incidents have occurred in ecology, as Meg has attested. In evolution, both of George Price’s hugely important Nature papers were initially rejected. And as The EEB and Flow notes, Joe Felsenstein’s hugely popular PHYLIP software has repeatedly been rejected for funding, both before and after it was first developed, a fact Joe memorializes in PHYLIP’s “No thanks to” list. Anyone know of any other really famous ecology & evolution work that was initially rejected? Which isn’t to say that you should always keep doggedly trying to publish an idea that’s been rejected. But deciding if/when to give up can be a difficult, and even heartbreaking, decision. (HT Paul Krugman)

BioDiverse Perspectives has a great interview with population ecologist and Nature Conservancy Chief Scientist Peter Kareiva. Much of it is on how and why to do policy-relevant science, a topic Brian hit on earlier this week. Much of what Kareiva has to say resonates with Brian’s post. Here’s an extended quote from Kareiva to give you the flavor:

Ecology matters to the general public because ecology is about water, pests and pestilence, recreation, food, resilience and so forth…focusing so narrowly on producing graphs that on the horizontal axis display number of species and on the vertical axis report some dependent ecological function (that is distantly related to human well-being) strikes me as not worth so much research…Our mistake has been to focus too much only on the one narrow dimension of nature that systematic biologists, natural historians, and a portion of ecologists care about: biodiversity.  Understand nature in a way that serves the public, not yourself. And remember, biodiversity as a label didn’t come into fashion until the late 1980s. There was a tremendous amount of conservation ecology that produced a wealth of understanding and useful insight before the biodiversity meme. My prediction is that in 2030, we will not be talking about biodiversity anywhere near as much as we do now—instead we will be asking how nature can make humans more resilient to climate disruptions, and what are the limits we should avoid crossing if we want to maintain a reliable supply of food and water.

Apparently this interview is the first in what will be an ongoing series of interviews with visiting speakers at the University of Washington. That’s a great blogging idea; it’s been done a bit before, and I think it could be done a lot more. And kudos to Hillary Burgess and Halley Froelich, the grad students who conducted the interview, for having the guts to interview someone who told them that the entire BioDiverse Perspectives website was dedicated to the wrong thing!

Mike the Mad Biologist finds that there’s nothing new under the sun when it comes to scientific fraud.

Here are data on the length of the average dissertation. I leave it to you to decide if “above average” is a good or bad thing in this context.😉

Quote of the week, Twitter edition: Terry McGlynn says that “Crowdfunding science is as sad as a bake sale for education.” Discuss.

There may be tribes in science. But not only are there tribes in economics, they’ve been studied by an anthropologist! Just kidding of course, the linked article is a joke–a very funny one, even if you don’t know much about economics (though it’s funnier if you do know some economics). Someone should write something similar for ecology. (HT Worthwhile Canadian Initiative)

I didn’t see this until it was too late to participate, but earlier this week E. O. Wilson did an “ask me anything” on Reddit. Sadly, I don’t think anyone asked him if he reads Dynamic Ecology.😉 And that’s probably for the best. I only skimmed it, it mostly looks like questions from admiring fans. There are a few interesting nuggets, such as Wilson’s answer to a question on the most promising fields of ecology and entomology:

I believe that the greatest leaps will be in ecology. The systems are so complex, depending on mostly little known interactions of many species that we have not begun to understand how the entirety of it works. This is a great subject for young scientists to go into, both to explore the ecosystems and define new ways to analyze them.

(Although arguably, whether ecology is “complex” depends on how you look at it.) And I smiled at the anonymous evolutionary biologist who prefaced a question about the Nowak et al. kerfuffle with “You have taken more criticism in the last few years than you did over many previous decades.” To which Wilson surprisingly did not respond, “Umm, you’re aware that I wrote this, right? And that protestors at the AAAS meeting dumped a bucket of water on my head because of it?” (HT Terry McGlynn, via Twitter, and reader Artem, via email)

Species distribution modelers often choose their software based on ease of use, its use in previous publications, or the recommendation of friends. The authors claim this is a serious problem, but honestly, their arguments for the claim seem pretty weak. They say that trusting one’s colleagues when it comes to software choice is a risky thing to do–but peer review comes down to trust too! I mean, sure, if there’s a bug in someone’s code then that’s a problem. But there are lots of ways for a scientific study to go wrong, and bugs in code are only one of them. And this study has no data on how often that’s actually a problem. The authors just take survey data on how people choose their software and then leap to conclusions about the purported negative consequences of those choices. And they just assume that this problem needs solving via big changes to how students are trained and how peer review is conducted, with no attempt to balance the proposed changes against their costs, or against the benefits of the status quo.

The CEO and CFO of Plos left the company on the same day?! That’s really unusual for any organization of any size, for-profit or non-profit. I have no idea what this means. I suppose it might not mean anything, at least not anything important–or it could mean something really bad, or somewhere in between. I’m curious: are readers who admire Plos and support its goals worried by this news? Again, I have no idea if you should be worried; I’m just wondering. I mean, if, say, the Ecological Society of America announced that both the President and Treasurer of the Society were leaving their posts, effective in 10 days, I’d be a little worried. But maybe that’s a bad analogy? (HT Scholarly Kitchen)

Hoisted from the comments:

Brian, Jeremy, and ace commenter Margaret Kosmala discuss how to choose the right postdoc. Starts here.

4 thoughts on “Friday links: rejected classic papers, great interview with Peter Kareiva, crowdfunding=bake sale, and more”

  1. I can’t wait until I’m senior enough to put a “no thanks to” section on my website like Felsenstein’s. I don’t think I should give names for things said off the cuff to me in personal conversations, but I have had multiple National Academy members tell me that their most cited paper got rejected at multiple journals before getting accepted. About 10 rungs down the ladder from them, what I still consider my best paper got rejected multiple times and ended up in a fairly low-ranking journal, while work I consider much less important flew into high-ranking journals. Peer review does a lot of things well, but identifying the best of the truly novel ideas is not one of them.

    Thanks for the pointer to the piece in Science on SDMs. I also felt the authors were perhaps overdramatizing their conclusions – recommendations from colleagues are probably a good way to go, not a bad one. But their basic point – that many people are willing to publish papers with their name on them without understanding what the analysis from a third-party point-and-click piece of software just did – is, I think, important and a bit frightening. The particular piece of software cited, MaxEnt from Steven Phillips, is very carefully done by a very careful and smart scientist, and has been pounded on by a lot of smart people (thereby probably justifying people’s faith in the software). But software is only half of the package that produces the results; the person pushing the buttons matters too. I would estimate only 1/4 of the people using MaxEnt actually understand its basic principles. I can’t begin to tell you the number of times I’ve heard people interpret the numbers coming out in ways that are not well founded – which maybe doesn’t justify untrained/unsophisticated users running complex software.

    There is a real danger in wrapping up complex software in simple-to-use packages. I could start a whole new post on various R packages (not the core, but the add-on packages, which are submitted with no scientific peer review or testing of computational accuracy) that are flat-out wrong but that people blithely use anyway. I have stopped several papers from going forward during peer review because they used packages that I know produce wrong results, and the authors didn’t have a clue. I admit to being a crusty old curmudgeon extremist, but I pretty much never use off-the-shelf software. If I don’t understand something well enough to program it myself, or at least to benchmark it against things I do understand and/or on toy datasets where I know what the answer should be, then I don’t use it. In those cases I just use a simpler method that I do understand. Shades of a statistical machismo rant starting again, so I’d better stop!

  2. “Crowdfunding science is as sad as a bake sale for education.”
    Yes, sad for major stuff. But, like an education bake sale, great (and not sad) for add-ons and mini-projects. The SciFund efforts typically generate a few thousand dollars, which is a tidy chunk of change for a grad student’s summer of field research or for getting some preliminary data for a larger effort.

    That being said, it looks like my peeps and I are going to be trying some serious crowdfunding soon. Pre-proposal non-invite (after being invited for a full proposal last year…). Traditional funding is getting too stochastic.

    • Yeah. That stochasticity isn’t good for building solid research programs. I’ll take easy money and engage the public, and if they like it, great! I imagine it could work well for conservation-oriented projects. The Nature Conservancy sponsors lots of research tied to its land management practices, and you could argue that all NGOs are essentially crowd-funded.

      Good science, and the training of scientists as part of these projects, is not only in the public interest for the creation of knowledge; it’s also an economic investment. Basic research is money in the bank, as study after study has shown. It’s really short-sighted for the people (meaning, our governments) to cut funding to a trickle. I’ve recently learned that I’m going to be okay for the next 2-3 years, but my students and I are among the lucky few.

  3. Hi Jeremy–thanks for the shout out! Just a note, we plan to interview visiting speakers at universities all over the world. So get ready to hear some more cool opinions on biodiversity!
