The poll results are in (well, about as many as we’re gonna get, anyway)! So, according to our readers, what are the most and least successful big ideas in ecology?
A big thank you to everyone who completed the poll. I think the results are absolutely fascinating, and I hope you'll find them fascinating too. And I hope we'll have a really good conversation about the results in the comments. There is soooo much to chew on here! I'll give you the results first, with some commentary after.
Attention conservation notice: This is a long post. But it’s worth your time–stick with it!
We got 118 responses as of this writing: 41% grad students, 19% postdocs, 18% faculty for <10 years, 17% faculty for 10 years or more, 4% other, one unspecified. As usual, it’s not a random sample of any statistical population, though it’s probably broadly representative of what our most avid readers think. And according to those respondents, the most successful big idea in ecology is…
Island biogeography theory! 83% of respondents called it “successful”, only two (1.7%) called it “unsuccessful”, and only 13% said it was a mixed bag. Only three votes for “don’t know/not sure/no opinion”.
The second most successful big idea in ecology (apparently):
Optimal foraging theory! 53% voted successful, 6% unsuccessful, 20% mixed bag, 20% no opinion. Oh, and one person voted “too soon to tell”. To which I can only respond:
Ok, in seriousness, I am sincerely curious about the “too soon to tell” vote here. Optimal foraging theory has been around for over 50 years now. It’s been the subject of a massive body of research. How much longer could we possibly need? I’m sure whoever voted “too soon to tell” has a cogent reason for doing so, and now I’m dying to know what it was.
The third most successful big idea in ecology (according to our readers):
R* theory! 34% voted successful, 5% voted unsuccessful, 19% voted mixed bag, 43% said no opinion. So fewer people had an opinion than for island biogeography or optimal foraging (and the frequency of "no opinion" responses is one interesting aspect of the survey). But among those who had an opinion, successful votes far outnumbered votes for unsuccessful or mixed bag, which makes R* theory pretty clearly the third most successful idea on my list in the eyes of the voters. (UPDATE: The comments revealed that many people know this body of work under other names. What I've called R* theory might be more familiar to you as "resource competition theory". Or, you might know a particularly well-known subset of this body of theory, called "resource ratio theory". Lack of familiarity with the term "R*" presumably explains why many voters were unfamiliar with this idea. Interestingly, I had no idea that certain names for this body of ideas were much more well known than others.)
Opinion (among those who had one) was fairly uniform for the top three ideas. Not so for the next several. The following ideas all are controversial to at least some degree. I’ve listed them in rough order from most to least successful in the eyes of voters (I just did this in a gestalt-y way; you get the rigor you pay for around here).
The metabolic theory of ecology. 30% successful, 8% unsuccessful, 29% mixed bag, 9% too soon to tell, 25% no opinion.
r/K selection. 40% successful, 13% unsuccessful, 38% mixed bag, 9% no opinion. Also, one mysterious vote for “too soon to tell” on another decades-old, massively-studied idea. And not from the same person who voted too soon to tell on optimal foraging theory, either.
Biodiversity and ecosystem function. 30% successful, 15% unsuccessful, 42% mixed bag, 7% each for too soon to tell and no opinion.
Neutral theory. 28% successful, 20% unsuccessful, 35% mixed bag, 6% too soon to tell, 11% no opinion.
Diversity-stability hypothesis. 21% successful, 15% unsuccessful, 44% mixed bag, 7% too soon to tell, 13% no opinion.
Hump-backed model of the diversity-productivity relationship. 14% successful, 20% unsuccessful, 38% mixed bag, 27% no opinion. One surprising vote for too soon to tell on another 1970s-vintage idea; I’m desperate for these too soon to tell voters to comment!
The intermediate disturbance hypothesis. 17% successful, 26% unsuccessful, 45% mixed bag, 11% no opinion. One unexpected vote for too soon to tell.
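For what it's worth, the rough ordering above can be approximated with a simple calculation. Here's a minimal sketch that ranks the contested ideas by net approval (percent "successful" minus percent "unsuccessful"), using the percentages quoted above. This is just one possible formalization of my gestalt ordering, not the method I actually used (which, again, was "eyeball it"):

```python
# Rank the contested ideas by net approval: percent "successful"
# minus percent "unsuccessful", using the percentages from the post.
ideas = {
    "Metabolic theory of ecology": (30, 8),
    "r/K selection": (40, 13),
    "Biodiversity and ecosystem function": (30, 15),
    "Neutral theory": (28, 20),
    "Diversity-stability hypothesis": (21, 15),
    "Hump-backed model": (14, 20),
    "Intermediate disturbance hypothesis": (17, 26),
}

# Sort from highest to lowest net approval.
ranked = sorted(ideas.items(), key=lambda kv: kv[1][0] - kv[1][1], reverse=True)
for name, (succ, unsucc) in ranked:
    print(f"{name}: net {succ - unsucc:+d}")
```

Note that this particular metric would put r/K selection slightly ahead of metabolic theory, which is one reason a single summary number shouldn't be taken too seriously: it ignores the "mixed bag" and "no opinion" votes entirely.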
Two largely forgotten (?) ideas
The next two ideas are older ones for which “don’t know/not sure/no opinion” was by far the largest vote category. That distinguishes them from the other ideas of similar vintage on my list, and from R* theory (a more recent idea that apparently is of more specialized interest, and so got lots of “no opinion” votes for that reason).
The ideal free distribution. 22% successful, 9% unsuccessful, 9% mixed bag, 59% no opinion. Also one vote for too soon to tell. Which raises the question of how long this voter will have to wait before being able to evaluate work on the IFD, given how few people even know enough about the IFD to have an opinion on it, much less work on it. 😉
Limiting similarity. 14% successful, 17% unsuccessful, 27% mixed bag, 42% no opinion.
A so-far controversial idea on which the jury is still out
MaxEnt. 14% successful, 12% unsuccessful, 17% mixed bag, 15% too soon to tell, 42% no opinion. The highest percentage of "too soon to tell" votes in the survey, combined with the high percentage of no opinion votes and the newness of the idea (at least compared to the others on the list), makes me think the jury is still out. MaxEnt is like a Presidential candidate most voters don't know much about. It's the Jim Gilmore or Martin O'Malley of big ecological ideas.* 🙂 Ok, in seriousness, it's probably more like a newer R* theory, with the high proportion of "no opinions" indicating that the idea is of somewhat specialized interest, at least so far.
*Please tell me I’m the first person to compare MaxEnt to Jim Gilmore or Martin O’Malley. I am? Good. 🙂
Comparing faculty and trainee opinions reveals some big differences
Turns out there are two kinds of controversial ideas in this survey: those on which voters at every career stage had a mix of views, and those on which faculty had very different views than grad students, with postdocs often being intermediate. This was super-interesting to me.
Only four faculty (=10% of them) think the IDH is a success (and three of those votes came from faculty who voted identically on every item, a few minutes apart, which makes me wonder if we accidentally had a bit of vote duplication). 37% of faculty think it’s unsuccessful. Only 10% of postdocs think the IDH is successful vs. 27% thinking it unsuccessful. In contrast, 25% of grad students think it’s a success, and only 21% think it’s unsuccessful.
Similar story for the hump-backed model. Only 10% of faculty think it successful, vs. 27% who think it unsuccessful (so, almost a 1:3 ratio for successful:unsuccessful). Contrast that with the 17% of grad students who think it successful vs. 21% thinking it unsuccessful, and the 18% of postdocs who think it successful and 18% who think it unsuccessful. (Note: you want to look at ratios of successful and unsuccessful votes because many more grad students than faculty have no opinion of the hump-backed model.)
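To make that note concrete, here's a tiny sketch of the ratio comparison, using the hump-backed model percentages quoted above. When "no opinion" rates differ a lot across career stages, raw percentages understate how negative the opinion-holders are, so the successful:unsuccessful ratio is the fairer comparison:

```python
# Compare career stages on the hump-backed model using the ratio of
# "successful" to "unsuccessful" votes (percentages from the post),
# rather than raw percentages, since "no opinion" rates differ by stage.
votes = {
    "faculty": {"successful": 10, "unsuccessful": 27},
    "postdocs": {"successful": 18, "unsuccessful": 18},
    "grad students": {"successful": 17, "unsuccessful": 21},
}

ratios = {stage: v["successful"] / v["unsuccessful"] for stage, v in votes.items()}
for stage, r in ratios.items():
    print(f"{stage}: successful:unsuccessful = {r:.2f}")
```

Faculty come out far more negative (a ratio well below 1) than grad students, with postdocs in between on this metric, matching the pattern described above.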
Same for limiting similarity. Only 15% of faculty think it successful (and one of them is now rethinking his vote), vs. 21% thinking it unsuccessful. (And interestingly, all but one of those "unsuccessful" votes came from people who'd been faculty for >10 years.) In contrast, 15% of grad students think it successful and only 6% think it unsuccessful (many had no opinion). Postdocs voted 14% successful, 14% unsuccessful, so they're again intermediate in terms of the ratio of successful:unsuccessful votes.
Faculty who think the ideal free distribution is successful outnumber those who think it unsuccessful 12:3. The same ratio for grad students is 6:7 (many grad students, and some faculty, had no opinion). It’s 8:0 for postdocs, so on this one postdocs are close to, but even more extreme than, faculty.
For r/K selection, faculty votes for success outnumbered votes for unsuccessful 15:7. For grad students, it was 23:2 (!). Postdocs look more like faculty, voting 8:5.
Voters who’ve been faculty >10 years overwhelmingly see the diversity-stability hypothesis as a mixed bag (15/20). Voters at other career stages were much less likely to see it as a mixed bag. But otherwise opinions on the diversity-stability hypothesis were similar across career stages.
Opinions of MaxEnt were broadly similar across all career stages for voters who had an opinion. But as with several other ideas (though not all), grad students were less likely to have an opinion than faculty. (Aside: MaxEnt and R* theory were the only ideas on which a substantial proportion of faculty said they had no opinion.)
There’s no obvious difference between faculty and non-faculty opinion for the other ideas.
Obviously, there are sample size concerns here. But to my eye, this doesn't look like mere noise. Rather, I bet what we're seeing is that there are several old verbal ideas that are still in the textbooks–the IDH, the hump-backed model, r/K selection, and limiting similarity–that grad students think of as successful because they learned them as undergrads. Then, those who go on to become postdocs and maybe faculty learn that those ideas are now widely (though not universally) questioned and considered out of date. Of course, that's not the only possibility. Another is that, for some reason, grad students who are more skeptical of those old verbal ideas are more likely to become postdocs and later faculty! Anyway, if my interpretation is right, then the question is: how come we aren't teaching these ideas in such a way that students end up with the same views about them as faculty? Shouldn't our teaching–even of classic ideas–reflect the current view of the field? Then again, at least when it comes to limiting similarity and perhaps the hump-backed model, apparently we're not teaching these ideas at all, since many grad students who have no opinion presumably haven't heard of them.
Some further comments:
- Are you surprised by any of these results? I'm mostly not, because my own views broadly line up with the general consensus, with the exception of r/K selection. The only big surprise for me is that the ideal free distribution is so unfamiliar to so many people. Of course, it's quite possible that people who choose to read this blog are more likely to think like Brian and me than the average ecologist is. Probably, if you're an IDH fan, this is your least-favorite ecology blog. 🙂
- The broad pattern here is that the ideas seen as least successful are the old verbal ideas without any process or mechanism behind them, or else with some too-vague-to-test mechanism. Again, r/K selection is an exception–perhaps because it started out somewhat quantitative, and because later mathematical theory expresses the same general idea in a more effective way? Anyway, if that interpretation is right, it seems like progress to me! And in a way, metabolic theory is another exception, because that idea is actually a bunch of interconnected ideas based on a bunch of assumptions, many of which actually aren't mechanistic at all, despite the theory's reputation as being rigorously derived from "first principles" of physiology and adaptive evolution. So perhaps the general principle here is actually that ecologists these days prefer ideas that have been expressed mathematically, and/or that make quantitative predictions?
- As I suspected, most of these ideas are at least somewhat controversial. Some are very controversial–views are about evenly split between successful, unsuccessful, and mixed bag. Another reminder that we ecologists all disagree with one another a fair bit, even on quite basic matters like the success or otherwise of the biggest ideas in the field. I really hope commenters will chime in and explain why they voted as they did. I’m particularly curious about the extent to which different views reflect different definitions of success.
- I bet a lot of the disagreement about diversity-stability’s success comes down to different definitions of what the hypothesis is in the first place. “Diversity-stability” is kind of an umbrella term for lots of loosely-related or even unrelated ideas.
- A lot of students may not know about limiting similarity. But maybe they'd be better off if they did–there's a reason it's used as a cautionary tale in both leading community ecology textbooks. Then they'd be in a better position to recognize that unsuccessful idea when it gets revived under other names.
- In the comments on the poll, Brian and I discussed a really interesting issue: to what extent can you separate the “idea itself” from how it was applied or studied or used? For instance, can you argue that neutral theory is widely seen as unsuccessful or a mixed bag because it’s been “let down” by those who’ve tried to test it? Everybody running out and fitting alternative models to species-abundance distribution data, which we now realize (and I think should’ve realized from the get-go) is a totally ineffective way to distinguish neutrality from non-neutrality. Maybe the “idea itself” is actually really good and useful, and we’d all recognize that if only people had tried to test it differently? But conversely, I’m sure that the reason optimal foraging theory is seen as so successful is that it’s one of the great examples in ecological history of really good experiments tightly linked to theory, providing rich information about exactly what assumption of the relevant theory was right or wrong.
- Here’s another interesting issue: to what extent does the fate of an idea depend on how good the initial tests are? Brian suggests the example of neutral theory here again. Or think of the hump-backed model, where the initial tests consisted of looking for a hump in a few observational datasets–and decades later the leading edge of research on this topic still consists of arguing about whether this or that observational dataset “really” has a hump. As a positive example, think of optimal foraging theory or R* theory, for which the initial tests were really well-designed experiments directly testing the theory. That arguably “set the tone” for subsequent work. I’ve remarked more than once that it’s to R* theory’s credit that pretty much every attempt to test it is a really good test. On the other hand, it’s not clear to me that island biogeography went on to become super-successful because everybody followed the lead of Dan Simberloff’s famous initial test of the idea. It’s not as if everybody ran out and did lots of island size manipulation experiments. So I dunno–does the eventual success of an idea depend on whether the initial tests are really good?
- I bet that disagreement about R* theory comes down to people who don’t like that it’s only been tested in a limited range of systems, vs. others (like me) who think it’s great that every test of the theory has been a high-quality test and who aren’t bothered if those tests happen to be in the limited range of tractable model systems permitting high-quality direct tests. So, is it to the credit or discredit of a theory if it can only be directly tested (or at least, has only been directly tested) in some limited range of model systems?
Thanks for reading all this way–looking forward to your comments!