Ecologists study lots of different things, and those things vary a lot in all sorts of ways. No two sites/times/organisms/populations/communities/ecosystems/species/landscapes/food webs/whatevers are exactly alike. Should ecologists seek generalities about the diverse, heterogeneous stuff they study? If so, what kinds of generalities should they seek and how should they seek them? Those are questions on which prominent ecologists have long disagreed. But the views of a few prominent ecologists might well be unrepresentative of the views of ecologists collectively, for the same reasons that the loudest voices in any discussion might not be representative of the views of everyone in the room. So to get a bit of data, I polled y’all on these questions. Here are the results!
As with all our polls, this isn’t a random or representative sample of all ecologists, or even of all readers of this blog. But it’s a big and diverse enough sample of ecologists to be worth talking about. I found the results very interesting, and a couple of them surprised me. They were so interesting I even went to the trouble of doing a PCA and making a pretty graph, as if I was Meghan or something. 🙂 So you should totally read on.
Reminder of the poll questions
The poll listed nine approaches to generality:
- Universal or nearly universal patterns or “laws”
- Simple theoretical models
- Statistical attractors
- “High level” theoretical frameworks
- Fruitful analogies
- Model systems
- Distributed experiments
- Meta-analysis
- Long-term studies
Respondents were asked which of these they used in their own work (if any), with an option for “other”. Respondents were also asked how important it is for ecologists collectively to use each of those approaches to generality (unimportant, somewhat important, or very important, with an option for don’t know/not sure/no opinion).
We got 116 respondents. They’re a mix of faculty (49%), postdocs (19%), grad students (18%), non-academic professional ecologists (10%), and others (4%). They mostly do either basic research (38%) or a mix of basic and applied research (50%); only 12% do purely applied research.
Results and commentary
Almost all ecologists seek or use multiple forms of generality in their own work. The modal respondent uses 4 of the 9 approaches to generality polled; the average respondent uses almost 5, and the max is 8. Only 4% of respondents indicated that they do not seek generality in their own work. So here’s the first interesting result. Contrary to the claims of some opinion pieces you might have read, ecologists who focus in laser-like fashion on the unique particularities of their own study system are a very small minority (or else were really undersampled by this poll, I guess…).
There weren’t any big differences among different sorts of ecologists in terms of the number of approaches to generality they use. Faculty and postdocs use slightly more approaches than grad students on average, which makes sense. More experienced researchers have had more time to use a greater range of research approaches during their careers. Applied researchers use slightly fewer approaches to generality than other researchers do, but I wouldn’t make much of this because it’s a small sample size.
Nobody chose “other” for the approaches to generality they use, which indicates that my list of 9 approaches to generality was pretty complete, at least in the minds of the poll respondents. Though I wonder a little if that would’ve remained true even if I’d listed fewer options…
Some approaches to generality are much more popular than others.
Fig. 1, below, shows the proportion of poll respondents saying each approach to generality was “very important” for ecologists to use, vs. the proportion saying it was “unimportant”. So the most important approaches to generality are in the upper-left, and the least important are in the lower-right. These calculations ignored the rare respondents who expressed no opinion on the importance of a given approach.
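The calculation behind Fig. 1 is simple enough to sketch. Here’s a minimal Python version for a single approach; the ratings data and the function name are invented for illustration, not the actual poll analysis:

```python
# Sketch of the Fig. 1 calculation: for each approach, the share of
# respondents rating it "very important" vs. "unimportant", excluding
# "no opinion" responses from the denominator.
from collections import Counter

def importance_shares(ratings):
    """ratings: list of strings from {"very important", "somewhat important",
    "unimportant", "no opinion"}. Returns (share very important,
    share unimportant) among respondents with an opinion."""
    counts = Counter(r for r in ratings if r != "no opinion")
    n = sum(counts.values())
    return counts["very important"] / n, counts["unimportant"] / n

# Hypothetical ratings for one approach:
ratings = (["very important"] * 6 + ["somewhat important"] * 3 +
           ["unimportant"] * 1 + ["no opinion"] * 2)
very, unimp = importance_shares(ratings)
print(very, unimp)  # 0.6 0.1
```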
According to the respondents, long-term studies are the most important approach to generality, regarded as very important for ecologists to use by 84% of respondents, and as unimportant by only a trivially small fraction. Next come several approaches with similar results: meta-analysis, distributed experiments, simple theoretical models, high-level theoretical frameworks, and universal laws/patterns. All were regarded as very important by 57-70% of respondents, and as unimportant by only a few. Bringing up the rear are three much less popular approaches to generality, though none were unpopular in an absolute sense: statistical attractors, model systems, and fruitful analogies. These were the only three approaches to generality regarded as very important for ecologists to use by only a minority of respondents (about 1/3 in each case). And they were the only three approaches to generality regarded as unimportant by an appreciable fraction of respondents with an opinion (up to 19% in the case of fruitful analogies).
Those results confirm my anecdotal impressions. Disappointingly to me (and to Tony Ives, I presume), ecologists mostly don’t seem to care all that much about analogies between ecological and non-ecological systems, or between very different-seeming ecological systems. I think recognizing and formalizing such analogies is both a lot of fun, and a powerful way to learn about ecology, but apparently I’m in a minority on that. And ecologists mostly don’t think that it’s particularly important to seek generality by working in model systems, so there’s another respect in which I’m in the minority. And as Brian predicted, the notion that some common patterns in ecological data might have statistical rather than ecological explanations is not a notion that many ecologists think it’s important to pursue (it’s also a controversial notion, as Brian can attest).
The most important approaches to generality are the most commonly used–with one big exception.
Why are some approaches to generality regarded as much more important to use than others? We can get some insight by looking at which approaches to generality ecologists use themselves. Fig. 2, below, plots the proportion of respondents saying that an approach to generality was “very important” for ecologists to use, against the proportion using that approach in their own work.
Notice that there’s a very tight positive correlation between the proportion of respondents who use a given approach to generality themselves, and the proportion who say it’s very important for ecologists as a group to use. No surprise there: people mostly like their own approaches and think that others should adopt them. (Though that’s not all that’s going on here; I’ll come back to that…). But there’s one big outlier: distributed experiments, the most rarely-used approach in the poll, but one that a solid majority of respondents think is very important for ecologists to pursue. Apparently, not that many people are part of NutNet, but lots of people wish they were! 😉 I was surprised by this result. In other polls we’ve done on research approaches in ecology, it is unheard of for ecologists as a group to think so highly of any research approach that so few of them use themselves. I think that’s a big compliment to the NutNet founders and other pioneers of distributed experiments in ecology. And probably a sign that we can look forward to much more distributed experimental work in future.
Ecologists tend to want other ecologists to use widely-used approaches, even if they don’t use those approaches themselves. And they tend not to want other ecologists to use rarely-used approaches, even if they use those approaches themselves.
Look again at Fig. 2. Notice that all the points fall above the 1:1 line. That is, it’s not the case that respondents think it’s very important for ecologists to use all and only the approaches to generality that they themselves use. It’s not that everybody’s saying “all ecologists should do exactly as I do”. Rather, most respondents said that at least one approach to generality that they don’t use themselves is very important for ecologists as a group to use.
Further, the more widely-used an approach to generality is, the more likely it was to be cited as “very important” for ecologists to use by people who don’t use it themselves (with the already-noted exception of distributed experiments). That’s a big part of why the points towards the right-hand end of the x-axis in Fig. 2 are further above the 1:1 line than are points towards the left-hand end.
But even that’s not the end of the story, because it’s also the case that people who use rarely-used approaches to generality are less likely than others to say that their own approaches are very important for ecologists to use. Only a bit over 50% of respondents who use fruitful analogies said that approach is very important for ecologists to use, and the same was true of respondents who use model systems and statistical attractors. In contrast, for each of the other approaches, 3/4 or more of the respondents who use it said it was very important for ecologists to use.
All this echoes results from other polls of ours: ecologists tend to want others to use the most widely-used approaches in the field, whether or not they use those approaches themselves. Conversely, ecologists tend not to want others to use little-used approaches, whether or not they use those approaches themselves. I think these results illustrate just how profoundly our own individual judgments as to how ecologists ought to do ecology are shaped by how ecologists actually do ecology. It can be hard to zig when everyone else is zagging. It’s even harder to think that everyone else should zig when they’re actually zagging–even if you yourself are zigging.
Some approaches to generality are less widely-used because applied ecologists don’t use them.
Figure 3, above, shows a PCA on the approaches to generality that respondents use themselves. Data were centered and standardized to subtract out variation among respondents in how many approaches to generality they use, so as to focus attention on variation among respondents in which approaches they use.
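For readers curious about the mechanics, here is a minimal sketch of this kind of ordination in Python. The 20×9 binary usage matrix is simulated, not the actual poll data; columns are centered and scaled to unit variance (the standardization described above) before extracting the principal components via SVD:

```python
# Minimal PCA sketch on a respondents-by-approaches 0/1 usage matrix.
# The data here are randomly generated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(20, 9)).astype(float)  # 20 respondents x 9 approaches

# Center and standardize each column (approach).
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via SVD of the standardized matrix.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * s                    # respondent coordinates on the PC axes
loadings = Vt                     # rows = PC axes, columns = approaches
explained = s**2 / np.sum(s**2)   # fraction of variance per axis
print(explained[:2])              # variance share of PC 1 and PC 2
```

The `explained` vector is what tells you whether any one axis summarizes the data well; as noted below, in the real poll data no single axis explained more than a modest fraction of the variance.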
Figure 3 shows that applied ecologists all fall towards the left end of PC 1. That’s because hardly any of the applied ecologists who took this poll use statistical attractors, universal patterns/laws, high-level theoretical frameworks, or fruitful analogies in their own work. Those approaches, along with simple theoretical models, all load positively on PC 1. So part of the reason why statistical attractors and fruitful analogies are not as widely used as other approaches to generality is that applied ecologists mostly don’t use them.
Fig. 3 also illustrates that no one PC axis explained more than a modest fraction of the variation in the standardized usage data. Usage of any given approach to generality isn’t all that strongly correlated with usage of any other approach in this dataset. Which is actually kind of interesting; that surprises me a little. For instance, I’d have thought that ecologists who work on universal laws/patterns would also tend to use simple theoretical models, and vice-versa. Apparently not.
There was no appreciable variation in which approaches to generality ecologists use that was associated with their employment (faculty vs. postdocs vs. grad students, etc.).
I took my own poll. I was a bit of an outlier in saying that every single approach on the list was very important for ecologists to use. I agree with Richard Levins in thinking that it’s best for ecology as a whole to use a (very) mixed strategy, whether we’re pursuing “generality” or any other research aim.
What do you think of all this? Looking forward to your comments.
One possibility (pure speculation here) is that more people might find things like laws/patterns/analogies or even statistical attractors to be very useful generally, but harder to use in some fundamental sense. In contrast, long-term studies and meta-analyses have implementation difficulties, but they don’t necessarily require the kinds of luck and conceptual creative sparks required for noticing general patterns (NB: personal opinion). Perhaps people said that certain approaches were useful even if they didn’t use them because they could at least see themselves using them, whereas other approaches seemed like something they had no ability to visualize themselves using.
I don’t want to be too philosophical here, but I wonder how much people responded to “ecologists should collectively use this approach” in a pragmatic way, as opposed to an idealistic way. I can definitely say that in some idealized sense I’d prefer more science of a certain kind, but constrained to our local operating equilibrium (i.e. reality as it is), I wouldn’t necessarily recommend moving in that direction.
“One possibility (pure speculation here) is that more people might find things like laws/patterns/analogies or even statistical attractors to be very useful generally, but harder to use in some fundamental sense. In contrast, long-term studies and meta-analyses have implementation difficulties, but they don’t necessarily require the kinds of luck and conceptual creative sparks required for noticing general patterns ”
I agree that the abilities and technical skills required to use statistical attractors as an approach are rarer among ecologists than the abilities and technical skills required to do a long-term monitoring study or a meta-analysis.
“I can definitely say that in some idealized sense I’d prefer more science of a certain kind, but constrained to our local operating equilibrium (i.e. reality as it is), I wouldn’t necessarily recommend moving in that direction.”
Hmm. I hope I’m not misunderstanding you here (apologies if I am), but I think this depends very much on exactly what you assume is unchangeable about ecologists. For instance, there was a time within the professional lives of today’s most senior ecologists–the 1960s and early ’70s–when both computer-intensive statistical analyses and simple theoretical models were *not* part of most ecologists’ toolboxes. That’s changed now. Same for meta-analysis. I’m old enough to remember when Jessica Gurevitch was just about the only ecologist who even knew what meta-analysis was. But thanks to Jessica Gurevitch, and others, that changed! So if someone were to say something like, “Ideally I think many more ecologists would use statistical attractors, but in practice few ecologists have the skills or desire to do that, so it’s infeasible for ecology to move in that direction”, well, the obvious response is “Why are you assuming that more ecologists can’t get good at the statistical attractor approach?” I mean, maybe there really is some insurmountable obstacle to ecology moving in that direction–but the argument needs to be made.
You could argue that approaches that have more “art” to them won’t ever be widely adopted. I wouldn’t say those approaches are harder to use, but I would say that you can’t get as far with them just by following a checklist or recipe. It’s possible to write down a checklist or step-by-step recipe for how to do (say) a meta-analysis. Following that checklist won’t necessarily guarantee a good meta-analysis, if only because there’s still some “art” to choosing a good scientific question that can then be answered with a meta-analysis. But following that checklist will guarantee that you can at least do a meta-analysis, and will usually ensure that the results are at least somewhat scientifically useful. In contrast, there’s no checklist or step-by-step recipe for spotting useful analogies between ecological and non-ecological systems. And there’s no checklist or step-by-step recipe for recognizing statistical attractors and demonstrating to others’ satisfaction that they are statistical attractors.
But maybe everything I just wrote is wrong! Maybe “art” is just a word for “people following checklists and recipes that I don’t know enough to follow myself.” Or maybe technical expertise is a necessary foundation for creativity, so that you can become more creative as a scientist by acquiring more technical expertise. For instance, once I really understood the Price equation from a technical perspective, I was suddenly much better at the “art” of spotting “creative” applications of it.
Personally, I’d like to get better at the statistical attractor approach so that I can explain the pattern discovered by Hatton et al. 2015 Science. I have a hunch that the pattern has a statistical rather than ecological explanation, but I can’t figure out the statistical explanation. I’m just fumbling around, not really knowing where to start, because I don’t know enough math.
“For instance, once I really understood the Price equation from a technical perspective, I was suddenly much better at the “art” of spotting “creative” applications of it.” Definitely. I think one can see most of the things you listed as approaches to generality as being feasible, at least in principle. Though there is a question of whether individuals recognize this – despite having multiple degrees in mathematics, I still have trouble grokking many statistical things, and this may be just a self-fulfilling prophecy. I have even taught a course on stats and just have trouble visualizing myself using anything beyond simple analyses, and I suspect this is precisely just due to lack of experience using it.
In terms of feasibility, I have in mind things like the entire field being far more mathematized than it is, such that you would be able to develop the mathematics needed for statistical attractors yourself. I don’t think this is necessarily sensible, partly because real constraints (finite time, resources, etc.) prevent the entire field from being very theoretically strong and also still as competent at all of the other aspects of important ecological work. It isn’t necessarily a zero-sum game, and as you suggest there have been huge shifts towards simulations/theory which have enhanced, rather than detracted from, important fieldwork and other experimental approaches. But I do think there are tradeoffs here that will eventually hit limits – you guys do an excellent job illustrating these with choices about different curricula, for instance.
I find it fascinating that simple theoretical models are so popular, but model systems are not. Moving to a model system is the next step after showing something can occur in a mathematical model. If you go straight to big, long-term field studies and global analyses and don’t find the result the model produced, it’s rare that you’ll understand why. Model systems, microcosms, and semi-field experiments are important for the same reason why mathematical models are, and yet it appears that there must be a bunch of people who think theoretical models are very important but model systems are not. I wonder if this has something to do with the current state of undergrad/grad education. We tend to have courses in field ecology and theoretical ecology, but do we have courses where students design microcosm experiments?
“I find it fascinating that simple theoretical models are so popular, but model systems are not. ”
Ha, I just had the same thought this morning! 🙂
” do we have courses where students design microcosm experiments?”
Here at Calgary we did for a few years! The labs in our upper level Population Ecology class involved the class working together to design, conduct, and analyze a protist microcosm experiment. We don’t do it any more, for various reasons.
I look at the preferences and I see three themes. Ecologists like:
a) physics-style approaches (frameworks, simple models, laws/patterns)
b) massive field work (long-term & distributed studies, and meta-analyses)
Ecologists don’t like:
c) approaches common in the study of other complex systems (rest-of-biology, social sciences): model systems, statistical attractors, analogies.
Although you could argue physicists use statistical attractors.
I hate to be a judgmental pessimist, but this does not leave me with the impression that ecologists are thinking through, in a top-down way, which lines of attack are most likely to work in their systems.
I was wondering if somebody would suggest an “ordination” of these results! I was struggling to come up with one myself. Yours seems plausible to me. On the other hand, it’s interesting that the PC axes don’t actually pick out your a or b. You can’t summarize the variation among the poll respondents very well with a “degree of interest in physics-style approaches” axis and a “degree of interest in many-site, long-term field work” axis.
I agree that ecologists mostly haven’t thought in a top-down way about which approaches to generality do or don’t work in any particular case. I tried to get at this a bit in that old post on the many roads to generality in ecology, which is now a paper forthcoming at a philosophy journal. But there’s much more that could be (and I’m sure has been) said, for instance on what makes for effective analogical thinking in science.
Thank you – as ever, much to ponder in there.
Maybe low use of distributed experiments is a funding/collaboration challenge that limits uptake to ‘big leader’ projects?
“Maybe low use of distributed experiments is funding”
Hmm. I don’t think so, though of course it depends on exactly what experiment you want to run. NutNet was originally funded by a single garden-variety NSF grant, plus small “in kind” contributions from the participants. Several other distributed experiments I know of were funded similarly. Distributed experiments are, or can be, cheap.
Well cheap in $ but fairly expensive in social transaction costs. Getting 20 people coordinated to write a paper is hard enough – I’m not sure I’m brave enough to try to coordinate 20 people into a grant and experiment.
Interesting comment Brian, you surprised me. Are you more daunted by the thought of trying to coordinate 20 people to run a distributed experiment, or the thought of trying to coordinate a 20 person working group? Just offhand, I find the two equally daunting. I can certainly see some differences between organizing a working group and organizing a distributed experiment. So I can totally see where someone might find one more daunting than the other, depending on one’s own skills and interests. But I’m not sure I see why one would just be *intrinsically* harder to organize than the other, for *anyone*.
I basically agree that getting 20 people organized into a field experiment is not innately harder or easier than getting 20 people organized into a working group (I say this having only done the latter, but my impression is that they’re probably not that different in dynamics).
But the notion of having 20 people at 20 institutions working on a grant together gives me nightmares. I suppose it might not be necessary – get 3 people from 3 institutions to give it interinstitutional credibility and put the rest in as participants or subcontractors.
But my main point is just that $ is not the only cost to assess. You could say the same thing about working groups. They’re cheap – $50-$150K for all out of pocket costs. It is the time invested by 20 people compounded by everything taking longer due to the social inefficiencies that add up and mean you better be sure you’re doing something worth the non-$ expense.
And as for distributed field experiments – I think it is a minority subset of ecologists who can pull off leading a working group, and I expect that is true for distributed field experiments as well. Those “soft” skills that enable big science are not something we train for; they are increasingly in demand but still relatively rare.
“But the notion of having 20 people at 20 institutions working on a grant together gives me nightmares.”
My impression is that NutNet’s way around this was for the initial grant authors to be a much smaller group of people, who then invited others to join the already-designed and already-funded project. A NutNet leader once described it to me as a matter of setting up framework or “core”, that people with diverse interests can join and build on in creative ways (though they’re not free to change the basic framework or “core”). I really like that way of putting things. I once described NutNet as top-down, centrally-coordinated science, but that’s not quite right. It’s actually a mix of top-down, centrally coordinated science and bottom-up, individual-investigator-driven science. The key is that the top-down, centrally-coordinated bits are what provide the foundation that individuals can then build on in creative ways. Without that shared foundation across all sites, their individual creative ideas wouldn’t be able to get off the ground at all, or wouldn’t have a generalizable context.
Agree that the “soft” skills to do this aren’t skills that every ecologist has, or ought to try to acquire, though we probably should try to do a better and more systematic job of offering training in them to anyone who wants it. I certainly don’t have the soft skills to pull off a NutNet! And the one working group I led was only modestly successful (one good paper, but just the one; most of the work we did ended up in the metaphorical file drawer).