Poll results: the many ways ecologists seek generality (and why some are much more popular than others)

Ecologists study lots of different things, and those things vary a lot in all sorts of ways. No two sites/times/organisms/populations/communities/ecosystems/species/landscapes/food webs/whatevers are exactly alike. Should ecologists seek generalities about the diverse, heterogeneous stuff they study? If so, what kinds of generalities should they seek and how should they seek them? Those are questions on which prominent ecologists have long disagreed. But the views of a few prominent ecologists might well be unrepresentative of the views of ecologists collectively, for the same reasons that the loudest voices in any discussion might not be representative of the views of everyone in the room. So to get a bit of data, I polled y’all on these questions. Here are the results!

As with all our polls, this isn’t a random or representative sample of all ecologists, or even of all readers of this blog. But it’s a big and diverse enough sample of ecologists to be worth talking about. I found the results very interesting, and a couple of them surprised me. They were so interesting I even went to the trouble of doing a PCA and making a pretty graph, as if I were Meghan or something. 🙂 So you should totally read on.

A brief history of ecologists’ disagreements about “generality”, in quotes

One very natural response to my poll from earlier this week on ecologists’ views about generality is, wait, don’t we all seek generality in our own work? After all, every theoretical and empirical paper every ecologist writes has a passage discussing whether/how the results generalize to other circumstances–other sites or times, other species, other models making different assumptions, etc. So isn’t a poll about ecologists’ attitudes towards generality just going to reveal a boringly-high level of agreement? Like asking people whether ice cream is good?

I actually do agree–and the poll results so far confirm–that almost all ecologists care about generality in some sense or other, and pursue generality in their own work using some approach or other. But I also think–and the poll results so far confirm–that there’s substantial variation among ecologists in what specific kinds of generality they seek in their own work, and value most in the work of others. I did the poll to learn more about that variation.

I’m not claiming any special prescience here. I doubt it will surprise anyone familiar with the history of ecology to learn that ecologists disagree a fair bit about exactly what forms of “generality” to seek, and how to seek them. For instance, here are a bunch of quotes about generality from prominent ecologists past and present. Try to find any generalities about them–besides the fact that each disagrees with most of the others!

“To do science is to search for repeated patterns, not simply to accumulate facts.” – Robert MacArthur (1972)

“Unlike population genetics, ecology has no known underlying regularities in its basic processes…” – Leigh Van Valen and Frank Pitelka (1974)

“The very most important thing to me, being a scientist, is to seek out unification.” – John Harte (2014)

“I think of ecology as a library of well-developed case studies.” – Tony Ives (2014)

“General ecological patterns emerge most clearly from this glorious diversity when systems are not too complicated…and at very large scales, when a kind of statistical order emerges from the scrum. The middle ground is a mess.” – John Lawton (1999)

“Community ecology is often perceived as a mess, given the seemingly vast number of processes that can underlie the many patterns of interest, and the apparent uniqueness of each study system. However, at the most general level, patterns in the composition and diversity of species–the subject matter of community ecology–are influenced by only four classes of process: selection, drift, speciation, and dispersal.” – Mark Vellend (2010)

“[T]here are several very general law-like propositions that provide the theoretical basis for most population dynamics models…Some of these foundational principles, like the law of exponential growth, are logically very similar to certain laws of physics” – Peter Turchin (2003)

“[W]e don’t need no stinkin’ laws” – Bob O’Hara (2005)

“These [previous] studies have provided more and better data on a wide range of ecological phenomena. There has not, however, been comparable conceptual progress in organizing and synthesizing existing information, producing mathematical models that are both realistic and general, and developing a body of ecological theory that can account for both the infinite variety and the universal features of organism-environment relationships.” – Jim Brown (1997)

“Our future advances will not be concerned with universal laws, but instead with universal approaches to tackling particular problems.” – Peter Kareiva (1997)

“The multiplicity of models is imposed by the contradictory demands of a complex, heterogeneous nature and a mind that can only cope with a few variables at a time; by the contradictory desiderata of generality, realism, and precision; by the need to understand and also to control; even by the opposing esthetic standards which emphasize the stark simplicity and power of a general theorem as against the richness and diversity of living nature. These conflicts are irreconcilable. Therefore, the alternative approaches even of contending schools are part of a larger mixed strategy.” – Richard Levins (1966)

It is of course possible that these haphazardly-chosen quotes are unrepresentative of the range of views among ecologists more broadly. Maybe it’s only ecologists who write opinion pieces about “generality” who disagree about “generality”! That’s why I ran the poll, to find out. 🙂 Look for the results soon.

Poll: should ecologists seek generalities, and if so, how?

One of the most fraught questions in ecology is whether or how ecologists should seek generalities. Debates over this broad question crop up in many contexts and take on many forms. Think of debates over whether ecology has “laws” analogous to the laws of physics. Debates over whether ecologists ought to focus on producing information relevant to management or conservation of specific species or locations, rather than on less-useful generalizations. Debates over whether we should give up on doing community ecology because every community is an idiosyncratic special case. Etc.

In an old post, I tried to partially resolve some of these debates by suggesting that there are many different sorts of “generality” that ecologists might seek. I argued that they’re all valuable in their own way, though some might be more achievable and/or valuable than others depending on the goals and interests of the investigator.

But it’s my anecdotal impression that ecologists as a group value some forms of generality over others. And that different sorts of ecologists tend to seek, and value, different sorts of generality. To get some data, I hope you’ll take the short poll below (just four questions). It asks you about your opinions about, and own use of, various sorts of “generality” in ecology and ways of seeking generality. Here’s a brief summary of each of them:

  • Universal or nearly-universal patterns or “laws”. Think of quarter-power body size allometries, the latitudinal species richness gradient, the species-area curve, Bergmann’s Rule, etc. They provide generality because many different systems/species/cases fit the same pattern, or obey the same rule or “law”. (EDIT: as Brian points out in the comments, Bergmann’s Rule turns out to be more of a purported pattern or law than an actual one. It has too many exceptions to be a pattern or law. But I’m leaving it here as an example, because sometimes when you set out to study a purported universal pattern or law, you find that it’s not a pattern or law at all. But you still set out to study a (purported) pattern or law, so your research still falls in this category.)
  • Meta-analysis. That is, statistical summaries of the results of different studies of the same phenomenon. They provide generality in the sense that they tell us what’s statistically “typical” or common, and how much variation there is around what’s “typical”.
  • Simple theoretical models. I’m thinking here of theoretical models that are intended to “capture the essence” of the phenomenon being modeled, “sharpen our intuitions” about the phenomenon being modeled, or identify some general “principle” about the phenomenon being modeled. These models aren’t intended as realistic, exact descriptions of any particular system, in part because they assume away or make very simple assumptions about other phenomena besides the one being modeled. But these simple models are thought to provide generality because they apply in an approximate way to many different systems, or act as a simplified “limiting case” for many different systems. Think of the Lotka-Volterra competition model, the Rosenzweig-MacArthur predator-prey model, Tilman’s R* model of resource competition, the marginal value theorem of optimal foraging, etc.
  • Statistical attractors. Think of MaxEnt-type phenomena: empirical patterns that are common because they’re hard to avoid. The pattern is commonly observed (and so “general” in that sense) because many different ecological processes/mechanisms/scenarios would give rise to the pattern, and few would give rise to any other pattern. The pattern is thus a statistical “attractor”–a statistical inevitability. For instance, the fact that species-abundance distributions always have a lognormal(ish) shape may indicate that that shape is a statistical attractor. That shape is common because population growth is always a multiplicative stochastic process, and system-specific ecological details have little or no effect on the overall lognormal(ish) shape of the distribution.
  • “High level” theoretical frameworks. Think of modern coexistence theory, the Price equation, and the “four fundamental forces” framework of population genetics (selection, drift, mutation, migration). These “high level” frameworks provide generality in the sense that they unify and subsume various models as special cases.
  • Fruitful analogies. I think of this as the Tony Ives approach to generality. For instance, lots of different ecological systems have been hypothesized to exhibit alternate states, with stochastic perturbations occasionally flipping the system from one state to the other. In the linked post, Tony Ives talks about how his previous experience with alternate states in other systems helped him recognize and model the possibility of alternate states in the population dynamics of Icelandic midges. That is, there are analogies one can draw between those midges, and other systems (including even non-living systems!) that exhibit alternate stable states. Those analogies are a form of generalization–they allow us to reinterpret our knowledge of one system so that it applies to another, analogous system. As another example, think of how Steve Hubbell’s neutral theory, and Mark Vellend’s theory of ecological communities, both start by drawing an analogy between community ecology and evolutionary biology.
  • Model systems. A model system is one that has features that make it particularly tractable to address the question of interest. Studies in model systems often are thought, hoped, or assumed to apply to other systems as well, sometimes because the question of interest isn’t tractable to address in any non-model system.
  • Distributed experiments. That is, experiments that run simultaneously at many sites, using the same methods everywhere, so as to facilitate cross-site generalization. One could consider this a special case of meta-analysis, but I split it out because I feel like many ecologists tend to think of it as its own thing. Think of NutNet, for example.
  • Long-term studies. A long-term study should capture a greater range of temporal variation than a short-term study, and so should provide insights and conclusions that can be applied in a greater range of circumstances.
  • Other. No doubt my list of forms of generality and ways of seeking generality is incomplete, at least in the eyes of some of you!
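The “statistical attractor” idea above can be illustrated with a toy simulation (a hypothetical sketch I’ve added for concreteness, not something from the poll itself). Each species’ abundance is multiplied by an independent random growth factor each year. The factors are drawn from a uniform distribution, deliberately not a lognormal one, to show that the lognormal(ish) shape of the resulting abundance distribution doesn’t depend on system-specific details of the multipliers:

```python
import numpy as np

rng = np.random.default_rng(42)
n_species, n_years = 1000, 50

# Every species starts at the same abundance; each year, each species'
# abundance is multiplied by an independent random growth factor.
# The factors are uniform on [0.8, 1.25] -- deliberately NOT lognormal.
abundances = np.full(n_species, 100.0)
for _ in range(n_years):
    abundances *= rng.uniform(0.8, 1.25, size=n_species)

# Log-abundance is a sum of many independent random terms, so by the
# central limit theorem it's approximately normal -- i.e., the abundance
# distribution is lognormal(ish) regardless of the multiplier's details.
log_ab = np.log(abundances)
skew = np.mean((log_ab - log_ab.mean()) ** 3) / log_ab.std() ** 3
print(f"skewness of log-abundances: {skew:.2f}")
```

The skewness of the log-abundances comes out near zero, as it would for a normal distribution, even though nothing lognormal was built into the simulation. That’s the sense in which the lognormal shape is an “attractor”: many different underlying growth processes converge on it.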

Looking forward to your responses!

Should we judge a scientific field by its classic papers, its typical current papers, or its best current papers?

How should we judge a scientific field? One way would be via typical current papers in the field. After all, that’s what most papers are, almost by definition: typical. So if the typical paper in a field is a good paper, however you define “good”–it addresses an interesting question, uses technically sound methods, etc.–then the field as a whole presumably is doing well.

But that’s not the only way to judge, and maybe not the best way. I was prompted to think about this because of a passage in an old Anthony Lane essay on bestsellers (boldface emphasis added):

It is easy to brush aside best-seller charts as the product of hype and habit, but they are a real presence in the land of letters, generating as much interest as they reflect. And if they do, to an extent, represent the lowest common denominator of the print culture, this only strengthens our need to pay attention, since where else is that culture common at all? ‘Twas ever thus: anyone who imagines that a hundred years ago Americans were rushing out to buy the newest Henry James is kidding himself…This is nothing to be ashamed of; it is a proper corrective to our historical arrogance, the conviction that the best writings of our time will, shored up by our plaudits, both outlive us and represent us in centuries to come. But they won’t; we may not even have clocked the real thing when it passed before our eyes. That is why the ideal literary diet consists of trash and classics: all that has survived, and all that has no reason to survive–books you can read without thinking, and books you have to read if you want to think at all. In between is the twilight zone, the marshes of the middlebrow…That is why we should turn to the Times [bestseller] list every Sunday morning. If the language is still alive down at this end of the market–if there is juice running through the art of basic narrative–then we have no cause to be downhearted. Conversely, if the list is crammed with John Grisham, then we can all go out to brunch and rue the decline of the West.

The suggestion here is that the best novels can only be identified in retrospect, by having repeatedly proven their value to new generations of readers. So if you’re not reading old stuff that we’ve long known is well worth reading, you might as well just read something fun. Trying to write a Classic Novel is likely to result in a novel that’s neither fun to read nor of lasting value. So if the bestseller list is dominated by books that don’t strive for Classic status, but merely get the basics right–readable sentences, engaging characters, compelling plots–then we should all be happy about the state of literature. It means that our novelists are writing, and the bulk of readers are reading, good “trash”. Which, in the moment, is as much as we can expect of either novelists or their readers.

Does an analogous argument apply to science? That is, rather than evaluating a field by looking at its typical papers, perhaps we should look at its classic papers, and the atypical recent papers that become the scientific equivalent of bestsellers–widely read and cited, even if they’re quickly forgotten. After all, the typical paper will be quickly forgotten without ever being much read or cited, so does it really matter all that much if it’s any good?

A related argument, due to economics columnist Noah Smith, is that “vast literatures” on a topic function as “mud moats”: an ocean of typical research papers that would take ages for anyone to read, but wouldn’t provide much actual knowledge or insight even if you somehow did read them all. Indeed, they might actually be misinformative in some cases. Noah suggests a solution: the Two Paper Rule. If you can cite two outstanding papers that illustrate and exemplify the virtues of some larger body of literature, that larger body of literature might be worth reading. But if you can’t, then that larger body of literature is probably a mud moat.

One reason not to apply Anthony Lane’s argument to science is that scientific papers that go on to become classics often are widely-read and cited immediately upon publication (there are exceptions). That’s in contrast to literature, in which novels that go on to become classics often aren’t bestsellers when they first come out. Which if anything actually strengthens Anthony Lane’s conclusions in the scientific context. If future classic scientific papers are mostly among the “bestsellers”, then that’s all the more reason to read the bestsellers, and to judge scientific fields by their current bestsellers rather than by their typical papers. Isn’t it?

One further implication, which I’ve argued for in various old posts, is that it’s probably optimal for science as a whole for post-publication scrutiny to focus narrowly on a small fraction of high-profile, potentially-influential papers (recent example). Though I’m not a fan of going one step further and giving no or only cursory pre-publication peer review to most other papers…

What do you think? Looking forward to your comments.

Gender balance of the faculty and chairs of N. American EEB departments

As best one can tell from publicly available information, recently hired N. American (US & Canada) TT asst. professors in ecology and allied fields are 57% women, somewhat higher than their representation in the applicant pool. That’s a bit of good news: it represents real systemic progress in this one narrow area, and it hopefully facilitates systemic progress in other areas. For instance, it’s easier to achieve gender-equitable service loads, while also having diverse committees, if departments have more women faculty.

But of course, those data don’t tell you anything about hiring before 2015-16, and those past hiring decisions are still having consequences today. Nor do those data tell you anything about tenure and promotion decisions, either currently or in the past. And they don’t tell you who holds senior leadership positions like department chair/head. We can look at public data to get some insight. But public data are aggregated very broadly by field (for instance), and there is considerable variation among fields and even subfields in terms of representation of women.

So to get a snapshot of the current state of play at a systemic level in ecology & evolutionary biology (EEB), I looked up the gender balance of tenured and TT faculty in 23 N. American EEB departments (22 US departments plus Toronto), splitting the data by rank (asst., associate, or full professor).

Equity and diversity targets in science, part II: views depend on identity (guest post)

Note from Jeremy: This is a guest post from Françoise Cardou & Mark Vellend.

A few weeks ago, we came to you with a knotty lunchtime debate: are quantitative equity and diversity targets in science a good idea, and if so, on what basis? Unequivocally, certain groups of people face unequal challenges and barriers in science. Any specific policy measure to address this issue necessarily comes with both benefits and costs, and we wanted to find out what people (from a broad range of situations) actually think. We want to thank everyone who chimed in: you are now honorary participants in our lunch group. We return today with the results.

First, a brief recap. Our questions were motivated by changes to the diversity targets used in the Canada Research Chair (CRC) program, which have for a few years now been based on the “availability” of people from designated groups (women, Indigenous peoples, people with disabilities, and visible minorities) in the pool of candidates. Following a lawsuit from several Canadian faculty members under the Human Rights Act, the new CRC targets will now be based on the representation of people in these same groups in the general population*. As one example, the target for women will increase from 21% to 50% in the natural sciences and engineering. These represent two ways of setting quantitative targets, and we can imagine a range of rationales for different sets of such targets, each with a different goal and balance of costs and benefits (see the original post).

We intended first to find out whether people perceived quantitative EDI targets as a compromise between two forms of discrimination, and second whether there was any allocation policy around which there was broad agreement. Finally, a few obvious but important points: this poll is not a scientific study (it’s an online poll), and the results do not say anything about what is right or wrong, simply that some people feel this or that way. Should anyone like to have a look for themselves, you will find the full dataset here (please let us know if you dig deeper, we’d love to know what you find!).
