If we already understood community ecology, would we even know it? (includes poll)

Recently, I happened across this old post from psychologist Tal Yarkoni, asking how we would even know if we “understand” the brain. His motivation for asking the question is the observation that, if you ask neuroscientists if they understand the brain, they’ll say “no” and emphasize how little they know about the brain. But yet, many thousands of smart people have been studying the brain for over 100 years. Individually and collectively, they’ve learned a lot! Which suggests one of two possibilities. First, that collectively we do understand the brain–but that no one individual understands the brain (or recognizes the existence of our collective understanding). Or, that we will never understand the brain, individually or collectively, because that’s impossible. For instance, because the questions we’re asking about the brain are ill-posed and so don’t have answers, at least not the sort of answers we’re looking for.

Question: is the same true of community ecology? Anecdotally, many community ecologists are always banging on about how complicated and idiosyncratic ecological communities are, how there are so many basic things we don’t know about them, how we can’t predict many of their features with any precision, how we don’t have any good general theory of community ecology, etc. But yet, lots of smart people have been studying what we now call community ecology since before the term “ecology” was coined over a century ago. For instance, substantial chunks of the Origin of Species concern topics that now comprise part of community ecology. And like any community ecologist, I could and do spend many hours telling other people things that I know about community ecology. I teach classes, I write papers, I give research seminars, and so on. So does that mean that, collectively, we already do understand community ecology, even if no individual community ecologist would cop to understanding community ecology? Or does that mean we’ll never feel like we understand community ecology, because it’s not clear what it would even mean to “understand” community ecology, or have a “theory” of community ecology, or etc.?

Note: I personally would actually say that I understand a fair bit about community ecology, and that community ecologists collectively understand even more. Does that make me unusual? Let’s find out! Take the two-question poll below.

Related old posts:

Synthesizing ecology

Are there inherently complex ecological phenomena? (aside: that’s one of the first posts I wrote back when I was blogging for Oikos Blog, and remains a personal favorite. Man, I used to be a good blogger.)

The many roads to generality in ecology

Why ecology is hard, and fun: multicausality

 

32 thoughts on “If we already understood community ecology, would we even know it? (includes poll)”

  1. Jeremy, I think it’s a problem when a fundamental goal of science is to better understand the natural world and we are comfortable saying that it’s not clear what it would mean ‘to understand’ something. It implies that there is no way to measure understanding. Is this because the problem of measuring understanding has been exhaustively addressed and we have concluded it is intractable? It seems like a question epistemology would be grappling with, but I don’t know if quantifying understanding has been part of the program. Jeff

  2. The notion that collectively we have it figured out is appealing, but it is really a very reductionist world view that we just need to know all the parts and put them together. We have debated this before, Jeremy, but assembling pieces will tell us some things (e.g. strong pairwise interactions), and I very much doubt it will advance us towards understanding more emergent phenomena like species richness.

  3. In my current standard seminar talk I define community ecology as the following: I give you a list of species. I tell you a set of environmental conditions. You study & measure anything you want about those species in isolation from each other. Then you predict the resulting community if all these organisms are thrown together in the specified environmental conditions.

    Even that is an oversimplification. E.g. it ignores preemption effects (you could fix this by my also telling you the order in which the species on the list are added to the community). But I have always thought that was pretty close to the mission statement of community ecology.
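    In model form, a minimal sketch of what this exercise could look like (the generalized Lotka-Volterra framing and every parameter value below are illustrative assumptions, not a claim about how the prediction should actually be done):

    ```python
    # A toy "assembly prediction": take growth rates and pairwise interaction
    # coefficients measured "in isolation", then predict the community by running
    # generalized Lotka-Volterra dynamics.  All numbers are made up.
    import numpy as np
    from scipy.integrate import solve_ivp

    r = np.array([1.0, 0.8, 0.6])        # intrinsic growth rates
    A = np.array([[1.0, 0.5, 0.3],       # A[i, j]: per-capita effect of species j on species i
                  [0.6, 1.0, 0.4],
                  [0.7, 0.5, 1.0]])

    def glv(t, N):
        # dN_i/dt = N_i * (r_i - sum_j A_ij * N_j)
        return N * (r - A @ N)

    N0 = np.full(3, 0.05)                # all species "thrown together" at low abundance
    sol = solve_ivp(glv, (0, 500), N0)
    final = sol.y[:, -1]

    print("predicted persisters:", np.flatnonzero(final > 1e-3))
    print("predicted relative abundances:", np.round(final / final.sum(), 2))
    ```

    Of course, the hard part is that for a real species list we can rarely measure anything like r and A well enough for a prediction like this to hold up.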

    • Yes! And right now I would say that our ability to do this is mediocre at best, so we have no more than a rudimentary understanding of community ecology.

    • Just to clarify – I agree with (and had assumed), as Matthew Holden noted below, that “predict the resulting community” includes all the things ecologists like to study about community properties, like:
      – relative abundances
      – total community size
      – species richness
      – temporal patterns (e.g. in abundance)
      – probabilistic statements of occupancy and probability distributions of abundance

      If you accept this definition you have to vote pretty low on our understanding.

      A simpler but still high hurdle would be a perturbation theory. Study an existing community and one non-member species in isolation as much as you want. Then predict what happens when you add that species at a certain time in certain numbers. Or study an existing community and each species in isolation as much as you want. Then predict what happens when the climate changes.

      To me, even though we are far from achieving these goals, these kinds of thought exercises are suggestive about what heading towards a solution looks like. In particular, we are going to have to treat a fair amount as stochastic and move to more probabilistic claims and stochastic models. I don’t personally believe we could ever have a deterministic solution to the above kinds of challenges.
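      As a concrete (and purely illustrative) example of the kind of probabilistic claim I have in mind for the invasion version: estimate the probability that a species introduced in small numbers establishes, using a simple stochastic birth-death model in which the resident community appears only as an extra density-dependent death term. None of the rates below come from any real system.

      ```python
      # Toy probabilistic "perturbation" prediction under demographic stochasticity.
      import numpy as np

      rng = np.random.default_rng(1)

      def invader_establishes(n_intro=5, birth=1.0, death=0.7,
                              competition=0.002, resident_density=100, max_events=5000):
          """Birth-death process for the invader alone; the residents only add
          a density-dependent death term.  Returns True if the invader is still
          present (or has clearly taken off) after max_events events."""
          n = n_intro
          for _ in range(max_events):
              if n == 0:
                  return False                                   # invader lost
              if n > 500:
                  return True                                    # treat as established
              b = birth * n                                      # total birth rate
              d = (death + competition * resident_density) * n   # total death rate
              n += 1 if rng.random() < b / (b + d) else -1
          return n > 0

      trials = 1000
      p = np.mean([invader_establishes() for _ in range(trials)])
      print(f"estimated establishment probability: {p:.2f}")
      ```

      The answer comes out as a probability rather than a deterministic yes/no, which is the sort of claim I think we should be aiming for.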

  4. I think a more relevant question would be “How much are we capable of understanding about the ecology of communities?” I suspect it is a little bit like the weather or the climate. We can predict some broad features and trends, and may understand the general scaffolding of communities, but ultimate understanding of all the details may elude us.

  5. According to early poll results, the entire premise of this post is wrong. Community ecologists are *not* like neuroscientists; they mostly do not say that they don’t understand community ecology. A substantial majority of respondents so far say that they personally understand community ecology “somewhat” (3 on a 1-5 scale). And a substantial majority say that community ecologists collectively understand community ecology even better than that (4 on a 1-5 scale).

    So here’s an interesting comparative question for sociologists of science: which scientific fields or subfields are most self-confident about their own understanding of their subject matter, and which are least self-confident?

    • I wonder if this would be more a difference between subfields that are applied science versus those that are basic science?

    • The poll responders are way too optimistic. I went 2 and 3, and think I probably should have even gone one lower on each [I’d probably go 2, 3 on a 10 pt scale, just don’t want to give a one since I think both I and community ecologists know something].

      If we take Brian’s definition in the comment above (which I think is pretty darn good), I have a hard time believing anyone could go higher than a 3.

      I’d add to Brian’s definition that “you predict the resulting community [through time and space, by either specifying probabilities of occurrence, relative densities, or abundances]” And I don’t think anyone is even remotely close to being able to do this.

      • Lol, having given a 4 I may be optimistic in part because I tend to work with long-term data in populations or communities dominated by 1-2 species. There are quite a few of those out there (adjusting for scale separation)! It may also be because I define understanding as the fraction of “predictable” variance that we can explain given limitations of available data – else, you might say weather predictions will be terrible in near-perpetuity because we can’t predict > 2 wks out.
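        Written out (my notation, purely to make that definition concrete):

        $$\text{understanding} \;=\; \frac{\text{variance our models explain}}{\text{variance that is predictable in principle, given the data we can collect}}$$

        so the denominator deliberately excludes irreducible noise – the analogue of not grading weather forecasts on fluctuations more than ~2 weeks out.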

        broader PS: scientists might also think they know little because they focus, by definition, on the unknown, so perceived knowledge might increase with how applied the (sub)field is, or with the speed of progress.

      • That’s a very interesting point, Vadim! I’m not sure what the appropriate timescale is for weather vs ecological dynamics. 2 weeks out for weather is pretty long term; maybe that’s like 2 centuries for ecological communities? A given region (say the size of a Hawaiian island) will have, for example, many clouds moving in and out in an hour, whereas turnover in communities is not going to be that rapid. I’d give knowledge of the weather a 4 by the rating system we are using, and community ecology (by community ecologists) a 2. I’d suspect you’d give the weather a 5, so we probably aren’t so far off in a relative sense. I’m just a bit more pessimistic when I talk about understanding something. It’s probably from my background – in my field you don’t understand something until you can prove it as a theorem and teach the proof to others ;). Those are extremely high standards for knowledge.

  6. Why does studying something for hundreds of years mean we either already understand it or will never understand it? In 1850 someone would have said people have been studying how light propagates for 200+ years and they still did not understand it (they really had not figured it out), but now I think we would say we do….

    • Yes, the post makes an implicit assumption about the timescale of scientific progress. I think that assumption is reasonable for fields like ecology. It might be debatable for physics, at least prior to the 19th century.

      Which raises an interesting (and difficult) question: for how long does progress towards increased understanding need to cease before we worry that the field has gone off the rails? Particle physicists are worrying about this right now, having been stalled for a few decades: https://dynamicecology.wordpress.com/2018/08/30/book-review-lost-in-math-by-sabine-hossenfelder/ And we have an old post (sorry no time to search for it just now) in which Brian and I discussed whether ecology is progressing as fast as it “ought to”, or fast “enough”.

  7. Via Twitter:

    • I love this quote and so went searching for the source. It appears to be Bertrand Russell (https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences)
      Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover.
      — Bertrand Russell

      Immediately following, I found this quote
      There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.
      — Israel Gelfand

      • Those are two great lines! Can’t believe I didn’t know either until now.

        As a Price equation fan, I’m still partial to Russell’s line about how a good notation has a subtlety and suggestiveness that make it seem almost like a live teacher, and that notational irregularities often are the first sign of philosophical errors. (going by memory rather than quoting exactly, but that’s pretty close). The Price equation can be thought of as clever notation.
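        For anyone who hasn’t seen it written out, the Price equation in its standard form (with $w_i$ the fitness of entity $i$, $z_i$ its trait value, $\bar{w}$ mean fitness, and $\Delta z_i$ the trait change during transmission) is

        $$\Delta \bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}} \;+\; \frac{\operatorname{E}(w_i \,\Delta z_i)}{\bar{w}},$$

        i.e. a selection term plus a transmission term – which is the kind of suggestive partitioning Russell’s line about notation is getting at.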

      • I agree about notation. It is really just a terse form of writing. Like writing it can have personal style and takes effort to do well. And like writing it can create immense cognitive hurdles to understanding what is being said or make the truth practically leap off the page.

        I would also agree the Price equation is little more than clever notation which is why it has had so much impact.

      • “which is why it has had so much impact.”

        Has it? We could probably have an inconclusive but interesting discussion of whether the Price equation has had a lot of impact!

  8. Speaking of understanding the human brain: Ed Yong has a good piece just out about how, 10 years ago, a top neuroscientist said he was going to simulate a human brain within 10 years (https://www.theatlantic.com/science/archive/2019/07/ten-years-human-brain-project-simulation-markram-ted-talk/594493/). The European Commission gave this project a 1 *billion* euro grant. The project failed, at least in terms of the stated goals of simulating a human brain within 10 years. Click through for quotes from neuroscientists complaining that, even if the project had succeeded, it would’ve been useless because it wouldn’t have helped us understand the brain.

    As an aside, the contrast with the Human Genome Project is interesting. Both projects had many critics from the start, who accused both of being question-free and therefore pointless. But the Human Genome Project did sequence the human genome, and many fewer people these days complain that it was a waste of money or question free or whatever. Does that show that critics of the brain simulation project were right but the critics of the Human Genome Project were wrong? Or does it show that, “if you build it, they will come”? That is, if you set out to do expensive scientific/engineering thing X, and you do it, most people will declare it a success? Any analogy to NEON (and the IBP) is left as an exercise for the interested reader…

    • Even more off-topic question: is there some optimal non-zero amount of over-optimistic hype for science? That is, in order for governments and foundations to support the socially-optimal amount of basic scientific research, does some (all?) proposed research need to be overhyped, in terms of the likelihood, immediacy, or magnitude of its benefits?

      I know there’s a bit of work on analogous questions in the animal behavior literature. When will selection favor “optimistic” habitat selection behaviors, for instance? “Optimistic” behaviors being movement behaviors based on overestimation of how beneficial it would be (on average) to move to some new location. When is it good, not just for the individual scientists doing the hyping, but for science as a whole, for some proposed lines of research to be overhyped?

      A related question is: how come overhyping is even possible? Why don’t *all* those on the receiving end of the hype see through it?

    • Well, the HGP was getting large data. The brain simulation was modelling via computer simulation. The IBP was largely modelling via simulation (it certainly had data collection, but it was channelled to get very specific pieces of data for modelling, not to systematically collect unknown data across a continent or something). I still remember my advisor gleefully telling me how one biome project built the simulation, pushed the button, and predicted the biome (I think it was prairie) would be knee-deep in dung, which is obviously not close to true.

      One could wildly extrapolate a pattern of systematic data collection=good, simulate the world=bad. That might imply hope for NEON, although it is not a very comprehensive sample (one site, many samples per biome). I will be curious to see about NEON in the long run. Similarly, there is a connectome project trying to identify every neural connection in the brain. One could argue that will be more analogous to the HGP than the brain simulation. Curious to see how that turns out too, although most neuroscientists I know are skeptical of the value.

      And to be fair, I think it is important to note that although nobody says the HGP was a waste of time, it is clear it was massively oversold. The instant understanding of many new genes, gene regulation, cancer genes, etc. has not materialized out of the HGP. Work on gene variation, the proteome, and many other projects was to some degree enabled by the HGP, but there is a lot of work left to do. The HGP was more of a foundation/starting point than the end point initially implied.

  9. “First, that collectively we do understand the brain–but that no one individual understands the brain ….Or, that we will never understand the brain, individually or collectively, because that’s impossible. ”

    I’m puzzled as to why it has to be “either…or”. We can know a lot and still know relatively little.

  10. I think community ecologists collectively have made plenty of advances over that timeframe in understanding general principles/knowledge, but there is still lots to learn on details/contexts. Especially in this time of unprecedented environmental change, many old ‘rules’ may shift under novel conditions. Insects are a great example – lots of fundamental community ecology principles are ‘well-known’ based on plants, mammals, birds, etc., but less is understood about insect community dynamics.

    Will we ever know it all? Who knows, and maybe the process of adaptive learning is more exciting!

    • Manu, I often hear this – that we have the general story but just need the details. What are those general principles that we know and understand? And it seems to me that real understanding of a system allows us to forecast what will happen under novel conditions – fundamental principles don’t always, or even often, change under novel conditions. That is, the rules of the game don’t change under novel conditions; they just lead to outcomes we haven’t seen yet.

  11. We are often “experts” on a certain study system or process and have some predictive power in that area. Transferring that understanding to other systems often turns out to be difficult, suggesting idiosyncrasies are widespread. Having said that, some patterns have already been found to have strong predictive power (e.g. island biogeography, diversity–productivity relationships (in plant communities?)).
    Personally I think it is too early to conclude that idiosyncrasies are so common we cannot understand community ecology, and the fact that lots of people have been “thinking” community ecology before us has no bearing on the issue. For one, they didn’t have the same tools. The tools for combining community structures with their evolutionary (and ecological) histories to discover dependencies, and to control for them, have only been available for 2 or 3 decades. We should be able to assess whether such varying histories underlie what we now call idiosyncrasies, and whether accounting for them will resolve those differences at least to some degree. I live in hope and will help provide data to test hypotheses around phylogenetic dependencies and community structures.

  12. I’m late to this, but if you and Brian (and others) haven’t seen this article from some years back, it may be a useful way to grapple with how “real understanding” (whatever that means) is hard to come by, even when we know all of the details of the system.

    https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

    Also the below excerpt from Murray’s book on Mathematical Biology seems useful, if you haven’t read it (sorry – I can’t just quote a small part of it!)

    “How complicated should a model be? Consider the task of explaining to someone how a clock works. It would help, of course, if they understood the mechanics of gears and levers; however, to understand the clock you would have to simply describe it: this gear turns that one, and so on.

    Now this is not a very satisfactory way to understand a phenomenon; it is like having a road map with a scale of one mile equals one mile. ‘Understanding’ usually involves some simplified conceptual representation that captures the essential features, but omits the details or secondary phenomena. This is as good a definition as any of what constitutes a model.

    Just how simplified a model can be and still retain the salient aspects of the real world depends not only on the phenomenon, but how the model is to be used. In these chapters on mechanical aspects of morphogenesis we deal only with mathematical models; that is, phenomena which can be cast in the form of equations of a particular type.

    Mathematical models can be used to make detailed predictions of the future behaviour of a system (as we have seen). This can be done only when the phenomenon is rather simple; for complex systems the number of parameters that must be determined is so large that one is reduced to an exercise in curve fitting. The models we deal with in this book have a different goal. We seek to explain phenomena, not simply describe them.

    If one’s goal is explanation rather than description then different criteria must be applied. The most important criterion, in our view, was enunciated by Einstein: ‘A model should be as simple as possible. But no simpler.’ That is, a model should seek to explain the underlying principles of a phenomenon, but no more. We are not trying to fit data nor make quantitative predictions. Rather we seek to understand. Thus we ask only that our models describe qualitative features in the simplest possible way.”

  13. There are two schools of thought about models that I have never completely understood, and yours is one of them, Andrew. This may be because my thoughts on this are constrained by my experiences in ecology, where we are trying to explain variation in characteristics of ecosystems that are usually quantitative or that take different states (e.g. species richness, abundance, biomass). The idea I struggle with is that the ‘best’ model is one that doesn’t capture all the complexity of the phenomenon being modeled…that, somehow, we are better off knowing less than all there is to know. I get how there may be logistical or practical reasons for preferring a model that is simpler than the ‘true’ model, but other than logistical reasons, why would we ever prefer to know less than there is to know? If we have full understanding we can condense to simpler models when we need to. However, if our total understanding is something less than complete, we can never move up to the more complex (but truer) model.
    In my opinion, the map example demonstrates clearly that the only reasons for preferring simpler models are logistical. When maps were paper, we would use coarse-grained maps when we could, to save space in the car. But we would often supplement our map of Canada with finer-grained maps (of provinces) when we needed to. And those provincial maps were available because we had the information available to make those more complex models. As soon as the information was available on a small electronic box and we could move among scales with a tap of the screen, we wanted the finest-grained spatial maps we could get.
    My opinion is that we always want to be aiming for the ‘true’ model and if we get there we can condense when necessary.

    • I agree to a point – namely, I think if one can meaningfully use a “truer” model then we should. However even with maps there are advantages to having the full fine grained map, and the ability to “zoom out” and see cities, countries, and continents. These coarse grainings are just as valuable as the fully zoomed in maps, at least for some applications.

      I think there are two reasonably strong arguments for simple models. Firstly, while we may be aware of very complex models which we hope are in some sense more authentic or closer to reality, in practice we can fit simpler models better, as well as take away deeper understanding. Cellular automata, for instance, are great models that allow us to put arbitrary complexity into individuals and how they interact, so should in principle be the de facto standard for a lot of ecological settings (a minimal sketch of the kind of model I mean appears after this comment). In practice, they are difficult to use to answer simple questions about harvesting or temperature changes etc., not just for logistical or computational reasons. The causal chain of what influences what becomes much harder to disentangle, despite having already abstracted reality into a simulation. There are also more pieces to assemble in terms of assumptions and accounting for modelling error, so that often a simpler model can actually be more realistic than a more complex one with poorer assumptions.

      Secondly, simple models can often give important qualitative insight, which can lead to the development of more complete pictures of reality. F=ma is “wrong” for several reasons (point particle, classical, etc.), but it was still a crucial step in building more realistic models of reality. Such hierarchies of models can be, in some sense, good indicators of a theoretically well understood process. Simply because a phenomenon is multicausal does not mean it isn’t important to understand causal links, and these individual links can be used to more and more accurately describe reality.
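      To make the cellular-automaton point above concrete, here is a minimal sketch: a stochastic patch-occupancy lattice with two species plus empty cells, nearest-neighbour colonization, and random mortality. The rates, lattice size, and two-species framing are all illustrative assumptions, not a model of any particular system.

      ```python
      # Toy stochastic cellular automaton for two competing species on a lattice.
      import numpy as np

      rng = np.random.default_rng(0)
      EMPTY, SP_A, SP_B = 0, 1, 2
      colonize = {SP_A: 0.5, SP_B: 0.4}     # colonization probability per neighbour draw
      mortality = {SP_A: 0.10, SP_B: 0.05}  # per-step death probability
      NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

      grid = rng.choice([EMPTY, SP_A, SP_B], size=(40, 40), p=[0.8, 0.1, 0.1])

      def step(grid):
          new = grid.copy()
          rows, cols = grid.shape
          for i in range(rows):
              for j in range(cols):
                  cell = int(grid[i, j])
                  if cell != EMPTY:
                      if rng.random() < mortality[cell]:
                          new[i, j] = EMPTY                    # death frees the patch
                  else:
                      di, dj = NEIGHBOURS[rng.integers(4)]     # look at one random neighbour
                      nb = int(grid[(i + di) % rows, (j + dj) % cols])  # wrap-around edges
                      if nb != EMPTY and rng.random() < colonize[nb]:
                          new[i, j] = nb                       # neighbour colonizes the empty patch
          return new

      for _ in range(100):
          grid = step(grid)

      print("occupancy after 100 steps:",
            {"A": int((grid == SP_A).sum()), "B": int((grid == SP_B).sum())})
      ```

      Even here, getting from the simulation back to a simple answer about, say, a harvesting rule is the hard part – which is the point about causal chains above.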
