The importance of Grafen’s project is debated. Grafen himself views it as fundamentally important, of course. My impression is that few share his view. But my casual impression might be very wrong or outdated; it’s not an area I’ve looked at for many years.

There are applied math papers about ecologically inspired models that are written for mathematicians. Interest centers on the mathematics, not its real-world ecological implications. I have a colleague here at Calgary in the math dept who does that sort of work: for instance, studying the behavior of a delay-differential equation model of density-dependent population growth with an *infinite number of delays* (!). I don’t know anything about the math you need to analyze the behavior of that sort of model, but I can imagine that it might be more exotic than anything you might see in, say, Theoretical Ecology or TPB or Ecological Modelling or Am Nat.
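I can’t speak to the infinite-delay case, but even a single delay already complicates the dynamics. Here’s a rough sketch of the classic delayed logistic (Hutchinson’s equation), dN/dt = r N(t) [1 − N(t − τ)/K]; the parameter values and the crude Euler scheme are my own illustrative choices, not from any particular paper. The equilibrium N = K loses stability once rτ exceeds π/2, after which you get sustained cycles.

```python
# Sketch: Hutchinson's delayed logistic equation,
#   dN/dt = r * N(t) * (1 - N(t - tau) / K),
# integrated with a simple Euler scheme. Parameter values are
# illustrative assumptions, not taken from any particular paper.

def delayed_logistic(r, K, tau, n0=0.5, dt=0.01, t_max=200.0):
    """Return the trajectory N(t) as a list, using constant history N = n0 for t <= 0."""
    lag = int(round(tau / dt))        # number of steps spanned by the delay
    steps = int(round(t_max / dt))
    traj = [n0] * (lag + 1)           # constant history on [-tau, 0]
    for _ in range(steps):
        n_now = traj[-1]
        n_lag = traj[-1 - lag]        # N(t - tau)
        traj.append(n_now + dt * r * n_now * (1.0 - n_lag / K))
    return traj

def late_amplitude(traj):
    """Max minus min over the second half of the trajectory (crude cycle amplitude)."""
    tail = traj[len(traj) // 2:]
    return max(tail) - min(tail)

# Below the Hopf threshold (r * tau < pi/2) the equilibrium N = K is stable;
# above it, the delay generates sustained oscillations.
stable = delayed_logistic(r=1.0, K=1.0, tau=1.0)    # r*tau = 1.0 < pi/2
cycling = delayed_logistic(r=1.0, K=1.0, tau=2.0)   # r*tau = 2.0 > pi/2
```

Same equation, same parameters except the delay, and you go from a boring approach to equilibrium to a limit cycle. Now imagine analyzing that with infinitely many delays.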

Closer to home, it’s my outsider’s impression that the math required to solve integro-differential equations and partial differential equations is more difficult than that required to solve regular ol’ ordinary differential equations. So I’d guess that some of the more “exotic” math in theoretical ecology is in areas where such equations naturally arise. Some areas of spatial ecology, for instance: think of work from folks like Mark Kot and Mark Lewis and many others. And also some structured population modeling: think of work from Andre de Roos and many others.

Nonlinear stochastic processes might be another area that involves more advanced math than the typical theoretical ecology paper?

I’m just guessing, really, hopefully actual theoreticians will chime in!

Pretty sure it’s a random tangent. 🙂 Proofs in ecology hardly ever involve new mathematical tools.

https://galoisrepresentations.wordpress.com/2017/12/17/the-abc-conjecture-has-still-not-been-proved/

This sort of thing does happen from time to time. Admittedly, it’s very rare for new progress in pure mathematics to come from applying the standard techniques of the area. But I think most big advances come from small bits of novelty contributed by many people across the field (for instance, Perelman’s work on the Poincare conjecture built heavily on work by both Hamilton and Thurston decades earlier).

The example I’ve discussed in old posts, and in my TREE paper on the IDH, is the “flip flop competition” model of Chris Klausmeier. That’s a perfectly valid mathematical model of resource competition between two species, with the identity of the competitively dominant species switching back and forth as the environment switches back and forth between two different states. The identity of the dominant species switches because the species’ per-capita feeding rates depend on the state of the environment. In that model, it turns out that intermediate frequencies of switching promote stable coexistence of both species, by generating a storage effect.

This is of course the same prediction as Hutchinson’s famous verbal argument about the same scenario: intermediate frequencies of environmental change that switch the identity of the competitive dominant lead to coexistence. In Chris’ view, his mathematical model validates Hutchinson’s verbal argument. Not that he thinks Hutchinson verbally intuited the storage effect, of course. But he thinks Hutchinson’s instincts were basically right, he was just wrong about some of the details of the argument, details that the math allows us to fill in.

I disagree with Chris, because I don’t think those details are mere details, they’re the whole argument! The coexistence mechanism at work in Chris’ model has nothing to do with the one in Hutchinson’s verbal argument. Hutchinson thought that intermediate frequencies of environmental switching promoted coexistence by interrupting competitive exclusion “just in time”. Which is just wrong, that’s not at all what’s going on in Chris’ flip flop competition model (see here for an analogy: https://dynamicecology.wordpress.com/2012/04/10/zombie-ideas-about-disturbance-a-dialogue/).
Probably the best illustration that Chris’ model is totally different from Hutchinson’s is this: if you modify Chris’ model by making species’ per-capita mortality rates environment-dependent instead of their feeding rates, the identity of the competitive dominant still switches when the environment changes states, but you no longer get stable coexistence, because you no longer get a storage effect.
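To make the storage-effect point concrete, here’s a minimal sketch. To be clear, this is not Chris’ actual chemostat model; it’s a bare-bones discrete-time lottery model in the spirit of Chesson’s storage-effect work, with illustrative parameter values I made up. The environment alternates between a year favoring species 1 and a year favoring species 2, and we ask whether species 1 can invade from rarity.

```python
# Sketch: a two-species lottery model (in the spirit of Chesson's storage
# effect), NOT Klausmeier's actual chemostat model. The environment
# alternates between a year favoring species 1 and a year favoring
# species 2. x = fraction of sites held by species 1 (species 2 holds
# 1 - x). Each year a fraction `delta` of adults die, and their sites are
# won by recruits in proportion to per-capita birth rates b1 and b2.
# All parameter values are illustrative assumptions.

def simulate_lottery(delta, x0=1e-6, years=200):
    """Track the frequency of species 1, starting rare (an invasion test)."""
    x = x0
    for year in range(years):
        if year % 2 == 0:
            b1, b2 = 2.0, 0.5   # environment favors species 1
        else:
            b1, b2 = 0.5, 2.0   # environment favors species 2
        recruits1 = b1 * x / (b1 * x + b2 * (1.0 - x))
        x = (1.0 - delta) * x + delta * recruits1
    return x

# Overlapping generations (delta < 1): surviving adults "store" the gains
# from good years, and the rare species invades -> stable coexistence.
with_storage = simulate_lottery(delta=0.5)

# Complete adult turnover (delta = 1): no buffering, so no storage effect,
# and the rare species gains nothing from the environmental switching.
no_storage = simulate_lottery(delta=1.0)
```

With delta = 0.5, the rare species climbs from a frequency of one in a million to a substantial fraction of the community. With delta = 1, the two-year environmental cycle leaves the invader’s frequency essentially unchanged: switching the identity of the favored species, by itself, buys you nothing.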

In summary, if you make a completely incorrect verbal argument that, by sheer luck, happens to nevertheless arrive at the correct conclusion, I don’t think a correct mathematical argument that arrives at the same conclusion “validates” or “confirms” your verbal intuitions.

An analogy: imagine I set out from point A with the goal of arriving at point B, and I travel by turning in a randomly chosen direction at each intersection. By sheer chance, I arrive at point B. You also set out from point A with the goal of arriving at point B, but you follow accurate directions provided by Google Maps. The fact that Google Maps got you from point A to point B doesn’t somehow confirm that I basically knew the way from point A to point B, or validate my method of getting from point A to point B.

I don’t imagine that this analogy will convince many people. 🙂 In general, I find that ecologists are firm believers in their own intuitions. Which is understandable. After all, when you (or me!) try to reason your way through a problem verbally, it doesn’t *feel* to you like you’re making mistakes. It doesn’t feel like you’re doing the equivalent of trying to drive from A to B by making randomly-chosen turns. Your reasoning process feels logical and plausible to you. But that’s precisely the problem: your reasoning process will *always* feel logical and plausible to you, even when you have made a mistake.

Now I’m wondering if it would be interesting to do a compilation of “incorrect scientific arguments leading to correct scientific conclusions”. Might be an interesting comparative exercise.

Ha, I was just thinking of the same example! 🙂

Fortunately, I do think there are a lot of conclusions in theoretical ecology and evolution that are pretty robust to changes in modeling assumptions. For instance, metapopulation persistence time peaks at intermediate dispersal rates. That qualitative claim is true in at least a dozen different models (Yaari et al. 2012 Ecology).

Of course, those simple metapopulation models don’t differ in *every* respect. For instance, they all have density-independent dispersal. When making a robustness argument, it’s always good to be clear about which features of the model are being varied, and which are being held constant. William Wimsatt has an old paper critiquing the robustness of the 1970s theoretical claim that group selection was implausible. Wimsatt argued that the various group selection models of the time differed in many respects yet shared a key assumption, and it was that shared assumption that led to the conclusion that group selection wasn’t a plausible evolutionary mechanism. The risk with robustness arguments is that you get lulled into a false sense of security if all your various models share some key assumption and you don’t recognize that fact.

“I have a colleague in fisheries who claims that a single constant widely used for fecundity in fisheries models for 50 years, when traced back through the literature, was just a wild guess pulled out of thin air. But once it was cited 100s of times it had a gravitas that nobody questioned.”

Wait, what? I’m gonna have to ask my fisheries ecology colleague about that one.
