In a previous post I suggested that the apparent overwhelming complexity and non-generality of community ecology (‘every community is unique’) isn’t real. Instead, it’s a matter of the level of description chosen by the investigator. The forest is always there to see, if we choose to stop describing it at the level of individual trees.
But is it always possible to see the forest for the trees, if we choose to do so? That is, are there ecological phenomena whose complexity is irreducible? Phenomena for which no ‘general’ or ‘synthetic’ understanding is possible, no matter how we choose to describe them? I think there might be such phenomena, and in fact I have a candidate in mind. But before I describe it, I want to first clarify what I mean by ‘inherently complex’, using what I think is a clear-cut example from outside of ecology.
The example comes from chess, specifically the endgame (the phase of the game in which only a few pieces remain on the board). Chess endgames have been the subject of systematic study for centuries, and that study has produced a large body of work on endgame strategy. Much of this strategy takes the form of ‘general plans’ or ‘rules of thumb’. Given the pieces left on the board and their current positions, we can make some general suggestions about how the stronger side should attempt to deliver mate, and how the weaker side should go about trying to prevent this. For instance, if white has king+bishop+rook while black has king+bishop, one general plan for white might be to drive the black king towards a corner of the board of the opposite color to black’s remaining bishop (a bishop can only ever occupy squares of one color). This will restrict the movement of black’s king while limiting the black bishop’s ability to protect it, thereby allowing white to deliver checkmate. (I have no idea whether that’s actually a good plan, since I’m not very good at chess; it’s just a hypothetical illustration of what I mean by a ‘general plan’.)
But starting in the 1970s, people began using computers to analyze chess endgames. Specifically, they used computers to assemble databases (‘tablebases’) evaluating every possible position for a given set of remaining pieces, and so every possible sequence of moves. And they discovered something remarkable (the following is summarized from chess player Tim Krabbé). For certain endgames, everything we thought we knew is wrong. For instance, endgames such as ‘rook+bishop vs. 2 knights’, which had long been thought to be drawn with best play, turn out to be forced wins for the stronger side from more than 90% of possible starting positions. But that wasn’t the real shock. The real shock was how long and complicated a series of moves is required to force mate: more than 250 moves in many cases, i.e. longer than almost every game of professional chess that’s ever been played. Indeed, so long that it would be illegal: the rules of professional chess include a ’50 move rule’, which declares the game drawn if no capture is made or pawn moved for 50 moves. The 50 move rule was invented to prevent one player from pointlessly prolonging the game in the hope that his opponent drops from exhaustion. But it turns out that even sequences of 250+ moves can have a point.
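The core idea behind such databases is backward (‘retrograde’) induction: start from all checkmated positions and work backwards, labeling each position with its distance to mate. A minimal sketch on a toy game graph, where the positions and moves are invented placeholders rather than real chess:

```python
from collections import deque

def solve_endgame(moves, mated):
    """Toy retrograde analysis, the idea behind endgame tablebases.

    moves: dict mapping each position to the positions reachable in one
           move (whose turn it is alternates implicitly with each move).
    mated: set of terminal positions where the side to move is checkmated.

    Returns a dict of position -> distance to mate: 0 for mated positions,
    odd = the side to move can force a win, even = the side to move loses
    with best play. Positions never labeled are draws.
    """
    dtm = dict.fromkeys(mated, 0)
    preds = {}  # reverse edges: who can move *into* each position
    for p, succs in moves.items():
        for q in succs:
            preds.setdefault(q, []).append(p)
    queue = deque(mated)
    while queue:
        q = queue.popleft()
        for p in preds.get(q, []):
            if p in dtm:
                continue
            if dtm[q] % 2 == 0:
                # One move from p reaches a position lost for the opponent,
                # so p is a forced win for the side to move.
                dtm[p] = dtm[q] + 1
                queue.append(p)
            elif all(s in dtm and dtm[s] % 2 == 1 for s in moves[p]):
                # Every move from p leads to a position won by the
                # opponent, so p is lost; best defense delays longest.
                dtm[p] = 1 + max(dtm[s] for s in moves[p])
                queue.append(p)
    return dtm

# A three-position 'endgame': A leads to B leads to mate at C.
solve_endgame({'A': ['B'], 'B': ['C'], 'C': []}, mated={'C'})
# -> {'C': 0, 'B': 1, 'A': 2}
```

The point of the sketch is that the algorithm produces perfect play with no ‘plan’ anywhere in it: each label is just bookkeeping over the whole graph, which is exactly why the resulting move sequences need not be humanly comprehensible.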
But not a point humans can grasp. And that’s what interests me here. These moves follow no ‘plan’ or ‘strategy’ or ‘rules of thumb’. They’re literally incomprehensible until the last 20 moves or so, when suddenly, like seeing a ship emerging from the mist, a human can start to see how checkmate will be given. Tim Krabbe says it better than I can:
“They [these long forced mates] are beyond comprehension. A grandmaster wouldn’t be better at these endgames than someone who had learned chess yesterday. It’s a sort of chess that has nothing to do with chess, a chess that we could never have imagined without computers. The…moves are awesome, almost scary, because you know they are the truth, God’s Algorithm – it’s like being revealed the Meaning of Life, but you don’t understand one word.”
It’s as if there are a bunch of trees, but no forest–at least not one humans can see from any vantage point available to them.
Are there any ecological analogues of lengthy forced wins in chess endgames? Ecological phenomena that are similarly impossible for humans to understand or explain, because no ‘synthetic description’ is possible? I don’t know the answer, but rephrasing the question slightly makes it more tractable. What is it about certain chess endgames that makes them inherently complex, and do any ecological phenomena have those features?
The inherently complex chess endgames discussed above comprise lengthy sequences of events (moves) leading towards an endpoint (checkmate, or possibly a draw or loss if suboptimal moves are chosen by the stronger side). The sequences are lengthy for two reasons. One is that one side has only a slight advantage over the other, so it takes many moves to press that advantage home. The other is that the possible moves (and the optimal moves, which are a subset of the possible moves) are highly context-dependent–they vary from position to position, and thus with the previous moves. So the future ‘route’ to checkmate depends on ‘history’, the moves played up to that point.
This is very much like community assembly or secondary succession, at least under certain conditions. Under the right conditions, reaching some final state (an ‘uninvasible’ or ‘climax’ community) depends on a long sequence of ‘moves’ (invasions of new species and extinctions of residents). Different sequences of moves might lead to different outcomes (alternate stable states). The possible moves (which species can invade or are at risk of extinction) depend on the current ‘position’ (which species are present, at what abundances), which itself reflects the historical sequence of previous invasions and extinctions.
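The analogy can be made concrete by treating an assembly map the same way as a game graph: community states as nodes, single invasions or extinctions as ‘moves’. A minimal sketch, with entirely invented transition rules for three hypothetical species A, B, and C:

```python
def end_states(start, transitions):
    """All uninvasible communities reachable from `start` by some
    sequence of single invasions/extinctions (depth-first search)."""
    seen, ends, stack = set(), set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        succs = transitions.get(s, [])
        if not succs:
            ends.add(s)  # no possible 'moves' left: an end state
        stack.extend(succs)
    return ends

# Invented assembly map: which species can invade depends on who is
# already present, so the outcome depends on the order of arrival.
transitions = {
    frozenset(): [frozenset('A'), frozenset('B')],
    frozenset('A'): [frozenset('AC')],  # C can invade alongside A...
    frozenset('B'): [frozenset('BC')],  # ...or alongside B
    frozenset('AC'): [],                # uninvasible
    frozenset('BC'): [],                # uninvasible
}

end_states(frozenset(), transitions)
# -> {frozenset({'A', 'C'}), frozenset({'B', 'C'})}
```

Starting from the empty community there are two alternative stable states, and which one is reached depends entirely on whether A or B happened to invade first, i.e. on ‘history’.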
I freely admit this analogy isn’t perfect. But conversely, I don’t think it’s so imperfect that it can just be dismissed. For instance, the assembly process for one simple protist microcosm community has been ‘solved’ in much the same way chess endgames have been solved: exhaustive documentation of all possible sequences of ‘moves’ (Warren et al. 2003). The resulting ‘assembly map’ is very complex. Elsewhere the authors do develop some phenomenological ‘assembly rules’ that summarize why the map looks the way it does (Weatherby et al. 1998). But one can question whether such phenomenological rules actually explain the map, and in any case those ‘rules’ are (by design) specific to this one assembly map and so don’t comprise ‘general’ rules of community assembly.
I certainly think it’s possible to explain why some assembly maps are more complex than others, just as it’s possible to explain why some chess endgames are more complex than others (e.g., Steiner and Leibold 2004). Oikos has a strong history of publishing on this topic (e.g., Schreiber and Rittenhouse 2004, Fukami and Lee 2006, Didham and Norton 2006, Fox 2008). And I certainly think it’s possible to develop ‘assembly rules’ that explain why simple assembly maps look the way they do. Indeed, ecologists have already done this (think of Dave Tilman’s R* rule). But developing assembly rules that explain why complex, historically contingent assembly sequences end up where they do? I’m not sure it’s possible, even in principle.
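The R* rule illustrates just how compact a ‘general plan’ for simple assembly can be. For a single limiting resource it fits in one line; the species names and R* values below are made up for illustration:

```python
def predicted_winner(r_star):
    """R* rule for one limiting resource: the species able to persist at
    the lowest equilibrium resource concentration excludes the rest."""
    return min(r_star, key=r_star.get)

# Hypothetical R* values (resource units); only their ordering matters.
predicted_winner({'sp1': 0.40, 'sp2': 0.15, 'sp3': 0.70})
# -> 'sp2'
```

That’s the kind of synthetic rule that seems, to me, unlikely to exist for complex, historically contingent assembly maps.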