Quantifying the life histories of ecological ideas (UPDATED)

The collective interests of ecologists change over time. Most obviously, the field is much more focused on global change and statistical methodology than it once was. The temporal dynamics of those changes sometimes have a self-reinforcing character, as ecologists get interested in some new line of research in part because other ecologists are interested in it (a bandwagon). On the other hand, ecologists’ collective interests sometimes are quite resistant to change, as illustrated by zombie ideas: ideas that continue to be widely taught and cited even though they were refuted long ago. But can we go beyond anecdotal examples and tell a data-based story about how ecologists’ collective interests change over time? How quickly does collective interest in an ecological idea typically build? How quickly does it typically decay? And can we quantify the atypical cases–Buddy Holly ideas, say, or Sleeping Beauties?

We might also wonder if the way in which ecologists’ collective interests change over time has itself changed, as more and more ecologists publish more and more papers. A recent unreviewed preprint argues that high publication volume leads to scholarly inertia. An ongoing flood of new papers might make it harder for any single new idea to emerge from the background noise and garner a significant fraction of ecologists’ collective attention. Instead, the competition for attention among a flood of new papers might cause the field to fragment.

To get a bit of data on the life histories of ecological ideas, I turned to Web of Science and looked up the annual citation counts for a bunch of highly cited ecology papers, from old (Hairston, Smith, and Slobodkin 1960, “the world is green”) to new (Estes et al. 2011, “trophic downgrading of planet Earth”). For instance, here are the annual citation counts for Connell 1978 (“intermediate disturbance hypothesis”, IDH) and Estes et al. 2011:

[Figure: annual citation counts for Connell 1978 and Estes et al. 2011]

It’s tempting to interpret that figure as showing that interest in “trophic downgrading” exploded rapidly, whereas interest in the IDH built gradually for decades, except during the late ’80s when interest temporarily waned. But that interpretation would be wrong, because you have to allow for the fact that many more papers are published these days, and all those papers cite other papers. So if you’re measuring “collective attention” as “annual citation counts”, well, there’s a lot more collective attention to go around these days. We need to correct for that, by scaling citation counts relative to the total number of ecology papers published each year.

It’s hard to precisely define the universe of “ecology papers”. But here’s a crude first pass: the number of WoS-indexed papers published each year on the topic “ecol*”:

[Figure: number of WoS-indexed papers published each year on the topic “ecol*”]

There’s a weird step increase in 1991, which I presume has something to do with the completeness of WoS records. (UPDATE: I just learned the weird step increase is because WoS abstract records start in 1990. When you search WoS on a “topic”, what you’re really doing is searching paper titles and abstracts for the search term. Many ecology papers have “ecol*” in the abstract but not the title, hence the big step increase in 1991. Now that I know this, if I were going to redo the entire post I’d probably either chuck the pre-1991 data or figure out a totally different way to estimate the number of ecology papers.) Because there’s no way the annual number of ecology papers more than doubled in 1991! Which is a problem, because if you scale the annual citation counts of individual papers by the annual number of newly-published ecology papers, you end up concluding that ecologists’ collective interest in every paper published before 1991 suddenly cratered in 1991. An exponential smooth fits the data reasonably well, so let’s use the smoothed annual paper count as an index of the number of ecology papers published each year. This still isn’t perfect, for all sorts of obvious reasons (e.g., some ecology papers get cited outside ecology), but it should be close enough for a blog post. In particular, as long as growth of the total number of ecology papers published each year is roughly exponential, most of my results should hold, even if we’re badly over- or underestimating the precise growth rate, and/or over- or underestimating the number of ecology papers published in the first year of the dataset (1960). (But I could be wrong–please tell me in the comments if you think I am!)
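For concreteness, here’s one way that scaling could be implemented (a minimal sketch with made-up function names, not my actual workflow; it fits the exponential smooth by least squares on the log of the annual paper counts):

```python
import numpy as np

def fit_exponential(years, counts):
    """Fit counts ~ n0 * exp(r * (year - years[0])) by least squares on log(counts)."""
    t = np.asarray(years, dtype=float) - years[0]
    r, log_n0 = np.polyfit(t, np.log(np.asarray(counts, dtype=float)), 1)
    return np.exp(log_n0), r  # (papers in the first year, annual growth rate)

def scaled_citations(years, cites, n0, r, year0):
    """Divide a paper's annual citation counts by the smoothed annual paper count."""
    t = np.asarray(years, dtype=float) - year0
    return np.asarray(cites, dtype=float) / (n0 * np.exp(r * t))
```

Note that under this scaling, a paper whose raw citation counts merely hold steady over the years will show exponentially declining scaled citations.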

So, here is a graph of the scaled annual citation counts of a bunch of highly-cited ecology papers (see the end of the post for a list of the papers). Each colored line is the time series of scaled citation counts for a different paper:

[Figure: scaled annual citation counts of highly-cited ecology papers, one colored line per paper]

The thing that should jump out at you is that back in the ’70s and ’80s, giants strode the earth, citation-wise. Relative to how many papers were being published at the time, the most-cited papers of the ’70s and ’80s were cited much more often than are the most-cited papers published recently. That is, back in the ’70s and ’80s, one paper could attract a much larger fraction of ecologists’ collective attention (as measured by citations) than it is possible to attract today. Heck, these days, the peak level of collective attention given to recently-published, high-profile papers is no higher than the residual, much-decayed attention going to classic papers published many decades ago! Which is exactly what you’d expect if the increasing volume of publications, and/or increasing specialization of ecologists, is making it harder for any one paper to break through and claim a substantial fraction of ecologists’ collective attention. I wouldn’t put much stock in the exact numbers here; I doubt 6% of all ecology papers published in 1977 cited Pianka 1970 (the highest peak on the above graph). But I’m fairly confident the general pattern is robust to the limitations and imperfections of the data, and to my debatable analytical choices.

Question: is it also the case that one idea (as opposed to one paper) could attract more of ecologists’ collective attention back in the 70s and 80s than is possible today? That’s debatable–you’d somehow have to define the boundaries of different ideas–but I suspect the answer is yes. What do you think?

It’s hard to see much else in that figure, so here are the same data plotted differently. I’ve plotted annual scaled citation counts as a function of number of years post-publication. And I’ve arbitrarily split the papers into two groups: those published before 1985 (top panel), and those published after that (bottom panel). Note that the x-axis scales vary between panels but the y-axis scales are the same.

[Figure: scaled citations vs. years post-publication, papers published before 1985]

[Figure: scaled citations vs. years post-publication, papers published 1985 or later]

Now you can see a few more interesting features of the data:

  • Ecologists’ collective interest in highly-cited papers typically peaks 5-10 years post-publication. I think that’s actually a pretty good estimate of the typical timescale of ecological bandwagons or research programs more generally–collective interest peaks 5-10 years after the bandwagon/program gets rolling. There are exceptions, discussed further below.
  • Collective interest in a paper builds much faster than it decays. Presumably that’s in part because there’s often a self-reinforcing element to growth of collective interest in an idea, but not to loss of collective interest in an idea. But I’m sure there are other reasons too.
  • The higher the peak of collective attention, the faster post-peak attention decays. I don’t know why–got any ideas? But because the decay starts from a higher peak, papers that garner higher peak levels of collective attention retain more of the field’s collective attention even several decades later.
  • The rate of decay in post-peak attention slows over time. With the consequence that ecology papers that grab an appreciable fraction of the field’s collective attention almost never die. I have yet to find an ecology paper published since 1960 that once garnered really substantial attention, but now garners no attention or only negligible attention. Ok, I’m sure there must be a few. But they’re rare. (So to IDH proponents worried that my criticisms of the IDH will cause the whole idea to be abandoned (“throwing the baby out with the bathwater”), you can stop worrying. 🙂 Because what you’re worrying about–a once-prominent ecological idea being completely abandoned–basically never happens.) I leave it to you to decide if this is a symptom of the cumulative progress of ecology, or a sign of ecologists’ collective lack of what Brian’s called a “problem solving mentality”. Do we identify good ideas and then hold onto them forever, as a cumulatively-progressive field should? Or just identify ideas and then hold onto them forever?
  • It’s rare but not unheard of for papers to experience a post-peak revival of interest. Examples below.
  • Peak heights are continuing to drop. The two highest-peaked life histories in the bottom panel belong to the two oldest papers in that panel: Pulliam 1988 (source-sink dynamics) and Levin 1992 (pattern and scale).

Next, I’ll plot a representative time series along with a few unusual ones, to give you a sense of the range of possible “life histories”. These examples also will hopefully convince you that my summaries of the life histories of individual papers are, despite their crudity, capturing something real about ecologists’ collective attention to important ideas.

Here are two typical life histories, a high-peaked one (May 1976, chaos, upper blue line), and a lower-peaked one (Rosenzweig and MacArthur 1963, paradox of enrichment, lower orange line):

[Figure: scaled citations for May 1976 (upper blue line) and Rosenzweig and MacArthur 1963 (lower orange line)]

Both exhibit a clear peak of collective interest 7-8 years post-publication, followed by an exponential-ish decay. That exponential-ish decay arises because raw citations to those papers are growing only slowly and roughly linearly over time, and so are not keeping up with the exponential growth of the ecological literature as a whole.
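Here’s the back-of-the-envelope arithmetic behind that claim (the symbols are my own shorthand, not from the original analysis): if a paper’s raw annual citations c(t) grow roughly linearly while the annual number of ecology papers N(t) grows exponentially, then

```latex
c(t) \approx a + b\,t, \qquad N(t) \approx N_0\, e^{r t}
\quad\Longrightarrow\quad
s(t) \;=\; \frac{c(t)}{N(t)} \;\approx\; \frac{a + b\,t}{N_0}\, e^{-r t},
```

and for large t the exponential factor dominates the linear one, so scaled citations fall off roughly exponentially even when raw citations are steady or slowly climbing.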

Next I’ll show you some atypical life histories.

Hairston, Smith & Slobodkin 1960 experienced a broad (and noisy) peak of collective interest for the first 15 years after it was published. And collective interest in it remained pretty high through the mid-90s (the time of the “top down vs. bottom up” wars), before it began dying away.

[Figure: scaled citations for Hairston, Smith & Slobodkin 1960]

Peak interest in Paine 1966 (keystone predation) lasted almost a decade, from the mid-70s through the early 80s, and there was a brief revival of interest in the late 90s:

[Figure: scaled citations for Paine 1966]

MacArthur and Levins 1967 (limiting similarity) experienced a slowly-building, noisy, extended peak of interest up until 1980. It’s also unusual in having experienced an extended revival of interest, starting in the mid-oughts. I presume that’s because the phylogenetic community ecology bandwagon and the trait-based community ecology bandwagon, both of which kicked off in the mid-oughts, were based in large part on the assumption of limiting similarity. Unfortunately. Come back, Peter Abrams, your work is not done!

[Figure: scaled citations for MacArthur and Levins 1967]

May 1972 (stability-complexity) has experienced a bit of a revival in collective interest lately, I presume because many ecologists are into diversity-stability relationships these days and cite May 1972 when they review the history of diversity-stability research in the introductions of their papers:

[Figure: scaled citations for May 1972]

But if you want to see a really strong revival in collective interest–two of them, in fact!–you need to look at the life history of Hill 1973 (“Hill numbers” and diversity indices):

[Figure: scaled citations for Hill 1973]

Clearly, ecologists’ collective interest in the measurement of biodiversity is cyclic, with a 20-year period. 😉 I predict that collective interest in Hill 1973 will start to tail off this year, but will revive in 2039. 😉

I found one other idea that experienced a decline in collective interest followed by a subsequent revival: the IDH. Here are the life histories of Connell 1978 (blue upper line) and Huston 1979 (orange lower line):

[Figure: scaled citations for Connell 1978 (upper blue line) and Huston 1979 (lower orange line)]

Without implying any personal criticism of anyone, I confess I find it rather depressing that two of the most seriously-criticized ideas in ecology, theoretically and empirically–the IDH and limiting similarity–are two of the most resilient.

Finally, a paper in which collective interest built to a peak over an unusually long period of time: Chesson 2000 (modern coexistence theory):

[Figure: scaled citations for Chesson 2000]

And of course, the modern coexistence theory that Chesson 2000 did so much to popularize actually predates that paper by several years. So interest in the idea, as opposed to the specific paper, took over 20 years to build to a peak. Speaking as a fan of modern coexistence theory, all I can say is, better late than never! 🙂 Greedily, I wish that collective interest in modern coexistence theory would continue to grow, but it looks like it peaked a couple of years ago.

A final thought: the big things these data miss are how and where these papers are getting cited. For instance, in an old post I argued that the IDH is no longer a zombie idea but a “ghost”, based on how and where it’s being cited. These days, Connell 1978 often is cited only in passing, by papers in obscure venues that aren’t even about the IDH.

So, what do you think? Does anything here surprise you? Cheer you? Dismay you? Looking forward to your comments, as always.

Papers analyzed:

HSS 1960, Hutchinson 1961, Rosenzweig and MacArthur 1963, Paine 1966, MacArthur & Levins 1967, Pianka 1970, Hurlbert 1971, May 1972, Hill 1973, May 1976, Holt 1977, Grime 1977, Tilman 1977, Connell & Slatyer 1977, Connell 1978, Huston 1979, Hurlbert 1984, Pulliam 1988, Levin 1992, Tilman et al. 1996, West et al. 1997, Hanski 1998, Hector et al. 1999, Chesson 2000, Loreau and Hector 2001, Brown et al. 2004, Hooper et al. 2005, Halpern et al. 2008, Bolker et al. 2009, Estes et al. 2011

19 thoughts on “Quantifying the life histories of ecological ideas (UPDATED)”

  1. Wow, lots to digest here….. But regarding the step change in 1991, I’m not convinced it is due to completeness of WoS records – I saw the same thing, but bigger, when I looked at the pollination literature:

    https://jeffollerton.wordpress.com/2013/06/17/the-cliff/

    It could be due to more journals being published from 1991 onwards. However, since then I’ve done the same analysis but restricted it to just 8 ecological journals, and I see EXACTLY the same step change in 1991. I’ve not put that on the blog yet as I’m using it in my book, but I can send you a copy if you’re interested.

    • Huh, interesting. It should be easy enough to check if some journals either started publishing in 1991 or greatly increased the number of issues or number of articles per issue in 1991.

  2. Would there be an interest in actually extending this and writing it up as a review or opinion paper? Possibly including poll data? (Caveat: I have not yet published a paper containing any empirical data, so my knowledge of how easy/difficult these things are to do is pretty low). I think applying tools (such as life history analysis) to the field itself is actually quite a valuable thing, and a number of the trends you highlight seem interesting and important (and others which seem to be expected).

    • Thanks for the suggestion, but realistically I won’t have time to do that any time soon, sorry. But drop me a line if you want the data I already compiled and want to play around with it with a view towards writing something up.

  3. Via email, a correspondent asks if perhaps some of these papers are no longer much cited because they’ve become “common knowledge” within the field. To which, good question, but I think the answer’s no. When I think of ideas or techniques that are now considered to be well-known among all ecologists, so no citation is needed, I think of things like ANOVA or the definition of evolution by natural selection. Not, say, r/K selection or the IDH or whatever.

    • I suspect that Lack’s theory of clutch size is one of these. It is based on papers published 70 years ago, and they are not well cited today because the idea is well accepted.

  4. Nice analysis, and thanks for pointing out the value of plotting relative vs. absolute citations.

    I’m fascinated by how ecologists actually cite the classic papers. Based on an analysis of Paine 1966 (AmNat) that Tom Suchanek and I published a couple of years ago, statements associated with this citation classic were most often trivial fodder for an Introduction, often made without (apparently) reading the paper, and surprisingly often in support of what one could argue is the opposite of what happens in the intertidal.

  5. Really interesting food for thought, Jeremy. A couple thoughts:

    (1) There seems to be a general preference “out there” for recent vs. older references, perhaps to give the appearance of working on something “new”. So one factor (among others) explaining declines might be people switching to citing more recent reviews/syntheses, even if the old one would do the job to support statements like “Topic/phenomenon X exists and is interesting”. That would mean it’s not a sign of waning interest in a topic.

    (2) Has ecological research diversified, not just via specialization (focusing on a narrower portion of what’s under the umbrella of ecology) but also via expansion (broadening the scope of what counts as ecol*)? If so, the proportion of ecol* papers that even _could_ reasonably cite a given other ecol* paper (i.e., there is some relevant connection) might be declining as well.

    • Yes, both of those are definitely possible.

      Somewhat related to your #2: I worry that I have no idea how WoS topic fields are assigned to papers. I know from my old attempts to identify the most-cited ecology papers that the WoS “ecol*” topic omits some highly-cited papers that are definitely ecology papers even for a narrow definition of the field of ecology. Weirdness in how WoS classifies “ecol*” papers is the potential artifact that worries me most here.

      • As far as I can tell WoS does not do any classification of its own, topic searches just pick up words in the abstract, title, etc. So if an old ecology paper does not mention “ecol*” in its title or key words, and there is no abstract, it won’t be returned in a topic search.

  6. I’ve read this a few times now and I keep coming back to the phrase “ecologists’ collective interests”. The way the analysis is framed suggests that “back in the day” ecologists tended to be a homogeneous group who read and cited much the same literature, and that that’s changed over time as ecologists have become more specialised/ecology has become more fragmented (take your choice). But was that ever really the case? Were all (or even the majority of) ecologists in the 1960s & 70s who were reading about, for example, predator-prey relationships also reading about species coexistence? By scaling the citations by the total number of papers published per year, the analysis assumes that they were.

    I think it’s fair to argue that ecology has never been that homogeneous, so a more appropriate scaling (though still a bit crude) would be to use the number of papers published per year within an appropriate sub-field of ecology. So, for instance, scale MacArthur & Levins 1967 by WoS papers on “community ecol*”.

    The other problem with just scaling by “ecol*” is that use of the term has diversified into other fields in recent years, particularly fields like molecular biology, engineering, business, computing, cultural studies, etc. You can get a good sense of this by using the Treemap visualisation when you analyse the results in WoS. The scale of this is hard to judge, but looking at the Treemap, as many as 20% of recent papers may have nothing to do with “ecology” as we understand it.

    • ” By scaling the citations by the total number of papers published per year the analysis assumes that they were. ”

      You raise a good point that is tricky to address. Subfields aren’t any more well-defined than the field of ecology as a whole is.

      Your point also can be read as shifting the question from ideas to subfields. Instead of comparing life histories of individual papers, we could compare the life histories of subfields.

  7. Interesting, but I am a little skeptical about the interpretations offered. In the history of ideas this type of analysis would be done using graphs of the citation network. The citation numbers here are an outcome of social processes (invention, communication and access to ideas). So, the representation/analysis needs to account for how social groups form and interact. For example, the plots in the post don’t indicate anything about the individuals and groups of individuals who are citing these papers. The peak(s) corresponds to what group of authors? Are they all from the same campus, or “school” (that is coming out of the same lab) or are they really spread across the entire field of ecology (including aquatic, marine and terrestrial …) and outside of North America ?

    It’s a fascinating question and for anyone who is interested, there are many free tools available to dig into the history of ideas through citation networks – http://www.citnetexplorer.nl/ and http://www.vosviewer.com/ come to mind, and Cytoscape can also be used to do something similar.

    • “The peak(s) corresponds to what group of authors? Are they all from the same campus, or “school” (that is coming out of the same lab) or are they really spread across the entire field of ecology (including aquatic, marine and terrestrial …) and outside of North America ?”

      For the papers analyzed in the post, the peaks are far too high to just reflect citations from one lab, one campus, one “school of thought”, etc. But yes, as Jeff Ollerton notes in another recent comment on this thread, “ecology” is not a homogeneous field, and so it’s only papers from certain fuzzily-defined subfields that might potentially cite HSS 1960 (or whatever). That’s actually why I’m reasonably confident that citations of some of these papers track collective interest in the ideas and even subfields associated with those papers.

  8. We, us life history theorists, can have some fun here, and I propose the ‘Fox law’ for elapsed time till maximum relative impact of a publication in ecology. I ignore all the questions/objections raised above and simplify the problem. As a theorist should.

    Suppose the actual citation count for a paper/book increases with time since publication (T) raised to the power B, so citations are proportional to T^B.
    If the total number of publications in ecology grows exponentially with exponent C [about 0.1 in the above data], then we can solve for the elapsed time T* at which the ratio citation-count/total-paper-count is at a maximum:

    It’s easy to show that T* = B/C, so the time (T*) till the maximum is just the ratio of the two exponents.

    (to actually do the optimization is a homework problem.)
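    Spelling out that optimization (a quick sketch under the assumptions above, with proportionality constants dropped):

    ```latex
    s(T) \;\propto\; \frac{T^{B}}{e^{C T}} \;=\; T^{B} e^{-C T},
    \qquad
    \frac{ds}{dT} \;\propto\; \left(B\,T^{B-1} - C\,T^{B}\right) e^{-C T}
    \;=\; T^{B-1}\left(B - C\,T\right) e^{-C T},
    ```

    which is zero at T* = B/C, and that critical point is a maximum because the derivative is positive for T < B/C and negative for T > B/C.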

    Ric

    • I was waiting for someone to show that all these “life history” curves collapse to a single curve under appropriate rescaling. Since I was too lazy to do it myself. 🙂 You’re part of the way there, keep going! 🙂

  9. Fascinating collection of histories. I wonder if a couple of things are in operation, though. First, there is a replacement process. Paine didn’t stop with his 1966 paper, and at some point later papers can cast a “shadow” that reduces citation of earlier ones. Books can do that even more strongly. My 2001 book on matrix population models, for example, provides a convenient way to cite things that originated in papers going back to the 1970s. Connected with this is some confusion (among all of us?) about the purposes of citation. I think of those purposes as including (1) acknowledging the origin of the theory/method/result, (2) providing helpful information on how to do something, and (3) providing examples of the recent use of an idea. Ecologists are not very concerned with acknowledging priority, and often use citations of type (3) instead. That helps to drive the decline in citations of papers once they become classics.

    • Yes, there could be something to the later-papers-and-books-cast-a-shadow hypothesis. Nobody cites Hubbell’s 1997 Coral Reefs paper, not just because Coral Reefs is a specialized journal, but because everybody cites Hubbell’s 2001 book instead. Not sure how to check this though. Because I think in most cases, the “shadow” is a cumulative thing cast by many subsequent papers. I wouldn’t expect to see, say, a big drop in citations of Pianka 1970 thanks to Steve Stearns’ life history theory book, or a big drop in citations of HSS 1960 thanks to Oksanen et al. 1981. Maybe the way to get at this would be to compare citations of (say) Paine 1966 to counts of papers with “keystone predation” in the title or abstract.

      Good point that ecologists mostly don’t care about acknowledging priority, and mostly just cite recent uses of an idea. That’s spot on. I wonder if these “life history” plots would look different in a field with different citation practices? Maybe paleontology, where they care very much about acknowledging the origin of an idea/method.

