Friday links: how ecological research has (and hasn’t) changed in the last 30 years, and more

From Jeremy:

Writing in PLoS ONE, Carmel et al. report on trends in the subject matter and methods of ecological research over the last 30 years. They sampled 25 articles/year from 1981-2010 from the ecological literature as a whole (136 journals), and another 25/year from eight leading journals published throughout that time period. Among the headline numbers: about 2/3 of all studies are of single species, and that fraction hasn't changed much in 30 years. Studies of "climate change" and "biodiversity" have increased in frequency. In the literature as a whole, studies of genetics have increased in frequency while studies of physiology and behavior have declined. The most common method in both the literature as a whole and in leading journals is observational (!), with experiments a distant second, though the gap is smaller in leading journals. That surprises me. But this doesn't: only 12% of studies in the literature as a whole and in leading journals are modeling studies, and that fraction has hardly changed since 1981. Meta-analyses are increasing in frequency in the literature as a whole, though not in leading journals, but still comprise only a small minority of all studies. And the frequency of studies focused on solving applied problems has increased substantially over time. I think it's great to have data like this; it's a reality check for our own biases and faulty memories. For instance, Lindenmayer and Likens' recent complaints about ecology becoming dominated by math and meta-analysis are mostly baseless. (HT Don Schoolmaster, in the comments)

Philosopher Nancy Cartwright has an accessible piece in medical journal The Lancet on the virtues and limitations of randomized controlled trials as means of identifying the most effective medical treatments. Everything she has to say applies just as much to randomized controlled experiments in ecology. She begins with a very clear, lucid explanation for why randomized controlled experiments remain the gold standard for demonstrating causality, as compared to, e.g., methods like “instrumental variables” and structural equation modeling. She then goes on to discuss the challenge of generalizing from randomized controlled experiments. For instance, in ecology we might wonder if the experiment would come out differently if conducted at some other place or time or with some other species. The challenge is to gain knowledge of “causal capacities”: under what range of circumstances does treatment X cause effect Y? One way to address this issue is via brute inductive force: repeat the experiment under many different conditions. NutNet is an ecological example. But there may be other ways. Anyway, probably nothing super-new here, at least not to many of you, but it is an especially clear discussion. And it’s perhaps heartening (or depressing?) to know that we ecologists aren’t the only ones who struggle with the challenges Cartwright discusses. And just for fun, here’s Cartwright’s piece in cartoon form.

Terry McGlynn on how avoiding mathematical modeling helped him make an important discovery about ant life histories. I suspect I may surprise a few readers when I say I like Terry’s post very much. On this blog, I often argue for the value of mathematical modeling and try to clarify why it’s valuable. But I would never argue that all good science starts with modeling! Indeed, as I noted in the comments on Terry’s post, I and my labmates have done theory-free pattern-discovery of the sort Terry describes.

I thought about doing a whole post on this one, but chickened out and decided to bury it in the Friday linkfest to minimize the controversy: Economist John Whitehead notes an interview with the EiC of American Economic Review (AER; the leading journal in all of economics) in which the EiC admits that "prestige of the author" is a factor in deciding which papers to accept at AER. Which is something that non-prestigious people probably suspected, but it's unusual to see it openly admitted. As to the relevance of this to ecology, I leave it to you to discuss as you see fit! Because I don't dare (even though I have some relevant firsthand information…) (HT Economist's View)

Of course there’s a wiki devoted to explaining every xkcd comic. (HT Brad DeLong)

20 thoughts on "Friday links: how ecological research has (and hasn't) changed in the last 30 years, and more"

  1. The distinction between “observation” and “experiment” used in the Carmel et al. paper is very crude – to quote:

    “An article was classified as ‘experiment’ if an actual experiment was conducted in the laboratory, or if a field study included some sort of treatment or manipulation of the natural environment.”

    This would relegate all natural experiments, often done at a very large scale where formal experimental manipulation is not possible, to the "observational" category. For example, in a series of recent papers my research group compared a range of biodiversity measures on restored landfill sites paired with the nearest sites of nature conservation interest. I would see these as (semi-)natural experiments in that neither the control nor restored sites were specifically designed or manipulated for ecological study. But Carmel et al. would presumably categorise them as observational.

    I would have liked to see a more nuanced assessment of categories of study.

    Jeff

    PS – If anyone's interested, the outcome of the studies I mentioned was that, by and large, the restored landfill sites were at least as good as, and often better than, the wildlife sites! See:

    Rahman, L. Md., Tarrant, S., McCollin, D. & Ollerton, J. (2013) Plant communities and attributes of newly created grassland on restored landfill sites: a novel ecosystem with conservation potential? Journal for Nature Conservation, in press

    Tarrant, S., Ollerton, J., Rahman, L. Md., Griffin, J. & McCollin, D. (2012) Grassland restoration on landfill sites in the East Midlands, UK: an evaluation of floral resources and pollinating insects. Restoration Ecology, in press

    Rahman, L. Md., Tarrant, S., McCollin, D. & Ollerton, J. (2012) Influence of habitat quality, landscape structure and food resources on breeding skylark (Alauda arvensis) territory distribution on restored landfill sites. Landscape and Urban Planning 105: 281–287

    Rahman, L. Md., Tarrant, S., McCollin, D. & Ollerton, J. (2011) The conservation value of restored landfill sites in the East Midlands, UK for supporting bird communities. Biodiversity and Conservation 20: 1879–1893

    • Their definition of "experiment" is the usual one, and the right one. Comparative studies, including "natural experiments", are importantly different from manipulative experiments. That's true even if circumstances prevent manipulative experiments from being conducted. I suppose they could've subdivided their "observational" category more finely, but personally I don't feel that would've added a huge amount to the ms. I think you want a pretty coarse classification system for a project like this, so that the papers you're reading can be classified quickly and consistently.

      • But "usual and right" according to whom? Is there really a consensus within the community on what constitutes an ecological experiment? A quick Google search comes up with a range of definitions of "experiment", including this from Merriam-Webster:

        “a scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact”

        Now define “scientific procedure”…. 🙂

        Whilst I agree you need a coarse classification for this type of study, I still think this is too coarse.

      • If the experimenter is randomly assigning treatments to experimental units, it's a manipulative experiment. If not, not. The Cartwright piece I linked to has a good summary of why this is a key distinction. And yes, there's a consensus that this is what "experiment" means in this context.

        Not sure why you’re bringing up “scientific procedure”. That’s your term, not mine. Further, it is a phrase that nobody would claim has an agreed, precise definition. Perhaps that’s where the misunderstanding is coming in. My comments concern a narrow issue that I think is cut-and-dried: what’s a manipulative experiment, and what’s not. Your comments seem to concern a much broader and fuzzier issue: what’s a “scientific procedure” and what’s not. So I think we’re talking past one another here.

        Let me try to clear things up. In drawing a distinction between manipulative experiments and other sorts of studies, neither I nor (I'm sure) Carmel et al. intend to denigrate other sorts of scientific studies in any way. Their classification is intended merely to draw a distinction between two meaningfully different scientific techniques (manipulative experiments and observational studies), without making any claims about whether one technique is somehow "better" or "more scientific" than the other.

        As for whether the classification in Carmel et al. is too coarse, we'll have to agree to disagree. They went to a lot of trouble to collect a lot of information, and the information they collected is sufficient to address the (numerous) questions they asked. I appreciate that you personally would've liked to see this additional information. I too would've liked to see additional information, but different additional information than you, because you and I happen to care about different things. But I'm not going to criticize them for not collecting information that I personally happen to care about. They're not me, and I recognize that my own interests are, like everyone's, somewhat idiosyncratic. In collecting the information they did, Carmel et al. made a perfectly reasonable stab at addressing broad questions of interest to many people. I don't think it's fair to expect anything more of them.

      • I agree with Jeremy. And I've spent most of my career using "natural experiments" in threespine stickleback and guppies. Language matters, and I think by using the phrase "natural experiment" we comfort our brains into thinking that we've controlled confounding factors more than we really have. But the key difference is probably the random assignment of treatment levels, not whether the different levels were created by nature or by investigator control. So in this sense, natural experiments really aren't experiments at all.

  2. Thanks for the link (yet again).

    The fetish over the label "experiment" gets annoying. If you've developed a question or hypothesis, and you design something to test it, that's a kind of experiment. You have three flavors of experiment: observational, experimental, modeling. Lee Dyer (Univ. of Nevada, Reno) advocates this and I like it.

    You think observational and experimental are contradictory terms? Welcome to the majority I guess. And you can be the reviewer that forces me to change my paper, or recommends rejection on the basis of nomenclature. Again.

    • No, Terry, I’m not going to reject your paper because of nomenclature. And if I were to force you to change your nomenclature, it would only be because non-standard nomenclature, even if you define your usage explicitly, can make writing unnecessarily difficult to understand. And even in that case, if you gave me a good argument why you wanted to use non-standard terminology and could make the case that it wouldn’t cause confusion, then as an editor I’d probably let you do so.

      But I would reject your paper if you were to write about comparative data or “natural” experiments as if they were manipulative experiments with randomized assignment of treatments to experimental units. Which I’m sure you wouldn’t do, and I’m sure Jeff wouldn’t either.

      I'm not making a fetish over labels, I'm making a fetish over what I think is a key substantive difference between one sort of method (manipulative experiments, which we can call "JoeBobs" if you like), and another sort of method (observational studies, which we can call "the super-awesome method" if you like). Yes, absolutely, there's more than one way to evaluate our hypotheses. Which is precisely my point–those different ways of evaluating our hypotheses are *different* in important ways. A point with which I assume you agree, since otherwise why would you have distinguished three different ways of evaluating our hypotheses at all? And again (for at least the third time), acknowledging that different methods are in fact different does NOT imply anything *negative* about one method vs. another. Saying "A and B are different" does not imply "A is better than B".

      I didn’t say “observational” and “experimental” were “contradictory” terms. You did. With respect, please don’t put words into my mouth, or lump me in with reviewers whose views on the relative merits of experimental vs. observational studies obviously have frustrated you. I think I have made my own views clear. With respect, please do not attribute to me views I do not hold, based on the fact that I prefer to use the same terminology as other people who hold views different than mine.

      As I'm sure is evident, I'm finding this discussion increasingly frustrating. I think that's a symptom of an unproductive discussion, or perhaps a discussion that was once productive but is no longer. And I'm not sure if there's anything else I can do on my end to make the discussion productive again. I feel like I've been as clear as I can be, and I confess I'm surprised that further clarification seems necessary, as I hadn't thought I was saying anything controversial, or even non-obvious. But clearly, my efforts to clarify aren't having their intended effect. So rather than continue to annoy you and Jeff, two of our most active and thoughtful commenters, I'm just going to step aside from this thread. You're of course welcome to comment further if you feel moved to do so, but I'm unlikely to reply. Hope there are no hard feelings (certainly, there are none on my end).

      • I'm sorry that I was unclear. When I said "you think" I did not intend to refer specifically to you, Jeremy. I was referring to anybody reading who agreed with that statement. It was a failed attempt at being pithy. My apologies.

      • As usual, we agree on pretty much everything. I was mostly addressing folks reading this who do have a clear anti-observational-work bias, which you clearly do not. There are lots of people who think that observational and experimental are antonyms of one another. Of course the discussion needs to stay productive.

      • Jeremy – why do you get the impression I was annoyed?! I'm not at all (there was a smiley face in my last post!) – I thought we were just discussing the meaning of "experimental" in the context of the paper by Carmel et al. I'll reply to your other comment above in a while. But perhaps one of the reasons why Terry and I sensed a negative feeling from you with regard to "observational" studies was the exclamation mark you added in the middle of this sentence:

        “The most common method in both the literature as a whole and in leading journals is observational (!), with experiments a distant second”

        That’s only a surprise if one has a very narrow view of what an “experiment” actually is. Which is where I joined the discussion!

      • Ah. You misunderstood the exclamation mark. It indicates my surprise that observational studies are much more frequent than experiments. That I found that surprising does not indicate that I have an over-narrow view of what an experiment is. It indicates that my own offhand impression of what sort of papers get published these days is very biased, presumably because I don’t read a random sample of the literature.

        I was annoyed because I misunderstood Terry’s comment as being aimed at me, and so mistakenly thought he was disagreeing with me. I’m good now.

  3. Robert Brandon has a nice paper on two meanings of experiment in "Theory and Experiment in Evolutionary Biology". He has a nice 2 x 2 table, like orthogonal factors in a 2-way ANOVA. One factor is manipulation v. non-manipulation. The other factor is hypothesis testing (confirmatory) v. non-hypothesis testing (exploratory). It's interesting to think about how different labs in a particular field address questions using methods within the different boxes. In my field there is yet another meaning of experiment. If you measure something with expensive equipment, say fluid flow using a laser sheet and high-speed digital cameras, then that's an experiment. If you used a ruler, then it's not an experiment.

  4. Turns out I can’t reply directly under your last comment to me, Jeremy, so I’ll post it here. As a couple of people mention above, there are different definitions of what constitutes an “experiment”, both in ecology and in other fields. Carmel et al. chose a very specific definition of “experiment” and that’s fine. But they then use this to assess the state of ecology as a science and use it as evidence that ecology is a “discipline which is considerably less dynamic than ecologists would like to believe”.

    But if they had chosen a more nuanced categorisation I think they'd have come to a different conclusion. To give just one example: thanks to the advantages of rapid modern communication systems we're now able to coordinate studies at different sites using different teams of investigators across large geographical areas, something that was very rare 30 years ago. Such studies are clearly not "experimental" in this narrowly defined way, but neither are they "observational" in the sense in which single-site studies 30 years ago would have been defined. But according to Carmel et al.'s criteria, nothing has changed over 30 years, and that's patently wrong.

    I'm not trying to put down Carmel et al.'s study; clearly they put a lot of effort into the work, as you say. But they found what they found because of a coarse categorisation that I don't think accurately reflects the field.

    • In saying that ecology is "non-dynamic", Carmel et al. are merely noting that things haven't changed that much in 30 years. They're not lamenting, "Oh, if only ecologists would quit doing all these observational studies and do more experiments." And many ecologists, including me, thought things had changed a lot in the last 30 years, even at the very broad-brush level Carmel et al. examined. I stand corrected on that.

  5. I think I'm flogging a dead horse here, but ecology clearly has changed ENORMOUSLY in the last 30 years with the advent of modern IT, molecular biology, etc. I went out for a long drive this morning and thought about this quite a bit, and the conclusion I came to is that the scale at which Carmel et al. have chosen to address the question of "is ecology dynamic?" is simply wrong. Take another field as an example: would you say particle physics is not "dynamic"? Yet I bet that if the Carmel et al. approach were applied to particle physics over the past 30 years, it too would appear static in the proportion of papers that were observational, experimental, theoretical, etc.

  6. Hi Jeremy,

    I enjoyed the paper by Cartwright in The Lancet that you linked to. A very nice exposition related to both causal inference and "effectiveness", which can perhaps be translated as "portability" or "extrapolability". She does a nice job of articulating the deeper issues behind causal inference, whether from Randomized Clinical Trials (RCTs) or from other approaches. I did wonder after reading the article, however, why you mentioned structural equation modeling, almost as if she had mentioned that method in comparison to RCTs. I did not find such a mention in the paper, and if I did, I would be quick to point out that SEM can be applied to experimental data.

    Perhaps I misinterpreted something.

    Keep up the stimulating posts!

    Jim Grace

    • Hi Jim,

      Perhaps I should've been a bit more specific: I meant structural equation modeling as applied to data not coming from randomized experiments. I should perhaps also have been clearer that it's me, not Cartwright, contrasting instrumental variables and SEMs with the "gold standard" of randomized experiments as a tool for causal inference. This was a perhaps too-brief attempt on my part to help make clear the relevance of her piece to ecologists.

      Thanks for reading, glad you’re liking the blog!
