Why don’t meta-analyses in ecology often lead to subsequent theoretical insight? (or, why doesn’t ecology have more “stylized facts”?)

Here’s a cartoon sketch of how I think a lot of empirical research in ecology proceeds:

  1. Many ecologists get interested in some phenomenon that occurs or might occur in many different systems. Interspecific competition. Trophic cascades. Keystone predation. Ecosystem engineering. Curvilinear local-regional richness relationships. Whatever.
  2. Nobody really has much theoretical idea of how common or important the phenomenon might be, or what factors might influence its occurrence or strength. Maybe we have some vague verbal hypotheses, such as that the phenomenon might be stronger in the tropics or something.
  3. Many ecologists go out and do field studies to test for the phenomenon or its effects. They do lots of competitor removal experiments, or document lots of local-regional richness relationships, or whatever. The hope is that once we have some data to go on, empirical patterns will emerge and those patterns can then guide future theoretical and empirical work. Give theory a target to shoot at, as it were.
  4. Then somebody does a meta-analysis or other quantitative summary of all those studies, looking at both the overall average strength of the phenomenon, and at covariates that might be associated with variability around the overall average. Usually, those covariates are readily-available, “coarse” variables like the biome in which the study was conducted, the broad taxonomic group of the key species involved in the study, the latitude at which the study was conducted, etc. Usually, the headline result of that meta-analysis is that the phenomenon is common and strong on average. But the studied covariates either have no significant effect on the occurrence or strength of the phenomenon, or explain little of the variation in its occurrence or strength.
  5. And…that’s it. The headline result gets added to ecologists’ body of empirical knowledge. The effects of the covariates mostly get ignored, since after all they’re weak/noisy/non-existent. But beyond that the meta-analysis doesn’t inspire or guide any new theoretical modeling, or lead to any big new insights (even though the meta-analysis’ authors often say or hope that it will). Maybe the meta-analysis doesn’t even lead to any new work at all. It functions not as a jumping-off point for further work on the topic, but as an endpoint. This topic’s played out. We’ve got the answer, ecologists seem to say. Or at least, as much of an answer as we can expect to get easily. The low-hanging fruit’s been picked; time to move on to the next thing.
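To make step 4 concrete, here’s a minimal sketch of the calculation that sits under most such meta-analyses: random-effects pooling of study effect sizes in the DerSimonian-Laird style. The method is standard, but the function name and the effect sizes used below are purely hypothetical illustrations, not taken from any real study.

```python
# Minimal random-effects meta-analysis (DerSimonian-Laird estimator).
# Inputs: per-study effect sizes and their within-study variances.

def dersimonian_laird(effects, variances):
    """Pool study effect sizes under a random-effects model.

    Returns (pooled_effect, tau2). tau2 is the estimated between-study
    variance -- the 'unexplained variation' that covariates like biome
    or latitude are then (often unsuccessfully) used to explain.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    # Re-weight each study by 1/(within-study + between-study variance)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2
```

The “headline result” of step 4 is essentially `pooled`; the disappointment of step 5 is that regressing study-level covariates against the deviations summarized by `tau2` usually explains little of it.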

Here are some questions for you about this little cartoon sketch:

  1. Have I got it about right? If not, where am I way off base?
  2. Assuming that I’ve got this about right, could ecologists do better? If so, how?
  3. Following on from the previous question: what are the biggest and best exceptions to my little sketch? I’m particularly interested in exceptions to step 5: examples of ecologists going out and studying some phenomenon in a range of systems, with the resulting meta-analysis or other statistical summary inspiring productive theoretical modeling (that ideally then suggests or guides further empirical work). Rather than the meta-analysis ending up serving as an endpoint for the research program. The biggest and best example that comes to my mind is 1/4 power allometric scaling. Can you think of others?
  4. Lurking under the surface here is a bigger, broader question: what are the necessary ingredients for “pattern first” empirical research to lead to progress that goes beyond a statistical summary of the pattern? Not that the statistical summary doesn’t itself represent progress—it does! But what does it take to go further? We all say that good science ideally consists of ongoing, productive feedback between theory and data (don’t we?) Are there any broadly-applicable “rules of thumb” for what sorts of data provide the best starting point for productive data-theory feedback? Can anything be said in general about what sort of data give theory good “targets to shoot at”?*

*In economics, there’s a literature on this. Empirical data that are particularly good fuel for theory are known as “stylized facts”. A stylized fact is a simplified summary of an empirical finding, such as “education substantially increases lifetime income”. The term is originally due to Kaldor (1961), who identified several now-canonical stylized economic facts. Another classic reference on “stylized facts” and why they’re useful for fueling further theoretical (and thus empirical) work is Summers (1991). Notably, Summers accused empirical economists of the time of spending too much effort using sophisticated statistical techniques to precisely estimate model parameters and tease out non-obvious causal relationships in empirical data, activities which Summers argued were much less useful than producing stylized facts. Abad and Khalifa (2015) provide a recent philosophical review and analysis of the notion of “stylized facts”. Here’s one randomly-googled example of a theoretician proposing a model to explain some stylized economic facts; the model in turn predicts other stylized facts for which empiricists could look. I think ecology has stylized facts, but offhand I’m not sure it has enough of them (you can’t have too many!), or that it’s good enough at producing them. The cartoon sketch above is a proposed stylized fact about ecology’s paucity of stylized facts.**

**Too meta?***

***You can never be too meta!

33 thoughts on “Why don’t meta-analyses in ecology often lead to subsequent theoretical insight? (or, why doesn’t ecology have more “stylized facts”?)”

  1. My initial thought is that the title of this could/would be better phrased as “_Should_ meta-analyses in ecology lead to subsequent theoretical insight?” Because it’s not clear to me why we should expect theoretical (as opposed to empirical) insights to follow. Take meta-analyses of health interventions: the insights from these are about whether or not a particular intervention does what it’s supposed to do. Either way, we have an insight (it works or it doesn’t). In economics, have theoretical advances been especially useful in understanding macroeconomic patterns? My impression is that they have not.

    • Fair enough. All I can say is that meta-analysis authors often say things like the following (it’s the end of the abstract of a randomly-googled recent meta-analysis of effects of ecosystem engineers on species richness):

      “This study is the first attempt to build an integrative framework of engineering effects on species diversity; it highlights the importance of considering latitude, habitat, engineering functional group, taxon and persistence of their effects in future theoretical and empirical studies.”

      But I suppose you could argue that that sort of thing is just boilerplate and shouldn’t be taken too seriously.

      As for whether stylized facts, and the theorizing they’ve inspired, have been useful in economics, I’d say a couple of things. One, if you want to push back against this post by arguing “who cares if meta-analyses are good at fueling theory, because most theory is useless anyway”, well, ok, but that’s getting into a huge and very different discussion! (Plus, you’ve invited the obvious retort “the reason most theory is useless is because it doesn’t have enough stylized facts to work with.” 🙂 ). Second, my admittedly-cursory understanding of the economics literature (including micro as well as macroeconomics) is that stylized facts and the theoretical work they inspire actually are hugely important for progress. The theoretical work in macroeconomics that’s most open to criticism for being useless is forecasting models like DSGE models, which aren’t really inspired by or aiming to explain or predict stylized facts.

      • I’ve just clicked through to the Wikipedia entry on “stylized facts” – it wasn’t a term I was familiar with – and it raises the ancillary question of what are the stylized facts in ecology? Would make an interesting follow up post. Here’s some to start off with:

        “All communities undergo succession at some time scale”

        “All species interact with other species at some point in their life cycle”

        “Species diversity peaks at intermediate levels of disturbance” [lights Jeremy’s blue touch paper then runs away fast…. 🙂 ]

      • @Jeff:

        “it raises the ancillary question of what are the stylized facts in ecology? Would make an interesting follow up post. ”

        Way ahead of you. Started drafting the follow-up last night. 🙂 No gazumping me! 🙂

        Your second suggestion seems to me to be too broad or obvious to count as a useful stylized fact. It’s a bit like saying “all species are made of matter”–true, but sufficiently obvious that it doesn’t demand a theoretical explanation.

        Your first suggestion probably counts as a stylized fact, but perhaps is too nonspecific to be a very useful one.

        Your third suggestion would be an excellent stylized fact if in fact it were a fact. That’s one of the challenges with stylized facts. “Stylized” means “has exceptions; only true if you squint a bit”. But at some point, if you have to stylize a fact too much it ceases to be a fact.

        Your third suggestion also raises a question: at what point does a troll of someone become so obvious it ceases to be effective as trolling, because the target saw it coming from 10 miles away and so took it with equanimity? 🙂

      • To quote Wikipedia:

        “Already in an early response Solow pinpointed a possible problem of stylized facts, by stating that there “is no doubt that they are stylized, though it is possible to question whether they are facts.””

      • @Jeff:

        A stylized fact about my blogging is that it’s all just rehashes of stuff I learn from reading economics blogs.

        I leave it to you to decide if that stylized fact is too stylized to count as a fact. 🙂

  2. Interesting… I think we might want to at least consider the ‘null’ hypothesis that Ecology might be too complex and ‘soft’ to actually have many unifying phenomena (i.e. a particular process resulting in a particular pattern across multiple systems)

  3. I think many meta-analyses in ecology are really more questions of relative importance (amongst a world of multicausality). Certainly the original competition meta-analyses were basically about “how common and important is competition” – there was already a well-developed theory of competition.

    A place where I think meta-analyses have been used as a test is top-down vs bottom-up control.

    A place where I hope meta-analyses prompt theory (although they have mostly prompted hand-wringing to date) is the idea that trophic cascades are more common in marine/aquatic systems than terrestrial systems. If true, this demands a good theoretical explanation.

    And a quick look at the Kaldor paper – it strikes me that every one of those stylized facts was what I would call comparative or even macroecological – they were mostly “productivity always goes up”, “but the upward trend is variable in nature across space/nations”, “capital to output ratio is constant over space and time” etc. Nearly all the meta-analyses you are talking about take a bunch of one-off (one site, one time) studies and try to determine generality about the one-off context (competition usually matters). Interestingly the trophic-cascade example I listed above would qualify as one of these comparative or macroecological analyses.

    Maybe you should stop thinking about meta-analyses and think about macroecology! My secret program to convert you to a macroecologist through blogging is halfway complete. *evil grin*

    • “I think many meta-analyses in ecology are really more questions of relative importance (amongst a world of multicausality).”

      Agree. So you could rephrase my post as asking “are there any useful stylized facts about the relative importance of variables in a multicausal world?” With the follow-up question being “If not, does that mean that ecologists should maybe all quit caring so much about questions concerning the relative importance of variables?”

      “And a quick look at the Kaldor paper – it strikes me that every one of those stylized facts was what I would call comparative or even macroecological”

      Agreed. Many (all?) stylized facts in ecology are comparative or macroecological. Though I do think it’s important to recognize that not all the comparative ones concern the sorts of comparisons that often get lumped under the heading of “macroecology”. For instance, my favorite stylized fact in ecology is from Murdoch et al. 2002 Nature: generalist consumers that exhibit population cycles always exhibit low-amplitude cycles with short periods, whereas specialist consumers that cycle always exhibit high-amplitude cycles with long periods (you can actually state it more precisely than that, but that’s the gist). A comparative stylized fact, yes–but not one that would ordinarily be called macroecological, I don’t think. It’s a stylized fact about population ecology.

      Off the top of my head, other stylized facts that aren’t macroecological include several from food web ecology. “Food web connectance ranges from about 0.03 to 0.3”. “Food web connectance declines on average with species richness because high-connectance, high-richness webs don’t exist”. “All food webs fall into one of two structural classes defined by their relative frequencies of different ‘modules’. Roughly, these classes are webs with fairly well-defined trophic levels and little omnivory, and webs with ill-defined trophic levels and lots of omnivory.” And “trophic cascades are ubiquitous, but attenuate as they propagate to lower trophic levels”.

      • Agreed – not all stylized facts are macroecological in scale or variables of interest, but they might all be comparative at either community or macro scales.

        I argued in my IBS talk that macroecology, at least by its founders, was defined not just by large scales but methodologically, in that it emphasized “blurring the lens” or “squinting” or approaching statistical mechanics or aiming for generality (different founders used different words). It seems I could have just defined macroecology as stylized facts about large scales.

  4. I think it’s because we don’t use meta-analyses the way we should use them. Too often, meta-analyses end up being a confusing and complicated summary without a clear take-home message. Or with vague, qualitative take-home messages like some of the examples above. Every meta-analysis I’ve done has suffered from this to a greater or lesser degree. But, live and learn.
    In my mind, meta-analyses should be used to identify the current state of our understanding. That implies a model or set of models at the end of a meta-analysis that capture the independent variables that allow us to predict the dependent variables of interest, the functional relationships between the independent and dependent variables, and the parameter estimates associated with each independent variable. And the bold claim that the model or models are the best approximation of our current understanding for a particular dependent variable.

  5. A tentative thought about why meta-analytical results don’t inspire more theorizing: the covariates in meta-analyses often are not the sort of thing one can easily theorize about. For instance, if you show that the average strength of X varies among biomes (desert, tundra, forest, etc.), well, how are you supposed to model that? Biomes differ from one another in a bazillion ways, not all of which can be summarized in a single parameter or small number of parameters.

    As I say, this is a tentative thought, and maybe it’s just wrong. For instance, species of small and large body size could also be said to “differ in a bazillion ways”, but that didn’t stop West, Brown, and Enquist from coming up with a model to explain 3/4 power scaling of metabolic rate and body size.

    • I think this is probably true. Many meta-analyses only have categorical covariates (continent or taxonomic family or order or class). It will be hard to develop theory, as not only are these categorical (making it hard to say much more than that things do or don’t differ), but they’re basically proxies for historical contingency. Meta-analyses with body size, productivity, etc. should give better results.

  6. Jeremy,

    If you equate theoretical insight with prediction, you might find some examples where meta-analysis leads to testable hypotheses in the statistical sense. Whether the question you are testing is really basic ecology or something more applied of course depends on what effects you are summarizing.

    Another point is that sometimes only effect sizes and their errors are integrated into a standard meta-analysis framework. But only rarely are the data from multiple studies integrated to answer the same question. The latter actually allows some statistical shrinkage to happen, so that information is better used (and of course data are used instead of estimates or summary statistics).

    Here are a few examples (excuse the self-promotion, but obviously I am most familiar with my own work). Most meta-analyses on species-area relationships (SAR) focus on individual studies (e.g. Drakare et al. 2006) without integrating the data; however, a unified model provides opportunities for study- or island-level predictions of future observations, and it also turns out that the linear SAR is not a 2- but a 3-parameter model if you include the error term, which also varies w.r.t. predictors (e.g. Solymos & Lele 2010 or Patino et al. 2014).

    My other example is a paper in the Condor where we integrated data and results from different studies to answer a very specific applied research question: how energy-sector development affects the abundance of birds. Or there is the whole evidential conservation approach of Andrew Pullin et al. (http://www.conservationevidence.com/), which is also full of meta-analyses leading very directly to best practices. And this is where the real work often begins, by culling out invasive weeds etc., so in a way it serves as a jumping-off point.

    Maybe it is more of a characteristic of applied ecological research where the question to begin with is very well/narrowly defined? SAR might be an outlier here, but these examples bring up new insights and provide testable predictions.

    • I don’t equate meta analyses with theoretical insight, and I don’t think a meta-analysis that fails to yield theoretical insight is a failure. I do object to lack of clarity about what a given meta-analysis achieves or could be expected to achieve. For instance, as Brian notes, if you think a meta-analysis that uses covariates like biome or continent is going to lead to theoretical insight, you’re probably mistaken. That’s not the way to discover stylized facts that lead to theoretical insight.

  7. A topic close to my heart, so several thoughts:
    1) Jeremy’s cartoon sketch certainly applies to some ecological meta-analyses, but not all. Many ecological meta-analyses are testing specific predictions of ecological theories, and I would argue that by supporting or refuting these predictions they lead to important theoretical advancements (examples include for instance meta-analyses testing predictions of the Janzen-Connell hypothesis and meta-analyses of many plant defence hypotheses) as the hypothesis in question might need to be modified or altogether abandoned and new hypotheses need to be developed. Other meta-analyses, as Jeff pointed out, are done to combine empirical knowledge on a particular question, often of applied importance, so no theoretical insights are expected there.
    2) Jeremy’s point 4 about small effect of covariates: to my knowledge, in most ecological meta-analyses very important covariates/moderators are revealed, to the extent that the mean effect is not meaningful and you really need to consider subgroups separately. This certainly has been the case in almost all of my meta-analyses, and I would also argue that in many cases showing a strong effect of a particular covariate may lead to theoretical advancement. But interestingly, probably the only meta-analysis in which I have been involved which did not find any important covariates is also the only one which established something akin to a “stylized fact” – low population size in plants is associated with lower fitness http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2745.2006.01150.x/abstract. As regards moderators such as biomes or continents, perhaps it is true that meta-analysis using such covariates is less likely to lead to further theoretical insights, but such moderators are often used for other purposes, e.g. to reveal knowledge gaps or test for geographical bias and to inform future primary research.
    3) Jeremy’s point 5 – I don’t think we should see a meta-analysis on a topic as an end-point. A good meta-analysis suggests new avenues for primary research by revealing knowledge gaps and covariates which are worth testing more specifically in primary studies. There are many examples where several meta-analyses have been done on the same topic either more or less simultaneously or after considerable time gap (e.g. with focus on different moderators). Of course sometimes meta-analysis simply confirms that there has been enough research done on the topic and the effect has been consistent across studies, so there is no point to pursue research in this area anymore.

    • Thanks for your comments Julia, great to have the thoughts of someone who knows a lot more about this than I do!

      Yes, meta-analyses that aim to test existing theoretical predictions are far from rare. My post doesn’t apply to such meta-analyses. And yes, as Jeff and Peter noted, if the meta-analysis isn’t done with any expectation of leading to theoretical insight, that’s fine (e.g., Peter’s example of meta-analyses addressing very specific applied issues).

      Re: your 2, this kind of gets back to a topic I’ve discussed previously in another context: when it’s best to think of an average (here, the mean effect in a meta-analysis) as meaningful vs. merely a meaningless epiphenomenon: https://dynamicecology.wordpress.com/2015/05/14/in-a-variable-world-are-averages-just-epiphenomena/ For instance, this issue comes up in debates over whether an average allometric scaling exponent of 3/4 is a biologically-meaningful “baseline” from which particular taxonomic groups might deviate, or whether it’s a biologically meaningless average of “true” exponents that vary from one taxonomic group to the next.

      Also re: your point 2, that’s very interesting about the one “stylized fact” arising from your meta-analyses!

      Also re: your point 2, yes, fair point about using moderators to reveal knowledge gaps. Although I could imagine ecologists of a theoretical bent pushing back by asking how important it is to close such knowledge gaps, if closing them isn’t likely to lead to theoretical insight.

      Re: your point 3, ok, perhaps we shouldn’t see meta-analyses as an endpoint (unless their results reveal very consistent outcomes across many studies). But in practice isn’t that often how they’re seen? I admit I’m just going on gut feeling here. I haven’t systematically looked at, say, the rate at which studies on a given topic are published before vs. after the first big meta-analysis on the topic. That would actually be a kind of interesting exercise, now that I think of it… Of course, even if it did reveal that meta-analyses often are associated with endpoints (or at least tailings-off) of research programs, it would be very hard to say whether the meta-analysis *caused* the endpoint, vs. merely *marked* the endpoint. No doubt research programs tend to wax and wane for reasons that are at least partially independent of whether a meta-analysis of their results happens to have been published.

      • Hi Jeremy,
        regarding your last point – yes, I totally agree that looking at “the rate at which studies on a given topic are published before vs. after the first big meta-analysis on the topic” is a cool idea and needs to be done. And perhaps not just looking at the rate of publication, but also type of studies published (e.g. primary studies, opinion/forum type of papers, modelling papers etc). I would love to know, for instance, whether my old meta-analysis on carbon-nutrient balance hypothesis made any difference to subsequent studies on the topic. This sort of before/after comparison might help to address the question as to how often meta-analyses change the trajectory of ecological research.

    • Some discussion of that in an upcoming post. I’ve been doing a bit more reading on “stylized facts” and have been somewhat revising my understanding of what economists mean by the term. Economists’ stylized facts tend to be more stylized than most of the ones on that list. The canonical examples of stylized facts fall in between complete speculation and well-established facts.

  8. Love this post and discussion. I have some points related to but diverging from things both Brian and Julia said, so, new thread!

    1) Covariates – I think this is *really hard* and *really important* with respect to testing theory. The more we can incorporate in a careful and well-reasoned fashion, the more we can test competing theories. I think of things like Hillebrand et al. 2009 Eco. Let. looking at when and where stoichiometric versus metabolic constraints are important. These tests of theory can only be done with a sufficient range of data and a sufficient range of covariates, both of which we often lack even in the most well designed experiments.

    Further, I think the recognition of this need to increase our power in testing theory by bringing in covariates then has some really neat follow-on effects when, as a meta-analyst, you recognize that you just can’t do it well with the data at hand. That’s one of the big factors contributing to the rise of networked ecology and experimental networks – which often start with a meta-analysis (or attempt at one) as a first attempt, realize the limitations, and then keep moving to something wonderful. So, in that sense, they can be incubators. NutNet, ZEN, KEEN – all at least have some roots in a meta-analysis or attempt at one. (Or at least the working group that brought the right people into the room where it happened.)

    But overall, the search for grand means I’m finding more and more…not interesting? That’s not a good way to put it. Rather, as Brian and I have talked about before, grand means are neat, but it’s what explains the variability – which we can sometimes only do with meta-analyses of globally well replicated experiments – that is really of direct theoretical interest.

    2) I see two non-confirmatory ways meta-analyses can advance theory (when they are even used to address theory-based questions). The first is through rejecting broad classes of theory. Codification of some theories and elimination of others is still fundamental theory-based advancement of science. I’m thinking of the old Hypothesis -> Theory -> Law saw we’re taught in high school. Yeah, that’s not how it works, but meta-analysis does move us along that axis.

    But more important is meta-analysis’s inevitable result of pissing people off when they see a theory rejected. Because it spurs them to ask why. And when it’s not a study or model design kind of thing, often it can push forward new/emerging fields. I’d argue quite strongly that Cardinale et al. 2006 and its result vis-à-vis the sampling effect being dominant smacked a lot of us back, and drove an explosion of rethinking in the field. Explorations of multifunctionality is one of those results, which is a field still pretty hungry for theory. It might not be a huge leap, but it drives at least a group of researchers to revisit a question they thought answered in a different light.

    3) One field that I’d love to see advance more is meta-meta-analyses. Hey! Watch it! I can hear your eye roll! If you peek at Hooper et al. 2012, what we were doing was a meta-meta. A meta-analysis of meta-analyses. Essentially, we wanted to ask a question – the relative importance of one driver estimated via meta-analysis versus others – for which we could only get global effect sizes from other meta-analyses. This was totally coarse and the question was totally exploratory rather than theory based – I cop to that! But what we were able to do impressed me, and made me think more about meta-metas as a tool to bring together disparate fields of Ecology that don’t really talk to each other that much, but are doing very well inside of themselves figuring out important things. I don’t see this very often, but keep wondering if we’re going to turn some corner and see more. And then I hope to see a meta-meta-meta, as it will do my liberal-arts/critical-theory-curious soul proud. Particularly if someone can invoke Foucault.

    4) But to agree with you and have an airing of grievances, my problem with meta-analysis as we currently use it is when we fail to recognize its limitations in addressing theory or broad questions. I see this creeping in more and more across many fields as observational data is brought to the table without careful consideration of resulting study designs, and it worries me quite a bit. Theory cannot be addressed without the right design to answer those theoretical questions, regardless of statistical technique used. Meta-analysis has some serious limitations in terms of what can and cannot be said about its results. That’s where I see the real disconnect with theory these days.

    • Thanks for the great comments Jarrett.

      Re: your #1, good point, although if somebody responds to a less-than-informative meta-analysis by setting up a distributed experiment, I’m not sure how much credit should go to the meta-analysis (and not sure if that’s what you’re arguing). If existing data are inadequate to address the question of interest, I’d give most of the credit to whoever recognized that and then organized the distributed experiment to get better data. Rather than saying “the meta-analysis of existing data failed to answer the question, but at least it spurred the collection of better data”. This same point kind of came up in a very different context in my review of How The Hippies Saved Physics (https://dynamicecology.wordpress.com/2016/09/08/book-review-how-the-hippies-saved-physics/). That book tells the story of a group of low-profile physicists (the Fundamental Fysiks group) who posed some big questions, to which they gave answers that could be shown to be totally wrong based on existing knowledge. But the demonstration of the wrongness of those answers turns out to have very useful practical applications in other areas. The book gives the Fundamental Fysiks group a lot of credit for “asking the right questions” and “shaping the direction of the field”. But I dunno–seems to me that they were just wrong. And if their wrongness just so happened to lead to practical applications, well, that just makes them lucky as well as wrong. Credit for those practical applications should instead go entirely to the people who first recognized them. Analogously, if people do lots of empirical studies that, when analyzed together, turn out to be inadequate to address a question of interest (say, regarding the determinants of the strength of trophic cascades), well, it seems to me that the credit should go to whoever recognizes this and does something about it.
(note that I’m not denying that it’s often difficult or impossible to recognize the need for new data until after one has discovered the inadequacy of existing data as revealed by a meta-analysis. I’m just making a possibly-pedantic point about how we should apportion credit after the need for new data has been recognized and the new data have been collected.)

      Re: your #2, what I’m questioning in the post is how often this happens.

      Re: Cardinale et al. 2006, huh. You clearly move in different circles than me. I thought everyone knew that transgressive overyielding is fairly rare (which is all that Cardinale et al. 2006 showed; rarity of transgressive overyielding is of course perfectly consistent with polyculture production typically being determined by some mix of selection and complementarity effects). Not saying you’re wrong about the effects of Cardinale et al. 2006 on people’s thinking. But if its effects were as you describe, I’m surprised.

    • A further thought re: pissing people off when your meta-analysis rejects a theory they like, thereby spurring them to ask why and thus leading to new and better science: clearly you have a more optimistic view of human nature than me. 🙂 Anecdotally, people who are pissed off by your meta-analysis (or your big distributed experiment, or your global-scale data compilation) tend to react mostly by either trying to poke holes in your results, or by going out and collecting more of exactly the same sort of data that’s been collected before and analyzing it in their own preferred way so as to obtain the result they think “should” be there. See the arguments over Mittelbach et al.’s meta-analysis of diversity-productivity relationships, the reactions to Vellend et al. and Dornelas et al., and the reaction to Adler et al. 2011.

      In fairness, I can think of an exception: Gary Polis and some other food web ecologists getting pissed off at (or at least not believing) conclusions from compilations of food web data culled from the literature, and responding by going out and collecting better data. Which ended up substantially revising conclusions based on the original compilations.

      I think it would make for an interesting but challenging comparative study in the sociology of science to try to understand (and even predict?) how people will react to the publication of a meta-analysis/distributed experiment/big data compilation rejecting (or otherwise bearing on) some important theoretical or empirical claim. What governs whether people react by abandoning the claim, vs. trying to defend the claim by poking holes in the analysis, vs. trying to defend the claim by collecting more of the same sort of data people have already collected, vs. being spurred to collect a different sort of data (e.g., spurred to do a distributed experiment), vs. deciding to just go work on something else entirely, vs. other options I haven’t thought of?

      It’s a bit like Hirschman’s work on “exit, voice, and loyalty” (which I admit I only have a cursory knowledge of, so what I’m about to say might be inaccurate). Members of an organization that’s going downhill in some fashion have three choices. They can leave (exit). They can give voice to criticisms of the organization, hoping to change it for the better. Or they can remain loyal, continuing to work within the organization as they always have (and possibly defending the organization against criticisms and arguing that nobody should leave). The interesting question, sociologically, is what determines a given individual’s choice among those three options.

      • OK, a personal story here: the first peer-reviewed paper I published was a rather weakly-argued, speculative review that took the position that flowering time in most plant species was not under strong selection (Ollerton & Lack 1992). It took me almost 20 years to test this properly, when Miguel Munguía-Rosas worked with me as a postdoc and he, other Mexican colleagues, and I performed a meta-analysis of all of the studies that had accumulated up to that point (Munguía-Rosas et al. 2011).

        Turned out I was only half-right: flowering time in lots of species _is_ under strong selection, but it varies a lot depending on latitude, growth form, and whether or not you consider absolute versus relative flowering time. I was quite happy to have proved myself not-quite-right; it was a really nice collaboration with interesting findings. But I wonder if I’d have been so sanguine if it had been someone else who had published that analysis? I’d like to think so, but who knows?

        Here are the references:

        Ollerton, J. & Lack, A.J. (1992) Flowering phenology: an example of relaxation of natural selection? Trends in Ecology and Evolution 7: 274-276

        Munguía-Rosas, M.A., Ollerton, J., Parra-Tabla, V. & De-Nova, J.A. (2011) Meta-analysis of phenotypic selection on flowering phenology suggests that early flowering plants are favoured. Ecology Letters 14: 511-521

  9. In line with many comments above, and especially Julia’s point 3: perhaps the issue is that we expect meta-analyses to provide answers they can’t. I think meta-analyses are useful for identifying current knowledge and pointing toward areas that need more empirical studies. They’re not the foundation of general rules of ecology.

  10. Pingback: Stylized facts in ecology | Dynamic Ecology

  11. Pingback: EdGE meeting – meta-analyses, theory and stylised facts in ecology – EDEN – Edinburgh Ecology Network

  12. Pingback: Meta-analyses, theory and stylised facts in ecology | Tundra Ecology Lab – Team Shrub

  13. Pingback: Meta-analyses, theory and stylised facts in ecology – Gergana Daskalova

  14. Pingback: Why aren’t ecologists prouder of putting old wine in new bottles? | Dynamic Ecology

  15. Pingback: Why do so many ecologists overestimate how informative small meta-analyses are about the mean effect size? | Dynamic Ecology
