Note from Jeremy: this is a guest post from Mark Vellend.
********************
A couple weeks ago I finished checking the proofs for my book (to be published in August – available for pre-order now from Princeton, Amazon or Indigo!; earlier related posts here and here), and I was struck by how reading something even for the 100th time can still prompt new trains of thought. It’s often a question of timing: this time I had recently browsed some links Jeremy pointed to about the “reproducibility crisis” in the social sciences, which struck a chord. One piece in particular identified as a core concern the fact that decisions about “data selection and analysis” often occur after the data have been collected in a given experiment, which introduces the potential for subtle, unconscious biases in favour of results that are in line with the preferred hypothesis. I was surprised and intrigued to learn that this is in contrast to pharmaceutical studies, which are apparently required to “register” all methodological details before a drug trial is conducted*.
How important a concern is data selection/analysis bias? This seems open for debate (see here, here, here and here), but for the sake of argument let’s say it’s just a minor concern each time it happens. The problem for ecology would be that decisions about data selection and analysis happen after-the-fact (at least partly) almost 100% of the time. So, if many little study-specific biases create large discipline-wide biases, then maybe we have a big issue. (Or maybe biases in different directions balance one another out? Or maybe the social-science reproducibility crisis is overblown?) This is where things collided with the content of my book. In the book, I attempt to draw general conclusions concerning the empirical support for hypotheses based on high-level processes. For example, how important is negative frequency-dependent selection among species in determining community dynamics? How does dispersal influence local diversity? I tried to take an unbiased look at the literature, but how biased is the literature itself? I’m not sure, but I sure got to wondering.
To be perfectly clear, I’m not pointing the finger at anyone out there any more than at myself: to the extent that there’s a problem (I’d love to hear thoughts on whether there is or isn’t), it’s a collective problem. For example, graduate students are routinely encouraged to include in their proposal documents plans for data analysis, but these are not treated as much more than a way to gauge whether the student seems to generally know what they’re doing. Things always look different in the end, and there are excellent reasons for this: (i) by the time the data are in, the stats experts (shall we call them “R-machos”?) will have changed their minds (probably more than once) on what constitutes “correct” or “best practices”; (ii) committee members and/or reviewers don’t agree to begin with about the best statistical methods, so you try this and that; (iii) surprises routinely arise during a study that have important consequences for analysis (outliers, important but unanticipated covariates, an animal dug up half your plots, a treatment didn’t “work”). And that’s just for projects that involve primary data collection. The entire field of macroecology is based on analyses dreamed up for data that already exist.
If we back up a bit, there are a great many points during the scientific process at which bias might creep in:
- What system to study? Let’s say you want to study trophic cascades. You find a system in which a strong cascade seems quite likely, or where we already know it to be present. Even in the latter case, you state that we already knew there was a trophic cascade (you’re studying a mechanistic aspect of it), but the paper still enters the collective consciousness as additional confirmation that trophic cascades are widespread, strong, and important. Or maybe you pick a system because it’s “tractable”, but does whatever makes it tractable (e.g., short generation time) also make it an outlier with respect to the process/phenomenon of interest (e.g., “rapid” evolution)?
- Which of several projects to invest energy in? You start a few pilot projects, one of which looks like it will reveal a big and really interesting effect of whatever (a predator, dispersal limitation, etc.), and so that’s the one you invest in. Definitely the right decision for your career. Definitely a source of bias in the literature writ large.
- Which of several analyses to report? The experts don’t agree, so you try several things. They all show the same basic pattern, so in the main text you report the one that shows things most clearly (e.g., biggest effect size). Even though you put everything on the table by reporting other analyses in the online supplements, that slightly bigger effect size is what lives on.
- Which of several potential manuscripts to invest in? I had a fascinating discussion with a colleague recently who was quite upfront about being hesitant to publish anything that could be used by someone arguing in favour of an activity (e.g., land development) that goes against the goals of conservation. On the other hand, the manuscript reporting results that could be used to justify conservation is at the top of the priority list. How often does that happen, and how does it influence our credibility? (Jeremy adds: related post from Meg here.)
- Which papers get accepted and in what journal? Tons written about this already, so suffice to say publication bias can happen, albeit maybe as just one source of bias along the way.
So what does all that mean? One can imagine someone looking for a cheap headline equating all this with a crisis of great proportions. But as with many things, it’s a matter of degree. We can never eliminate bias completely from any human endeavour, but we can acknowledge and try to be aware of the different sources of bias that influence collective wisdom, and try to evaluate their influence. This seems like a manageable issue, albeit a difficult one (e.g., detecting some sources of bias is not as simple as a funnel plot). This whole train of thought does, however, raise some interesting questions of immediate practical importance:
- Should we place more emphasis in ecology on committing to a particular kind of analysis before data are collected? Short of formal registration, the most obvious way to do more of this is via graduate student supervision. Of course not all projects are done by grad students, but a great many are, and there’s a formal oversight process already in place. Is changing course during analysis good science or a slippery slope? For related suggestions and ideas see posts by Meg (here), Jeremy (here and here) and Brian (here), including comment sections.
- Would you rather the primary data in a meta-analysis be from studies that were not designed to test the meta-question or from studies that were designed for this purpose? I sometimes feel better about using data collected for other purposes, because it means several sources of bias are eliminated, but I’ve heard the opposite argument (albeit not with clear reasoning, just an assertion that this is “not good”). Thoughts?
- In teaching (and writing books), how do you balance the desire for the material to be interesting (clear messages – trophic cascades rule the world!**) with the desire for balance (context dependence – trophic cascades may or may not be important depending on these 10 factors)? I think there was a post on this at one point.
I’m quite curious to know what people think about this…
* I have not done extensive homework here – just relying on a few blog posts (this one in particular) for this information. That said, I think the thrust of my reflections here is probably valid, regardless of the facts involved in the social-science debate and pharmaceutical studies.
** I don’t mean to pick on trophic cascades – I’ve just heard from more than one colleague (who knows better than I do) that books tend to present the famous examples as being more generally applicable than is the case in nature.
Registering your specific hypotheses and analysis plans before diving into the data doesn’t have to be an overly formal or onerous process. The real benefit is making clear what, exactly, was specified ahead of time (confirmatory hypothesis testing), and what unexpected possible relationships exist in the dataset (exploratory hypothesis generating). The important rule to follow is that when conducting null hypothesis significance testing, the same dataset that is used to create a hypothesis can’t be used to test it, and more often than we realize, that happens. Decisions about outliers, measures, and comparisons usually arise after looking at the data, and those decisions are of course biased by the incentive to find a publishable result at that point.
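That same-dataset rule is easy to demonstrate with a quick simulation (a minimal sketch in Python; all the numbers are invented, not from any study discussed here). Let the data nominate the “best-looking” of 20 pure-noise predictors and then test that predictor on the same data, and the nominal 5% false-positive rate balloons; a confirmatory test on fresh data behaves as advertised:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_candidates, n_sims = 30, 20, 2000
hits_same, hits_fresh = 0, 0

for _ in range(n_sims):
    y = rng.normal(size=n)                   # response: pure noise
    X = rng.normal(size=(n, n_candidates))   # 20 candidate predictors: also noise
    # "Exploration": pick whichever predictor looks best in this dataset
    r_abs = [abs(stats.pearsonr(X[:, j], y)[0]) for j in range(n_candidates)]
    best = int(np.argmax(r_abs))
    # Testing the chosen predictor on the SAME data (the practice warned against)
    if stats.pearsonr(X[:, best], y)[1] < 0.05:
        hits_same += 1
    # Confirmatory test on FRESH data (the step preregistration keeps separate)
    if stats.pearsonr(rng.normal(size=n), rng.normal(size=n))[1] < 0.05:
        hits_fresh += 1

print(f"false-positive rate, same data:  {hits_same / n_sims:.2f}")   # far above 0.05
print(f"false-positive rate, fresh data: {hits_fresh / n_sims:.2f}")  # about 0.05
```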
Aspredicted.org has a nice short preregistration system, and our Preregistration Challenge (https://cos.io/prereg) will pay you $1,000 for publishing the results of a preregistered study. The purpose of those prizes is to encourage folks who are not familiar with the process to give it a look.
Very interesting – thanks for sharing. Where on earth does the million dollars come from to pay 1000 people each $1000 for doing this? Anyway, I think we can agree on the benefits of specifying expectations and plans ahead of time. But creating incentives to make it part of general practice seems like a huge challenge. $1000 does the trick, but that’s not sustainable!
Meh, they won’t have to pay out. There is fine print, including where you have to publish, and there aren’t any ecology journals on it (and $1k doesn’t even cover PLoS One costs).
The idea for ecology seems pretty useless except for really large experiments (like LTER) or lab experiments, as much cool research in ecology seems opportunistic (let’s look at recovery after a fire burned half my plots, etc.).
Rant over.
The money comes from a grant from the Laura and John Arnold Foundation. It’s obviously not designed to be a long term funding source, but rather as an education campaign so that individuals can gain experience with the process, see the benefits it offers to one’s workflow, and provide feedback to the community to make it better.
There are currently about 500 journals on the list of eligible journals for the competition (https://cos.io/preregjournals/), and we are close to adding about 50 leading journals in ecology and evolution. I will definitely make announcements to this community when that comes through.
As for opportunistic research, a lot of that still uses NHST, and as long as you have enough time to specify what tests you’re going to confirm, you’ve got enough time for a preregistration. It’s very likely that other interesting trends will emerge from that process, but preregistration simply makes clear what hypotheses were tested on a dataset, and what hypotheses were created from that dataset.
Interesting. Thanks for the additional context, David.
Nice application of the replicability crisis to ecology.
I guess one thing I’ve always wondered is: if we focused more on effect size and r2 and only published papers with large effect sizes or high r2, wouldn’t the replicability crisis go away? All these ways of biasing things you mention are real, but to me seem to be mostly of small effect. Thus they can easily trump something ecological of small effect and tip it into p<0.05, but they can’t manufacture a genuinely large effect (e.g., r2>0.60).
Medicine has gone this way in response to their own replicability crisis (which happened a bit before the one now in psychology). Their effect sizes are almost all odds ratios and journals now have guidelines on what odds ratios are too small to publish even if they are statistically significant.
Do you worry at all that that just makes a fetish of some effect size threshold or R^2 threshold, the way P<0.05 arguably is a fetish now?
Also, tell me again what a "small" effect is? https://dynamicecology.wordpress.com/2015/11/05/whats-a-small-effect-anyway-and-when-are-they-worth-caring-about/
Not trying to be difficult, I think there's merit to this idea. But if you're going to have formal guidelines, you need to work out details like these.
Give people a number and they will figure out a way to oversimplify and abuse it (https://dynamicecology.wordpress.com/2015/05/21/why-aic-appeals-to-ecologists-lowest-instincts/). But no, I don’t think we would end up with just a different fetish. There are some concrete reasons, which I put forth above, why I think large effect sizes trump researcher degrees of freedom, whereas p-values exacerbate researcher degrees of freedom.
Clearly, for each system we need to work out what is a biologically large or small effect and make sure we’re measuring across a suitable gradient, or sufficiently strong experimental manipulation, whatever. Then, statistically, I think one good definition of “large” or “small” effects comes from the degree of shrinkage under a reasonably rigorous regularizing prior (or Lasso). Say, under a Laplace prior, a “large” effect will be mostly left alone, but a small one could have its mode shrunk to zero (see the sketch below). The art is to combine the biological definition with this statistical signal-noise type of definition. So rather than an arbitrary threshold for size, I’d rather have a more continuous evaluation based on degree of shrinkage.
…My $0.02
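To make that shrinkage idea concrete, here is a minimal sketch (Python, with scikit-learn’s Lasso standing in for the Laplace prior; the effect sizes and the penalty are invented for illustration, not calibrated to any real system):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))
# One biologically large effect (2.0) and four small ones (0.1), all made up
true_coefs = np.array([2.0, 0.1, 0.1, 0.1, 0.1])
y = X @ true_coefs + rng.normal(scale=1.0, size=n)

# The L1 penalty plays the role of the Laplace prior: small effects get shrunk
fit = Lasso(alpha=0.15).fit(X, y)
for true, est in zip(true_coefs, fit.coef_):
    print(f"true effect {true:4.1f} -> Lasso estimate {est:5.2f}")
# The large effect survives nearly intact; the small ones are mostly zeroed out,
# which is the continuous, shrinkage-based notion of "small" described above.
```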
“only published papers with large effect sizes” – maybe I’m missing something, but doesn’t that just create a huge bias of its own (lots of buried “negative” results)?
Maybe top journals publish things with R2 > 0.7, middle journals publish things with 0.4 < R2 < 0.7, and the other journals get the remainder. There are journals that publish exclusively negative results already.
From a meta-analysis perspective, yes. But we might need fewer meta-analyses if we focused on decisive results with large effect sizes. And maybe you’re right that “only publish” is the wrong approach. But I think a large effect size/big r2 has real value in avoiding researcher bias.
One thing that often crosses my mind when looking at meta-analyses is the underlying bias in scientific publishing. If we work on the assumption that negative results rarely get published, then logically there is bias in all meta-analyses, as you are only looking at those studies where there was, for example, a relationship between x and y. We often don’t know how many studies have looked for a relationship and found none, so our sample of studies may already be only a small subsample of all the studies that have looked into this, and is therefore by definition biased towards publishable results.
Absolutely, although this “file-drawer problem” is one that’s been tackled, if imperfectly. A systematic absence of negative results, especially when sample size is small, should be detectable statistically (see the funnel plot link in the original post).
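The detection idea can itself be sketched in a few lines (Python; a toy simulation with an invented selection rule, in the spirit of an Egger-style regression test rather than any particular published method). With a true effect of zero, suppress the non-significant small studies and effect size suddenly correlates with standard error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
effects, ses = [], []
while len(effects) < 100:
    n = int(rng.integers(10, 200))   # study sample size
    se = 1 / np.sqrt(n)              # standard error of the effect estimate
    est = rng.normal(0.0, se)        # the true effect is zero
    # File drawer: small studies only get "published" if positive and significant
    if n > 100 or est / se > 1.96:
        effects.append(est)
        ses.append(se)

# In an unbiased literature, effect size is unrelated to SE; here it isn't
slope, intercept, r, p, stderr = stats.linregress(ses, effects)
print(f"slope of effect size on SE: {slope:.2f} (p = {p:.4f})")
# A clearly positive slope flags the missing small negative studies.
```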
And on your last point – balancing clarity of message vs honesty about conditionality: I’m all in on embracing the conditionality of results in ecology: https://dynamicecology.wordpress.com/2016/03/02/why-ecology-is-hard-and-fun-multicausality/ I think we have to do this at a research level before we can do it at a book level though. Or maybe a book (or meta-analysis paper) is just the time to find the patterns of conditionality? I don’t know; I’d have to think through the sequencing more.
Well, I see two missing points on possible bias. The first is the part of the world where you are studying. Processes and systems can differ greatly, and often in very illuminating ways, from place to place in the world. Even when you are focused on studying a relatively well-known system, there are still many gaps that create possible bias. The second is funding: I claim that many places/systems/methods are not being analyzed because of a lack of funding. And, if we look beyond the US, the funding many times comes from local science agencies that have their own views on what is “important” to look for.
Good points. I’d say this falls at least loosely under the “tractable” criterion. With limited resources, we can more easily study places that are close to where we live, and there are far more funded researchers in the temperate zone than in the tropics. Those biases need to be recognized.
Well, we shouldn’t expect much progress in this regard if we always completely miss the order in which science is meant to be conducted. The scientific process takes the form observation > hypothesis > prediction, etc. But this process is often violated. In grad school, I had people asking me for my hypothesis when I had not made any observations. We rely on a grossly imperfect literature as a proxy for observation, and we also validate our findings with this literature. It’s a fact that most research is conducted by grad students, and most graduate students (at least in ecology) have not been to their field sites (or wherever) to make informed observations before they have to come up with their hypotheses, under the influence of their advisors. So in essence, their thinking is an extension of their advisors’, which may also be faulty. This is why the research process in many disciplines takes the form of an ad hoc procedure. It’s a self-perpetuating cycle, because this is how most advisors were trained, and they just keep the tradition alive. Imagine if Darwin had sat in Europe and started theorizing about systems of the world.
Two conclusions can be drawn from the fact that many hypotheses go unsupported: either scientists have bad judgment, or hypothesis testing is not a useful way of directing research. Heck, for the aforementioned reasons, I’m going to say that many scientists have bad judgment. I’m not a fan of hypothesis testing either, because I haven’t seen any hypothesis that cannot be written as a question, and this way people are not emotionally attached to their positions and would more readily accept ambiguity and appreciate nature’s complexity. Furthermore, although we are in the digital age, nature is and will always be analog, and stats or math cannot replace keen observation and critical thinking. Until we change the current culture, expect more zombie ideas and decades of meaningless arguments.
We might be veering away somewhat from the topic of bias, but the point about natural history observations providing a critically important basis for hypothesis generation is well taken. I’m not as pessimistic about everyone chasing theoretical ideas with only weak connections to nature – that happens for sure, but a great deal of ecology follows what I think you would consider the “right” process.
NOTE: 2nd year grad student so read at your own risk:
I think there are some potentially great benefits to meta-analyses based on primary data from studies that did not evaluate the same question the meta-analysis is addressing.
First, if the original studies address different questions, then bias related to the response in the meta-analysis may be less of a problem.
Second, often studies aggregate and handle data differently. If you can get your hands on the raw data, you could potentially standardize data aggregation in the way most appropriate for your analysis.
Third, effect sizes reported in published studies may emerge from different types of analyses (think all the ways people analyze diversity) that may obscure meaningful interpretation of meta-effect sizes.
Of course, each of these potential benefits comes with potential opportunities to insert your own bias (how to aggregate the data, which studies to include, do I use Hedges’ d, the log response ratio, etc.). And of course there are so many other unmeasured factors that vary across studies that could call into question the validity of the meta-analysis results. There are of course imperfect ways to measure these (among-study heterogeneity analyses, influence of modifiers, etc.). Careful meta-analyses that make clear the justification for decisions at these steps, and the limitations of the results (especially given the data available), I think are still very useful. At the very least they provide the information necessary for reviewers and readers to critically evaluate the interpretations and come to their own conclusions.
What are other benefits or problems associated with meta-analyses, especially when they are based on new analyses from the raw data, rather than published effect sizes? Are the three I listed above really benefits or do they add more complications (opportunities for bias) than they are worth?
Thanks Chris – 2nd year student input most welcome!
I think there are at least two distinct issues here:
(1) Meta-analyses based on data collected for other reasons. I agree, there are advantages.
(2) Can you get the raw data? This is an orthogonal issue. It’s always preferable to at least have the option of working with raw data, regardless of why they were collected.
There are actually different opinions on what constitutes a “meta-analysis”. Re-analyses of raw data don’t actually count according to a narrow definition. Take the NutNet distributed experiment: raw data on many individual experiments analyzed together, but not what you’d really consider a meta-analysis. Hierarchical, yes, but “meta”, not necessarily. If every study is first summarized by a single effect size (a single “analysis”), and those effect sizes are then analyzed as statistical observations (the “meta-analysis”), that’s the more narrow definition (as far as I know, anyway).
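Under that narrow definition, the “meta” step itself is small enough to sketch in a few lines (Python; the per-study effect sizes and variances below are entirely made up, and the inverse-variance fixed-effect pooling is just the simplest textbook version). Everything upstream of these numbers, including any re-analysis of raw data, would on this view be ordinary (hierarchical) analysis rather than meta-analysis:

```python
import numpy as np

# Hypothetical per-study summaries: one effect size and its variance per study
effect_sizes = np.array([0.40, 0.15, 0.55, 0.02, 0.30])
variances    = np.array([0.04, 0.01, 0.09, 0.02, 0.05])

# Fixed-effect pooling: each study is one observation, weighted by precision
weights = 1 / variances
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
```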
Walking through all the benefits and problems of meta-analyses in general is a pretty huge topic – maybe Jeremy et al. will write a post on that sometime soon!
A half-formed thought/question: tell me again what exactly “bias” means in this context?
I know exactly what it is in the context of classical frequentist statistics. There’s a well-defined population of interest, and you want to estimate some parameter of that population–its mean, say. That parameter is assumed to have some fixed but unknown value, which you estimate by taking a sample from the population and calculating the corresponding sample statistic. If your sample is a random sample, meaning that every individual/entity/whatever comprising the population is equally likely to be sampled, your sample statistic is an unbiased estimate–on average, it’s equal to the population parameter. But if it’s a non-random sample, it’s quite possible that the sample statistic will be a biased estimate of the population parameter. On average, the sample statistic will take on some value other than the value of the population parameter.
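That textbook picture fits in a few lines of simulation (a toy sketch in Python; the “population” is just made-up lognormal body sizes, and the biased design assumes detection probability proportional to size):

```python
import numpy as np

rng = np.random.default_rng(42)
# The well-defined population of interest: say, body sizes
population = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)
print(f"population mean: {population.mean():.2f}")

def average_estimate(biased, n=50, reps=1000):
    """Average of the sample mean over many repeated samples."""
    detect = population / population.sum()  # big individuals easier to find
    estimates = []
    for _ in range(reps):
        if biased:
            draw = rng.choice(population, size=n, p=detect)  # non-random sample
        else:
            draw = rng.choice(population, size=n)  # every individual equally likely
        estimates.append(draw.mean())
    return float(np.mean(estimates))

print(f"random sampling:      {average_estimate(False):.2f}")  # ~ population mean
print(f"size-biased sampling: {average_estimate(True):.2f}")   # consistently too high
```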
It seems to me that this isn’t a great analogy for many of the sources of “bias” listed in the post. It’s a superficially plausible analogy that doesn’t really hold up under close inspection. For instance, what’s the “population” of systems that might exhibit trophic cascades? Are, say, microcosm and mesocosm communities part of that population? Does observing a trophic cascade in, say, two different forests count as two independent observations sampled from the population of interest, or is that pseudoreplication because they’re both forests? Does the answer to the previous question depend on how far apart the forests are, or what species are in them, or when they last burned in a fire, or whether or not they’re logged, or what? Or does that question about the two forests not even make sense because the individuals or entities comprising the “population of systems that might exhibit trophic cascades” are species rather than patches of habitat? Etc. And if you can’t say with any precision what the population of interest is, then is it really accurate, or even a useful fiction, to characterize existing studies as a “sample” (biased or otherwise) from that ill-defined “population”?
And before someone says it, no, I don’t think hierarchical models are a solution here. At least, not a complete one. The issue is deeper than that, I think.
Bottom line: when you say that this or that system isn’t “representative”, you need to say what exactly it’s supposed to be representative *of*. And in many ecological contexts, I doubt that can be done with enough precision to be useful. So I’m not sure the whole metaphor of “sampling bias” is the way to go here.
Yes, our picture of how the world is and how it works undoubtedly will reflect those bits of the world we study and how we study them. So that, had we chosen to study different bits of the world in different ways, we’d have gotten different answers to our questions. But I’m not sure the “sampling bias” metaphor helps us think about that.
But I’m not sure of a better metaphor either.
Semi-relevant post: https://dynamicecology.wordpress.com/2015/05/14/in-a-variable-world-are-averages-just-epiphenomena/
A couple points:
(1) I think this is more or less the right analogy. Ecologists (or at least community ecologists) very often want to know whether a given process or phenomenon is of “general” importance, and by this they often mean that rather than being something that only applies in grasslands, it applies also in tropical forests, lakes, coral reefs, and arctic tundra as well. Some difficulty in defining what constitutes the population of things we sample from does not invalidate the qualitative analogy, the same way that difficulty in defining what a community is doesn’t invalidate the field of community ecology.
(2) So, what is that population of things? We could go about it in various ways. We could literally pick random geographic coordinates, block off a reasonable system-specific portion of the area there, remove the top predator, and see what happens. Or we could stratify: maybe start with an off-the-shelf biome classification, and randomly sample a few spots within each, or stratify any way you like. Then you just need to communicate what is meant by “generality” or lack thereof, as you say. A given phenomenon or process might be “generally” applicable in temperate lakes (all studies there give a similar result), but not generally applicable across systems (doesn’t happen on land, for example).
(3) And here’s where I provoke Jeremy…no, microcosms don’t count. They might help sort out the (im)plausibility of explanations for what happens in nature, but they don’t tell you what actually happens out there.
(E-mail permission just in to reveal Jeremy’s label on his own comment as “arm-wavy pushback”)
Re: your 1 and 2, that’s a reasonable response, and it’s the one I expected. I’m just not sure how much confidence to have in it. For instance, sticking with your example of trophic cascades, I’m thinking of amusingly opposing interpretations of Spiller & Schoener 1994 (famous example of a trophic cascade on Caribbean islands, involving Anolis lizards). If memory serves, Pace et al. 1999 described this as an example of a trophic cascade in a diverse, complex tropical food web, while Chase 2002 called it an example of a trophic cascade on a depauperate island. Or maybe I’ve reversed them (too lazy to go check). Another example: the recent argument over whether we ought to include human-impacted sites when we’re trying to test the “hump-backed model” of diversity-productivity relationships. Point is, I think another source of bias to add to your list is “which ‘population(s)’ to consider study X to have been sampled from”.
Re: your 3, let me provoke you right back. 🙂 So, you think Smith et al. 2005 PNAS (http://www.pnas.org/content/102/12/4393.full.pdf) were doing it wrong?
Personally, I think that if you’re trying to generalize and get unbiased estimates, it helps to sample the full range of variation. Which might well be more variation than nature happens to provide, or more variation than happens to occur in whatever range of natural systems ecologists happen to have studied. Sticking with the Smith et al. 2005 example, their microcosm and mesocosm data do indeed make me more confident than I otherwise would be that the true species richness-area curve for algae in nature is linear on a log-log plot over the full range of variation, with the slope they estimated.
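The estimation point here is just a regression one: widening the sampled range of areas tightens the estimate of the log-log slope. A minimal sketch (Python; the areas, the “true” c = 10 and z = 0.13, and the noise are all invented, not Smith et al.’s values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical areas spanning bottles to whole ecosystems
area = np.logspace(-6, 6, 25)
# Power-law SAR with noise: S = c * A^z, with invented c = 10 and z = 0.13
richness = 10 * area**0.13 * rng.lognormal(sigma=0.2, size=area.size)

# "Linear on a log-log plot": log10(S) = log10(c) + z * log10(A)
slope, intercept, r, p, se = stats.linregress(np.log10(area), np.log10(richness))
print(f"estimated z = {slope:.3f} +/- {1.96 * se:.3f}, r^2 = {r**2:.2f}")
# Refit with only the middle of the range (say, area[8:17]) and the standard
# error on z grows, which is why added microcosm variation can help.
```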
Plus, if what you want is *explanations* rather than *statistical generalizations*, then that’s a whole ‘nother ballgame and worries about “bias” in a statistical sense just aren’t relevant. Scientific explanation and statistical estimation are just two different things.
I definitely think microcosms have an important role to play – I draw on lots of microcosm examples in my book. If the results align with what’s found in the field, great, but what would we conclude if the microcosm data didn’t fit the phytoplankton SAR? My first hypothesis would be something quirky going on in the microcosm, and I wouldn’t be super excited to find out what that was. Someone else may well be jazzed to find out – to each his/her own…
@Mark:
What if you find that what’s going on in field system X is different than what’s going on in field system Y? Would you be super excited to find out? Or would you assume that there must be something “quirky” and therefore uninteresting going on in system X?
If phytoplankton microcosm data didn’t match the field SAR, I *wouldn’t* assume the microcosms were “quirky”. I’d assume that microcosms and field systems are *different*, with neither being “quirky”. And I might well think it would be quite interesting to find out why. For instance by developing alternative hypotheses and testing them in microcosms! I’d be interested in that *not* because I care about microcosms for their own sake, or because everybody has their own idiosyncratic opinions on what’s interesting. I’d be interested in that because it would be a really good way to figure out why the *field systems* behave as they do.
Let me ask you another question: let’s say the microcosm phytoplankton SAR and the field phytoplankton SAR were different. And then somebody figured out a way to estimate the field plankton SAR back in the Cretaceous, and found it matched the *microcosm* SAR, or matched *neither* the microcosm nor the current field SAR. Would the microcosm SAR then become interesting to you? Would the current field SAR then become “quirky”?
Even if your goal is to understand nature-as-it-currently-exists, I cannot for the life of me understand why you’d want to restrict attention only to nature-as-it-currently-exists. That’s just throwing away relevant information for no good reason.
But don’t just take my word for it: https://dynamicecology.wordpress.com/2013/06/03/microcosms-guest-post/
(p.s. All friendly, non-rhetorical questions. I’m enjoying our little debate and am genuinely interested in your answers. It’s very interesting to me because I know that you and I are on the same wavelength and share many of the same views, so I’m very intrigued to find an issue on which we seem to differ, perhaps quite a bit. And I find I learn the most from disagreements with people with whom I mostly agree. In my experience, disagreements with people with whom I disagree on everything just result in us talking past one another, so that I don’t learn anything or have any reason to change my views.)
“everybody has their own idiosyncratic opinions on what’s interesting” – I think that explains most of what looks like a disagreement here, but isn’t really. I think microcosms have an important role to play in the scheme of things, but I’m personally mostly interested in what’s happening, what’s happened (e.g., in the Cretaceous), and what will happen on earth, rather than what happens in a lab container. I welcome with open arms results from lab containers that provide insights into what happens (or might happen) in nature. Maybe my negative tone is influenced by the not infrequent cases in which this link has been oversold.
““everybody has their own idiosyncratic opinions on what’s interesting” – I think that explains most of what looks like a disagreement here, but isn’t really. ”
I’m not so sure. It looks to me like we do really disagree on whether microcosm studies can help us understand what’s going on in nature. I think they often can. You think they can’t, at least not very often or to any great extent.
” Maybe my negative tone is influenced by the not infrequent cases in which this link has been oversold.”
Whereas foremost in my mind are cases in which I think microcosms have informed our understanding of what’s happening, happened, and will happen on earth. So perhaps that’s an idea for a future post I should write: a list of cases in which microcosm work has informed our understanding of what’s happening in nature, and how it has done so. That “and how it has done so” bit is important because there are various ways microcosms might help us figure out what’s going on in nature. Some of which are (fairly) unique to microcosms, and others of which are common to any informative work in any system.
The challenge (at least in my mind) would be to write the post so as to make clear that there are general lessons here. I wouldn’t want to just have it dismissed as a cherry-picked list of the only unique/unusual/quirky cases in which microcosm studies have been helpful in learning about nature.
p.s. I’m now very curious to hear how you used microcosm studies in your book. Care to give any examples? I’m guessing that you used them as “demonstrations” or “proofs of principle”, but then said (more or less) “but to find out what *really* happens in nature, we have no choice but to turn to nature”?
I look forward to that post. Examples from my book include the indeterminate outcome of competition in flour beetles (drift can happen), competition-colonization tradeoffs (the Cadotte paper), competitive exclusion (Gause, Tilman phytoplankton), priority effects (various examples) and consequences of dispersal (various examples). And these are not followed by any text marginalizing their importance. I suppose in each case they serve as proof of principle (not evidence that these things happen in nature), and perhaps their collective presence in my book serves as proof that I see value in microcosms!
I’ll be interested to see what you have to say about that Cadotte paper. IIRC (and I may not, it’s been a while), he uses a rather different definition of “colonization” than the relevant theory uses (his includes post-dispersal population growth), so it’s kind of difficult to relate his results to theoretical work on C-C trade-offs…
And in case it needs saying, I don’t need any proof that you see value in microcosms! I’m just pushing you on what sort of value you see in them. It sounds like you see them as valuable for only a subset of the reasons that I see them as valuable.
This debate is valuable to this non-ecologist. Thanks guys. I admittedly lean (heavily) toward Jeremy’s view. Do results from freshwater trophic cascades generalize to marine? terrestrial systems? the human gut? A lab microcosm? (I’ve talked to ecologists that have a visceral reaction to using the human gut as a model for general ecological principles). I would think some results generalize and some are highly conditional, even within very similar systems. I see a benchtop experiment as just another ecosystem, with some results more general and some more conditional. Given the continuum from large scale natural experiments, to large scale manipulations, to small-scale field enclosures, to benchtop experiments, is there some objective way to demarcate “natural” from “lab”?
“(I’ve talked to ecologists that have a visceral reaction to using the human gut as a model for general ecological principles)”
That’s a lot of what this comes down to. Some (many?) ecologists have visceral gut feelings about what counts as a “real” ecosystem, with microcosms or human guts or whatever not counting *no matter how similar or different they are to ‘real’ ecosystems or in what ways.*
Said the microcosmologist. 🙂