Note from Jeremy: This guest post is by Britt Koskella, currently a NERC Postdoctoral Independent Research Fellow at Exeter University in the UK. Before that, she did her PhD at Indiana University with Curt Lively, and held an NSF international fellowship. She works at the ecological end of evolutionary biology, mostly on host-parasite coevolution, combining field and laboratory work. And she does a bit of blogging and is active on Twitter.
Both Meg and I are big fans of her papers, so we invited her to talk about whatever she wanted to talk about. She picked a topic near and dear to my own heart, the value of microcosm studies for ecology and evolution. We’re hoping that Britt will write more for us in future, so enjoy the first of what will be multiple posts from her.
Thanks to Jeremy, Brian and Meg for inviting me to write a guest post here, and for all the hard work they put into keeping up such an excellent blog. In his initial email, Jeremy suggested that (although I could choose any topic) I might consider “how microcosm studies, and model system-based studies more generally, are perceived in evolution vs. ecology.” I must admit that I had no idea there was a subgroup of ecologists who take issue with microcosm-type studies until I read the papers and blog posts Meg and Jeremy suggested. (I guess I have really lucked out with reviewers!) Indeed, the recent article that Mike Brockhurst and I wrote for TREE [1] on the power of experimental coevolution did not include any defensive language whatsoever, primarily because we did not think the approach needed to be defended. So I decided that, in this post, rather than responding to standing criticism of experimental microcosm approaches in ecology and evolution, I would simply lay out my naïve enthusiasm for them.
When I read theory papers, I think hard about whether I agree with the assumptions being made that underpin a given model. There are always assumptions. For example, some evolutionary models assume that selection and epistasis are weak (so-called quasi-linkage equilibrium) in order to arrive at an analytical solution. This can work sometimes, but I am always skeptical of the results of these models until I see them backed up by simulations in which selection and linkage disequilibrium matter. All evolutionary and ecological models make assumptions because they are trying to capture biology in a way that is both intuitive and meaningful. The assumptions are often laughable for those of us who study natural systems, but we stop laughing when a complicated phenomenon can be explained in a very simple way. The key point here is that no one (I hope) would argue that theory is useless because it glosses over interesting biology. Indeed, I would say that theory is useful precisely because it does just that. When a very simple model can explain a very complicated phenomenon (such as the maintenance of sexual reproduction) despite all of the assumptions being made, it is among the most satisfying types of papers to read. Am I then convinced that we don’t need data from the real world? Of course not. Can I then design a better and more informed study in the real world with which to examine this phenomenon? Most certainly.
So let’s apply the same logic to microcosm studies. Every scientist appreciates that the world is complicated. Too complicated to be fully explained by models or simulations or even our most sophisticated of statistical approaches. However, within that complexity are some really nice, simple processes that can lead to certain patterns, but which sometimes do not (i.e., when other processes are at work as well or given the stochasticity of nature). As a scientist, my approach has always been to ask first about patterns in nature, and then to see whether I can generate that same pattern in the lab using as few processes as possible.
For example, when I started my PhD with Curt Lively I was enthralled with the great data that he, Mark Dybdahl, and Jukka Jokela had collected from snails and trematodes living in lakes in New Zealand [2]. They had shown (and continue to build on this great evidence) that trematode populations are constantly evolving towards increased infection success on the most common snail host genotypes in the population. Since these trematodes castrate their hosts, this then selects against these common host genotypes and gives rare hosts the advantage. These data are exciting for two reasons: 1) they suggest parasites can maintain genetic diversity in their host populations, and 2) they reinforce the theory suggesting that parasites confer a strong advantage to sexually reproducing hosts over those reproducing asexually (as the latter will not be able to generate rare or novel genotypes as readily to escape the coevolving parasite population).
The alternative interpretation for this result is that something else (be it the abiotic environment or another biotic selection pressure, such as fish) is driving change in the host population, and that the parasite population is simply adapting to that change. To rule this out, we needed to move into the lab, so that’s what Curt and I did [3]. We brought a subsample of snail and trematode populations into the lab, divided them into 16 cattle tanks and allowed them to either evolve or coevolve over five and a half years, whilst controlling for all other abiotic and biotic selection. The results confirmed the pattern seen in the field: tanks that were given parasites year after year had lower frequencies of the initially common host genotype relative to those that had not received parasites. Thus we could say that the pattern of rare host advantage observed in the field could be explained by parasite-mediated selection alone.
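The logic of parasite-mediated negative frequency-dependent selection can be sketched in a few lines of code. This is purely an illustrative toy (a matching-alleles model with made-up parameter values, not the model or data from the study): parasites are selected to infect the common host genotype, which in turn drives that genotype down and hands rare genotypes the advantage.

```python
# Toy matching-alleles model of negative frequency-dependent selection.
# Two host genotypes, two parasite genotypes; a parasite infects (and
# here, heavily sterilizes) the host genotype it "matches". All
# parameter values are invented for illustration.

def step(host_freq, para_freq, infection_cost=0.8, para_selection=0.9):
    """Advance the genotype-1 frequencies of host and parasite by one
    generation of selection.

    host_freq, para_freq: frequency of genotype 1 in each population.
    infection_cost: fitness lost by a host facing a matching parasite.
    para_selection: fitness gained by a parasite matching a common host.
    """
    # Host fitness falls with the frequency of matching parasites.
    w1 = 1.0 - infection_cost * para_freq
    w2 = 1.0 - infection_cost * (1.0 - para_freq)
    new_host = host_freq * w1 / (host_freq * w1 + (1.0 - host_freq) * w2)

    # Parasite fitness rises with the frequency of matching hosts.
    v1 = 1.0 + para_selection * host_freq
    v2 = 1.0 + para_selection * (1.0 - host_freq)
    new_para = para_freq * v1 / (para_freq * v1 + (1.0 - para_freq) * v2)
    return new_host, new_para

# Start with host genotype 1 common (90%) and parasites evenly split.
h, p = 0.9, 0.5
trajectory = []
for _ in range(60):
    trajectory.append((h, p))
    h, p = step(h, p)
```

Plotting the trajectory shows the two frequencies cycling out of phase: the parasite population catches up to the common host genotype, that genotype crashes, and the cycle repeats, which is the rare-host advantage suggested by the field data.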
I use this example to illustrate this point: microcosm studies are not meant to imply that the world is simple; we know it is not. They are meant to fill in the grey area between theory and nature. To test whether the amazing patterns we observe in the natural world can be explained, at least in part, by specific processes. And this is why I think they are not just useful, but imperative.
Of course, not all microcosm experiments boil things down to the simplest form, and indeed not all microcosm experiments focus on model systems. Studies that add complexity to microcosms often demonstrate that patterns found in the absence of complexity are reversed or muddled in its presence. For example, bacteria and phages that are coevolved in a test tube become more resistant/more infective over time, such that each retains the ability to resist/infect antagonists from the past (so-called ‘arms race’ dynamics) [4]. However, when you increase the complexity of the system by adding in the natural community of bacteria that typically live in soil, a very different pattern emerges. In this case, bacteria are most resistant to their current phages, but do not retain resistance against phages from the past (and vice versa) [5]. This dynamic, known as ‘fluctuating selection’, is more consistent with the local adaptation we observe in the field [6,7], and it suggests that the added complexity is actually a key explanatory variable in describing the natural patterns of bacteria-phage interactions. However, we would never be able to make that statement without the initial microcosm experiment in the absence of complexity.
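The contrast between the two dynamics is usually read off a ‘time-shift assay’, in which bacteria from one time point are challenged with phage from past, contemporary, and future time points. Here is a deliberately cartoonish sketch (invented all-or-nothing rules, not data from the cited studies) of what the two outcomes look like in such an assay:

```python
# Toy "time-shift assay": does a bacterial population sampled at time i
# resist a phage population sampled at time j? The two rules below are
# idealized caricatures of arms-race vs. fluctuating-selection dynamics.

def arms_race(i, j):
    # Arms race: defences accumulate, so bacteria resist any phage
    # from the past or present, but not phage from the future.
    return j <= i

def fluctuating(i, j):
    # Fluctuating selection: resistance tracks the contemporary phage
    # population only, and is lost against past (and future) phage.
    return i == j

def time_shift_matrix(rule, n=5):
    """Resistance matrix: rows are bacterial time points, columns are
    phage time points, entries are True where bacteria resist."""
    return [[rule(i, j) for j in range(n)] for i in range(n)]
```

Under the arms-race rule the matrix is lower-triangular (past phage are always blocked), whereas under fluctuating selection only the diagonal survives; this difference is what lets time-shift experiments discriminate the two dynamics.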
And the same phenomenon happens in the theoretical literature all the time, where simple models are expanded to incorporate more biological complexity and often show very dramatic differences in the outcome. But again, I would never argue that the simpler models were useless. Instead they were the key step in the process, laying the foundation for all other models to come and allowing new models to argue that it is the addition of complexity that is central to explaining the pattern. So I would argue that, just as theoretical models are working towards explaining natural phenomena in as few steps as possible, microcosm studies are boiling ecology and evolution down to their most basic parts and then slowly adding each step back in to see how much we can explain with how little. And I would go further and argue that I never fully trust a natural pattern that has been uncovered with fancy statistical models until I see the experiment to go with it (but feel free to take that apart in the comments section).
Okay, this is a long post already, so I won’t tackle the individual criticisms against microcosm studies – Jeremy has already done a fabulous job of that in his earlier post. I also realize I spend the whole post discussing microcosms, and no time at all on model systems. In part, that is because the entire argument can be remade by replacing ‘microcosm’ with ‘model system.’ If we can show that something happens in a way we expect by using a model system, this does not mean it does happen in nature in that way (or even all the time in that same system), but it does tell you that it can happen! And assuming the question being addressed is one for which a priori predictions were made based on natural patterns and/or theory, I find the results both satisfying and convincing!
[1] Brockhurst, Michael A., and Britt Koskella. “Experimental coevolution of species interactions.” Trends in Ecology & Evolution (2013).
[2] Jokela, Jukka, Mark F. Dybdahl, and Curtis M. Lively. “The maintenance of sex, clonal dynamics, and host-parasite coevolution in a mixed population of sexual and asexual snails.” The American Naturalist 174.S1 (2009): S43-S53.
[3] Koskella, Britt, and Curtis M. Lively. “Evidence for negative frequency-dependent selection during experimental coevolution of a freshwater snail and a sterilizing trematode.” Evolution 63.9 (2009): 2213-2221.
[4] Buckling, Angus, and Paul B. Rainey. “Antagonistic coevolution between a bacterium and a bacteriophage.” Proceedings of the Royal Society of London, Series B 269.1494 (2002): 931-936.
[5] Gómez, Pedro, and Angus Buckling. “Bacteria-phage antagonistic coevolution in soil.” Science 332.6025 (2011): 106-109.
[6] Vos, Michiel, Philip J. Birkett, Elizabeth Birch, Robert I. Griffiths, and Angus Buckling. “Local adaptation of bacteriophages to their bacterial hosts in soil.” Science 325.5942 (2009): 833.
[7] Koskella, Britt, John N. Thompson, Gail M. Preston, and Angus Buckling. “Local biotic environment shapes the spatial scale of bacteriophage adaptation to bacteria.” The American Naturalist 177.4 (2011): 440-451.
Regarding the importance of experiment for uncovering natural patterns, I think that an essential component involves the sorts of species a researcher is working with, and the overall intentions of one’s research. If a researcher is interested in uncovering general ecological patterns without special regard for the species or system under focus, then I certainly agree that experiment is important (if not essential), and that microcosms, etc. are a very useful way of deriving these patterns. On the other hand, as in many research aspects of large carnivore conservation, there are systems that simply are not amenable to experimental manipulation, yet require data for on-the-ground action.
In these cases, I suspect that the best we can hope for is to understand what the fancy statistics tell us, and understand what their limitations may be.
Absolutely! I would never suggest that one need bring lions and tigers into experimental microcosms (although it does conjure up a fun image). However, I would say that if we want to understand fundamental ecological theory regarding, for example, the effect of top predator removal on a community, then combining the study of natural communities with experimental predator prey systems offers a very powerful approach.
Ville Friman has done some nice multi-species microcosm work, and has a nice demonstration of one way predators might affect host-parasite interactions:
Friman, V., and A. Buckling (2012). “Effects of predation on real-time host-parasite coevolutionary dynamics.” Ecology Letters.
Although the results may be somewhat specific to the system being tested, once we have enough data (both experimental and natural) from across systems, we have a good shot at understanding common ecological and evolutionary processes, and their relative roles in explaining the patterns we can observe. Hopefully, this will then trickle down (or, up, I suppose) into policy-level decisions.
Regarding systems that are not amenable to experimental manipulation, the problem remains that you are left with observational data that suggest relationships (weak inference) vs. having experimental data that provide more compelling evidence (strong inference). Immediate action in the name of conservation may sometimes be necessary, but ideally you would evaluate the action. Adaptive management is an approach to understanding large systems through experimental manipulation, with the goal of learning from your management actions in the same way you would learn from an experiment. That said, there are plenty of examples where observational data are all that we have (or can hope for). And even adaptive management does not necessarily equate to experimentation since you cannot completely control the environment – I suppose you can only hope that nothing important coincides with your perturbation of interest.
My point is that regardless of your limitations, I agree with Britt that you still cannot fully “trust” the inferences made from observational data. You may trust them enough to suggest some kind of scientific understanding, and to guide management and policy. But it has to be a short leash.
Really nice post, Britt. I like the common thread of learning through simplification.
Nice post Britt, thanks for doing this.
Thought I’d take the chance in the comments to ask you about attitudes towards microcosms in ecology vs. evolution. You were surprised that anyone would ever have to defend them–whereas I’m surprised that no one you know has ever had to! I’m curious what you think the reasons for that contrast are. Because there’s nothing obvious to do with the subject matter (e.g., ecology is complicated, or is seen to be–but so is evolution). I suspect the reasons are historical. Because natural selection in particular was seen as a slow process that was difficult to study in nature, and because of the close links between evolution and genetics, fruit flies in bottles have had a key role in evolutionary biology from pretty early in the history of the field. If the people who helped found your field in its modern form worked in a particular system, it would be pretty surprising for the field to later develop so as to reject that approach in any significant way. And more recently of course, there are figures like Rich Lenski. In contrast, ecology doesn’t really have any comparable figures, at least not enough of them to have overwhelmingly shaped the entire field. Gause’s probably closest, but he’s just one guy.
I also wonder if the contrast has to do with the conceptual framing of the fields. Evolution has long had a core conceptual framework specified in terms of a relatively small number of key processes–selection, drift, migration, mutation, recombination, etc.–that operate universally. This naturally leads people to see differences among systems as quantitative rather than qualitative, I think. Some species have large population sizes, others have small ones, some species have high migration rates, some have low ones, etc. Whereas in ecology I think there’s more of a tendency to see different systems as qualitatively distinct, which makes it easy to just write off microcosms as irrelevant to nature.
What do you think?
Thanks for inviting me! First, I suppose I really have been lucky when it comes to not encountering any negative views on my own microcosm and experimental evolution studies. I have heard negative comments about other people’s bacterial microcosm works, but I always assumed that was just due to jealousy (as they all seem to get into Science or Nature) rather than a fundamental problem with the approach.
If I had to venture a guess as to why there seems to be a difference between ecologists and evolutionary biologists in their views on the utility of microcosm studies, I’d say it might come down to how applied the two fields are. Going back to Tor’s comment above, if you are working to conserve an area or protect an ecosystem, then you might be skeptical if someone came to you with data generated in vitro (as it glosses over all the important complexities you would have to deal with in the field). On the other hand, it’s only very recently that the applied side of evolution has been realized, and we still rarely use evolutionary theory to drive policy (unfortunately). So perhaps some of the reluctance in ecology comes from the consequences of thinking you have an answer only to find out you’d missed a key part of the question (e.g. a species you didn’t think was important).
I also agree with your points above…. the field of evolution has a history of trying to find “rules” that will allow us to predict what is likely to happen in a given population under given selection pressures. However, I doubt any researcher feels that their research is applicable to all situations/systems, and so we also have an understanding that different systems are qualitatively distinct. But I suppose one goal is to find general tenets that explain a large proportion of variation, despite all of the idiosyncrasies of particular systems. Is this not also a major goal of ecology? For testing these tenets, it seems that an experimental microcosm is at least as good, if not better, than statistical models of natural datasets. But I would argue that you need both.
Really great and interesting post. Every study system has strengths and weaknesses, and no study system can answer every question. For microcosms I completely agree with “If we can show that something happens in a way we expect by using a model system, this does not mean it does happen in nature in that way (or even all the time in that same system), but it does tell you that it can happen!”.
The research programs I find fascinating are those that start with really simple models to explain complex phenomena, test specific assumptions of the models with lab studies, rebuild or modify the models as/if needed, test the improved model with a microcosm study, and revise again. Then look to nature for a test of the revised model. It is amazing when a model breaks down what looks like a very complicated phenomenon into a couple of simple processes. The models that need very little revision to explain complex phenomena in nature are remarkable. Many times this is not the case, but models revised with data from microcosm studies often do a very good job of, or come very close to, predicting phenomena in nature. That being said, models and microcosm studies are designed to predict effects in nature. So testing their predictions in natural systems is the only way to know if they are correct.
I don’t think we should be contrasting one experimental system against another and trying to find the best system to study all questions. We should embrace different methodologies with the understanding that each system is best for different questions. It is undeniable that microcosm studies simplify natural systems and can exclude really important factors that operate at scales much larger than can be included in these systems. It is also undeniable that ecosystem studies are very ‘messy’ and it is difficult to do controlled and replicated experiments where only the factor of interest is manipulated. The best science relies on the strengths of both systems.
I couldn’t agree more and thanks for the comment.
I particularly like your contrast: “models and microcosm studies are designed to predict effects in nature. So testing their predictions in natural systems is the only way to know if they are correct” and find this the most satisfying way to do science!
There are real strengths of using model systems. For one thing, there are others out there doing the work you really don’t want to do yourself (for me, I never ever want to figure out the function of a gene! But knowing which genes are involved in virulence for the pathogen on which I work is amazing for generating evolutionary predictions.)… and then you can use that data to inform your own work directly. Those few model systems that have researchers from all fields studying them using very different approaches can yield some amazing results. But, as you say, you then need to look elsewhere to tease apart specific from general mechanisms.
I completely agree with you. Model systems are great, there is no way we could look at some mechanisms with natural systems. The best experiments incorporate information from many different sources and you should use every bit of information available to design studies.
Every level is interesting and important, but you can’t study it all by yourself!
As my own comment to Britt notes, I actually don’t agree that “models and microcosm studies are designed to predict effects in nature.” That’s one possible rationale for microcosm studies, which Britt has articulated beautifully. But there are other reasons for doing microcosm studies, which don’t depend on microcosms matching or “predicting” the behavior of any particular natural system, or indeed any natural system. See my old post.
While I agree that there are many reasons to conduct microcosm studies, I wonder: why use an artificial system to study community structure if it can’t predict natural community structure? If the results are only relevant to the artificial system they were generated in, then they haven’t explained anything beyond the specific conditions of the study. I guess this is technically true for all studies, no matter the system, and I may be splitting hairs, but I do think we are trying to explain phenomena that occur in nature at some level.
I read your old post and agree with many of your points. I hope I was clear that I see great value in microcosm studies.
That old post of mine answers your question…
Hmm, interesting thought re: attitudes towards microcosms in ecology vs. evolution perhaps being down to ecology being more applied. That hadn’t occurred to me. I’m not sure how true it is. At least some of the pushback against microcosms in ecology has come from ecologists who do pretty fundamental, question-driven field work…
Another interesting point that occurred to me: your rationale or motivation for doing microcosm experiments is subtly but interestingly different from mine. You do want your microcosms to act as simplified versions of some particular natural system, and find it most satisfying when the behavior of your microcosms matches that of the natural system. Whereas in my old post on microcosms I talked about how I don’t see my microcosms as simplified versions of any particular natural system, and don’t care whether or not they match the behavior of any natural system–but yet still think that my microcosms can help us understand what’s going on in nature. While I still believe in the value of my sort of microcosm experiment, I have to admit that your sort is easier to justify! Which may be another reason you’ve never run into serious pushback–you’re mostly doing microcosm work that’s clearly linked to particular natural systems, and which proves its own relevance by helping you tell a clear story about what’s going on in those systems (evidence from nature, and theory, and your microcosms, all points towards the same conclusion).
Also had another question for you, about how “microcosmologists” perceive one another’s work. I once had a conversation with Rees Kassen, about the work of another person doing experimental microbial evolution. Rees liked this other person’s work, but found it almost too neat and tidy, almost as if the experiments could hardly help but conform to the theories they were designed to test. To be clear, he wasn’t accusing anyone of misconduct or falsifying data! Rather, he was voicing a version of the “microcosms are rigged” or “microcosms just use organisms to solve equations” objections that I refuted in that old post of mine. I was surprised and interested to hear Rees say this, because of course he works in the same system and probably has himself heard these same accusations leveled against his own work by folks working in other systems. So I’m curious: do you ever see microcosm studies that in your view sort of function more as demonstrations than as experiments? More broadly, when you’re reading or reviewing microcosm papers, what sort of flaws or limitations do you most worry about, if any? I think in any specialized area of work (within or outside of science), the stuff that worries “outsiders” about that area of work often is different than the stuff that worries “insiders”, and I think those contrasts are interesting. So as an “insider”, what, if anything, worries you about microcosm work in evolution?
Interesting points/questions! I think the two are somewhat related…. as a general rule, there seem to be microcosm studies that seek to replicate what is happening in nature, but with more control, and those that seek to test an idea with a system that is amenable to lab study (regardless of the system’s role/importance in nature). Both are useful and fun to read, but I read them in very different ways. The latter type, I think of as ‘proof of principle’ – one step further than model simulations. These are the experiments that are often greeted with skepticism or described as ‘contrived.’ Part of that is a real criticism because many of the researchers running these studies ‘tweak’ the experiments until they find the result they were looking for.
This sounds worse than it is, I think. If I were building a model, and it spat out a very strange result that I couldn’t explain, I would go back and look at the assumptions and parameters to see what might be causing this weird result. I would then change them around until I understood why certain parameters lead to certain conclusions, and then I would publish the model that I was most confident was a ‘fair test’ and really answered the question I set out to address. This same approach is often taken in experimental evolution. After all, what good is it to publish a result from an experiment you designed that you can’t explain, wasn’t based on any predictions, and is not intuitive?
The other kind of microcosm work, however, does not follow this iterative approach. Instead, if you are seeking to replicate/test a pattern observed in nature, you can bring those same organisms into the lab and control all the background noise to observe the interaction/phenomenon/process you are interested in directly. In this case, you would not tweak the experiment or rerun it multiple times because you don’t have a result in mind… you are exploring, just as you do in a field study. That’s more black and white than I intended, but you get the idea.
However, just to add one final point – microcosm studies that use organisms as a tool to demonstrate that a phenomenon can happen or a given pattern can emerge under condition X might be somewhat contrived, but they are far from a sure thing! Someone (I wish I could remember who!) said to me recently, “if I could rig an experiment to show what I wanted, I would have way more papers than I do.” In other words, even if you run the experiment many times under many conditions… assuming you are cautious in your stats and interpretation, you will never get a result that doesn’t have meaning. The cause has to be there for you to see/demonstrate the effect. Sometimes you just have to be creative in figuring out how to uncover it.
Someone (I wish I could remember who!) said to me recently, “if I could rig an experiment to show what I wanted, I would have way more papers than I do.”
Whoever said that is the most brilliant person in the history of the universe and would make a great blogger. 😉
Your remark that what you call ‘proof of principle’ microcosm experiments are distrusted because readers wonder if they’ve been ‘tweaked’ until they yielded the desired result is interesting. I’ve never had that specific objection voiced to me, or seen it voiced in print. But it could well be lurking under the surface in some cases. My answer to that objection is basically the same as yours, I think: you always want to understand why your system behaves in different ways under different conditions. So if a theoretical model, or a field experiment, or a microcosm experiment, or even just some field observations, gives you some “weird” result, you want to know why that occurred. Which may well involve repeating the work under different conditions, or with different assumptions, or whatever. For instance, Rainey & Travisano’s famous Pseudomonas adaptive radiation only occurs under certain culture conditions, or at least occurs at very different rates under different culture conditions. If you change the culture vessel size, or the growth medium, or shake the cultures instead of letting them just sit on a lab bench, you get different results. All of which is actually interesting and has been the subject of papers, because it tells us about the selection pressures and other factors that lead to rapid, (semi-) repeatable adaptive radiation under certain culture conditions.
The only caveat I’d add is that it’s arguably important to reveal our weird results as well as our interpretable ones, and to reveal if we’ve had to tweak study conditions in order to obtain a given result. I’ve talked about this in an old post.
Ha! Well, in that case Jeremy, I’d say you have a long career ahead of you! 🙂
And, yes, I agree that including all results is the best way forward… but for some journals, there just wouldn’t be the space.
Great post Britt. Thanks to you and Jeremy for tackling this issue, i.e., the bias against mesocosm experiments, which in marine ecology are derisively known as “bucket science”.
I agree with both of your pieces and comments about the huge value of working out key ecological interactions in simple systems in which we can manipulate stuff. I have done many a mesocosm study, mainly for biodiversity research. I also do field experiments, “mensurative” studies (e.g. epidemiology), even Pseudo meta-analyses.
My point is, I use different approaches. Sometimes because only one can really test the hypothesis I’m interested in. For example, we have done a lot of mesocosm work trying to get at the “effects” of trophic skewing, species richness and higher trophic levels, etc. These manipulations often include 20 or 30 species (crabs, fish, amphipods, algae) and obviously, you just couldn’t do this in the field, e.g. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0036196
More generally, I think it is often valuable to address a given question (whether it is very conceptual or based on a field observation, conservation need, etc.) with multiple approaches. Although I’m at heart an experimentalist, I disagree with Dan that only via experiments can “strong inferences” be made. For one, all experiments have limitations. Two: hello, climate change and smoking as a cause of lung cancer?! We can and do make strong causal inferences in both cases, based on theory and observation (not experiments).
I think I am evolving into more of a macroecologist – not to imply a progressive change – and have largely given up mesocosm studies and don’t plan to do many more field experiments. One big reason is funding limitations. The latter two are WAY more expensive (given my system is underwater and a thousand miles away). Also, and I guess this is the main point of my comment, my mesocosm studies have been utterly HAMMERED in peer review. I have gotten an endless stream of negative reviews, often for the experiments “not being representative of real systems” etc. And Jeremy, I note that as an editor at Oikos, you rejected one of our better (still unpublished) experimental mesocosm papers largely on these grounds, if I remember correctly. I also get mercilessly hammered at NSF for using them.
I often take extensive physical measurements IN EVERY UNIT to document light, flow, and temperature conditions in the mesocosms, i.e., to compare to field measurements in a (often futile) attempt to convince reviewers that conditions are “realistic”. And we also do EXTENSIVE field surveying to justify our chosen animal densities (both relative and absolute), composition, richness, etc. No matter how much of this we do, we always get a know-it-all reviewer arguing “those two species don’t often co-occur where I live [2,000 miles from our study site]”, “conditions seem too warm, too light, too salty to be representative”, “there isn’t sufficient heterogeneity inside the unit to allow niche differentiation”. I could go on and on.
We often get this from the “pro-species-richness” camp (Byrnes, Stachowicz, Cardinale, Duffy…) when our experiments find weak or no effects of richness on a given response. And this often comes from people who use mesocosms themselves, but deploy criticism of the approach to undercut work that doesn’t support their views and conservation objectives.
But it is funny, I see attitudes like Dan’s in reviews for large scale, macroecological work, e.g., “hypotheses cannot be tested with purely descriptive data”. So maybe it isn’t that there is a bias against mesocosms, but rather, a bias against methods we are not familiar with.
Re: that old Oikos paper of yours, sorry! Afraid I don’t recall the paper or what I said about it. All I can say is, I judged it as best I could. And I certainly wouldn’t have had any pre-existing bias against it just b/c it was in mesocosms. Hope there are no hard feelings. 🙂
Actually, my own research deals exclusively with observational data so my opinion is not due to a bias against unfamiliar methods. I’ve not conducted an experiment since high school, unless you count experimenting with personal health!
My point was that Britt mentioned “trust” in statistical models used to describe observational data and I completely agreed that the leash has to be short on what you are willing to conclude from the output of such models. I don’t see what is controversial about this, or the notion that experimental studies provide strong inference while observational studies provide weak inference. I never said that weak inferences have no value. I’m just advocating a healthy respect for the uncertainty that exists with inferences made from observational data.
As for climate change and lung cancer, sure we probably have the evidence to move from association to causation. But that evidence could not have been provided by a single observational study.
Thanks for the comment, and sorry to hear that you’ve received some negative feedback about your mesocosm studies. I suppose this may come down to what you are trying to test? I’ve never attempted to simulate the natural environment (or indeed to saturate natural diversity), and so I’ve never been told my experiments are not ‘realistic’ enough. In fact, all of my work has been particularly unnatural in order to isolate a single cause and effect, as best I can.
I have, in the past, been the type of reviewer who asks for clear justification of a given environmental condition (e.g. temperature or resource level) based on what is naturally found, but this is usually when/if the goal of the experiment is to ask how some organism might adapt to future environmental change, etc., where the experiment is specifically aimed at predicting what will happen in nature. In this case, I have a problem with pushing, e.g., the temperature to ‘unrealistic’ extremes and then extrapolating this backwards to smaller changes. Anyhow, that’s just a particular bugbear of mine, offered to illustrate that I am not entirely immune to skepticism of micro/mesocosm experiments.
But I completely agree with you that a combination of approaches is best! Furthermore, it sounds like you spent a good deal of time and thought on what conditions you would use and how this best mimics the natural system. I am surprised this was still met with unease.
In terms of what we can and can’t say about observational data, my point was only that statistical models can show an effect of some factor, but we need much more information (be it experimental tests or a meta-analysis of many studies) before I would feel confident in assigning a causal relationship. Indeed, I believe Fisher felt quite strongly that the correlational data could not be used to imply that smoking causes lung cancer (http://www.york.ac.uk/depts/maths/histstat/smoking.htm; although maybe he just needed justification for his habit). Of course, he was wrong, and the evidence is now clear – but it’s a nice example of how controversial implying causation from correlational work can be.
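As a side note on keeping that leash short: a hidden confounder is the classic way a purely observational correlation can mislead (it was essentially Fisher’s proposed escape hatch for the smoking data – perhaps some constitutional factor drives both the habit and the disease). A minimal toy simulation (hypothetical, not drawn from any study discussed here) in which a shared driver Z induces a correlation between X and Y even though neither affects the other:

```python
import random

random.seed(42)

# Hidden confounder Z drives both X and Y; X and Y have no causal link.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # X = Z + noise
y = [zi + random.gauss(0, 1) for zi in z]  # Y = Z + noise

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = pearson(x, y)
print(f"observed correlation between X and Y: {r:.2f}")
# Expected value is 0.5 (cov = var(Z) = 1; var(X) = var(Y) = 2),
# despite zero causal effect of X on Y or Y on X.
```

A statistical model fit to X and Y alone would happily report a strong, significant association; only outside information (an experiment, a mechanism, or many studies ruling out confounders) can distinguish this scenario from genuine causation.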