What kind of scientific crisis is the field of ecology having?

I have been thinking a lot about crises in fields of science. I don’t mean “grants are shrinking” crises or “we continue to treat subgroups abominably” crises. Nor am I talking about the fact we are documenting an ecological crisis on our planet. Those are real and important. But I mean here “the science we are producing and communicating is wrong” kinds of crises. I think these crises probably say a lot about science. Both in how they managed to go wrong. And in how the crises got recognized and fixed.

I am going to list five different crises in five different fields of science (four are recent, one is old), and then I am going to ask what kind of crisis ecology is most likely to have (or is having?).

  • We got one big thing wrong (dietetics) – In the 1950s there was an alarming increase in the rate of coronary heart disease in America. A prominent scientist argued that this was due to the increase of fat in our diets. This view won out; by the 1980s nobody would have entertained any other explanation. The US government, in its dietary guidelines, painted fat as unhealthy and recommended limiting its intake. The American Heart Association agreed. Probably billions of research dollars were thus channeled into researching this. In the 1960s a number of researchers also suggested that carbohydrates/sugars were the cause, but they lost. Only in the last decade has this become an acceptable hypothesis again, and it seems increasingly clear that the increase in carbohydrates in our diets (a trend present in the 1950s but actually accelerated by the avoidance of fat) is also highly problematic, at least as problematic as fat. Evidence on the relative balance is not yet clear, but it is at least credible that sugar is much worse for us than fat. The definition of an acceptable research agenda, research dollars, and public policy were all off target for half a century. Oops.
  • We got a lot of little things wrong (psychology, medicine) – Insiders know this as the reproducibility crisis. In several fields researchers decided to devote some effort to actually reproducing previous findings rather than always trying to do something novel. The end result has been that a great many results, even results that ended up in textbooks, were not reproducible (failure rates of over 50%). FiveThirtyEight has a nice discussion of this crisis. But unlike the dietetics crisis, these results were largely one-off findings, not particularly interconnected or driven by an across-the-board agenda. And they haven’t particularly driven policy. It might be that the word “little” in my subheading is an exaggeration (some of these results got a lot of press coverage, like the idea of power poses), but the things that failed to replicate are certainly not “big” in the same sense as whether fat or sugar caused the surge in heart disease deaths, nor did they drive an entire research agenda for decades. I think what disconcerts most people following this is not the size of the findings being overturned but the sheer volume of them. We rationally expect some reversals in science. But we don’t expect more than 50% reversals. This has led to a great deal of focus on poor research practices such as post hoc hypotheses, p-value hacking, etc.
  • We paid too much attention to the small stuff (medicine) – Medicine, like every field, focused on p<0.05 as a criterion for publication. But eventually people noticed that this was leading to changes in medical treatment that were statistically significant (p<0.05) yet reduced mortality or morbidity by only a few percent. This was basically an effect size problem. It is also why you would see “coffee is good for you” news one week and “coffee causes cancer” the next – anything with a big sample size (hence p<0.05) could get reported, no matter how small the effect size. Medicine reformed itself (primarily by changing the requirements to publish in journals) and now, instead of focusing on p-values, focuses on effect sizes, usually expressed as odds ratios, and confidence intervals around those odds ratios. This of course still lets one assess statistical significance (is 1.0 in the 95% confidence interval of the odds ratio?), but it places the focus where it should be: on the effect size. Are 50% of patients better off, or 2%?
  • We totally missed the big one (economics) – Jeremy has mentioned several times that the failure to foresee the 2008-2009 Great Recession threw the field of economics into turmoil. As Jeremy has pointed out, the creation of a blog culture in economics was one outcome – paper publication was too slow for the real-time analysis needed to track this. The use of preprint servers in economics also ticked up, again for reasons of speed. There has also been an increasing focus on empirical reality (via econometrics, i.e. analysis of data) and away from neoclassical equilibrium theories (the Great Recession was anything but a stable equilibrium). It’s too early to tell whether economics will be more nimble (and more able to predict black swans), but at least they’re trying.
  • It took us deaths and a genius to move past a silly idea (physics) – Throughout the late 1800s physicists watched a beautiful theory (the wave theory of light, with light waves propagating as vibrations of an undetectable aether that formed the fixed firmament of the universe) become increasingly untenable. The Michelson-Morley experiments failed to detect the earth’s motion through the aether. The response was to hypothesize post hoc that the earth carried a boundary layer of aether along with it, which rather defeated the whole point of aether. Or to invent Lorentz contractions (which turned out to be right, though nobody had any idea why they worked). Scientists also found evidence that light behaved like a particle (e.g. the photoelectric effect) rather than a wave. Despite this growing body of evidence that the aether theory of light was wrong, it remained firmly believed and taught. It took 50 years, the retirement of many senior scientists, and a genius named Einstein offering an alternative explanation to get back on track. This example of clinging to a wrong idea was so egregious (in hindsight) that several historians of science have held it up as an important case study. But it is hardly alone: it took 70 years for geologists to get on board with Wegener’s continental drift hypothesis.
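The odds-ratio-and-confidence-interval reporting described in the medicine bullet is easy to make concrete. Here is a minimal sketch in Python of computing an odds ratio and its Wald 95% confidence interval from a 2x2 table, then checking whether 1.0 falls inside the interval; the function name and all the counts are invented for illustration, not taken from any real study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:
    a = treatment deaths, b = treatment survivors,
    c = control deaths,   d = control survivors."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of the log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Made-up counts: 30/100 deaths under treatment vs 50/100 under control
or_, lo, hi = odds_ratio_ci(30, 70, 50, 50)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# The p<0.05 question is just "does the CI exclude 1.0?"...
print("CI excludes 1.0:", not (lo <= 1.0 <= hi))
# ...but unlike a bare p-value, the CI also shows the effect size.
```

The point of the reform is visible in the output: the same line that answers the significance question also tells you roughly how big the benefit is.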

So there you have it. Five different types of crisis for a scientific field to have. All involve sloppy thinking. Several involve misuse of statistics and p-values. But all are real. All have caused entire fields of science to get off track for years or decades. But they differ enormously in the consequences of the crises.

But let me add one more flavor of crisis that could apply to any of the five crisis types:

  • We were tools of a special interest (dietetics) – There is a special twist to the dietetics crisis (focusing exclusively on fat as an explanation for heart disease for 50 years). It happened because one big name pushed it really hard, and we are now increasingly finding out that the one big name was bought and paid for by the sugar industry (always a strong lobbying group in the US). The Sugar Research Foundation not only bought a leading name; it continued for decades to tilt the scientific consensus in a variety of more subtle ways through its creative use of research funding. This is shades of the tobacco industry.

So. It is easy (and perhaps fun) to look at other fields and feel a sense of amusement at their failures. As that delicious German word, schadenfreude, suggests, it is human to enjoy others being discomfited. But that is not the point of this post. The point of this post is to force ecologists to look at ourselves. Is ecology in the midst of a scientific crisis that we’re just not talking about (but will be in 5 years)? If so, which kind of crisis? Do we have that “tools of special interests” flavor as well? If so, which special interest are we tools of? How would you fix ecology to fix/preempt a crisis? I’ve developed a quick poll on this to solicit your opinions, but of course I hope you’ll share your comments below as well. And I will share my own opinions on this topic either in a later post or in the comments once there is a robust discussion going on.


70 thoughts on “What kind of scientific crisis is the field of ecology having?”

  1. It’s very simple to see the degree to which ecological studies are addicted to one shaky (and certainly inaccurate, if not completely wrong) idea: climate change. It isn’t even clear which direction the climate is changing in many places; it clearly isn’t changing as much as the climate zealots claim that it is. But most important, the local average ambient temperature is a non-factor in the ongoing ebbs and swells of species expansions and extinctions. The direct role of man – not any indirect, CO2-mediated role – is the simple and obvious dominant factor in most ecosystems.

    Take polar bears – where current levels of arctic ice, while somewhat less than 30 years ago, are fully sufficient to support a burgeoning population. The growth of the population, however, is directly attributable to actions of mankind, especially restraint in the use of lethal attacks on the bears.

    Any study that includes the words “with the onset of climate change…” in its introduction or preamble is confessing to have started out with a false, preconceived set of notions, and in the worst cases, the researchers are actually fudging their data in order to support those socially popular conclusions. It has been pointed out by others that in studies where the data are mainly reports of sightings, sizes, and conditions of animals, the subjective can run amok, creating a large body of essentially false literature.

    • Going to have to agree to disagree on several points about the state of climate change.

      On your core point, though: I think the Millennium Ecosystem Assessment also pointed out that land use change and overexploitation are the main current drivers of biodiversity change, and that climate change is more of a future driver.

  2. This is such a great post. I’m totally kicking myself that I didn’t think of this.

    I wonder if your second and third bullets should be lumped together? That is, I wonder if the problem is the combination of small effect sizes and poor statistical practices.

    And I’d add a third (essential?) element to that toxic combination: vague theory. Theory that gives the illusion of being helpful (guiding the direction of our investigations, etc.) without actually being helpful because it’s too vague and flexible. Note that vague theory is worse than total lack of theory. When people are conscious that they’re exploring something totally unknown, with no idea as to what to expect, they’re more careful about not jumping to conclusions. Andrew Gelman is good on this in the context of psychology. I don’t know as much about it in the context of medicine. But my outsider’s impression is that vague theory is part of why we don’t have a great understanding of obesity and weight loss.

    Definitely don’t think ecology is in a crisis driven by special interests. There aren’t enough special interests pumping enough money into ecology for that.

    I don’t think ecology as a whole is in crisis. But I think there are research areas in ecology that sometimes go off the rails for extended periods. When they do so, I think vague theory is often a big contributor. There aren’t any zombie ideas about stuff ecologists have good quantitative theory about. Vague, generally-applicable theory like the intermediate disturbance hypothesis gets a lot of ecologists interested in the topic, while simultaneously also misleading them into thinking they’re doing effective hypothesis testing. Vague theory is a trap, like a low-quality habitat patch that organisms select in the mistaken belief it’s a high-quality patch.

    Quibble re: the example of economics: economics actually gives us good reason to think that recessions can’t be predicted. I agree that that was a crisis, but I don’t know that it was a crisis of forecasting.

    • I agree there is definitely overlap between the 2nd and 3rd (lots of little things wrong, and focusing on little things). They both come from over-reliance on p<0.05 to do our thinking. And if we filtered so as to focus only on things with large effect sizes, they would largely go away/not happen.

      But I think they could play out differently in ecology. In particular in a multicausal world like ecology (or psychology) there may never be things with big effect sizes but there could be things with small effect sizes that are reproducible.

      Have to agree with you about theory as a key ingredient as well. Although as the last example shows, theory is not a cure-all (and is even sometimes part of the problem).

      Off-the-cuff thought: will ecologists 100 years from now wonder why we spent 50+ years obsessed with simplistic Lotka-Volterra competition models that so clearly fail to make predictions about what is going on (there is never a single species unless humans invest huge energy to keep it so)?

      • I’m not so convinced. But only time will tell. I’ll bet you a beer at ESA 2117!

      • Thinking about it further, I think there’s an interesting post to be written on what it would look like to study a multicausal world *without* taking simple unicausal cases as a starting point. How do you study a complex, multicausal world *without* trying to build up from empirical studies considering one or two causal factors at a time, and from theory that starts with simple limiting cases? I don’t have an answer to those questions, but maybe you or someone else does?

        Because I’m me, I’m thinking of a very good paper addressing an analogous issue in a different discipline. Political philosopher Jacob Levy argues that there’s no such thing as “ideal theory” (e.g., Rawls’ theory of justice): https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/div-classtitlethere-is-no-such-thing-as-ideal-theorydiv/D93052C9D7CC52A54A26C9D34AACB6B5. He argues that you can’t derive from first principles what an “ideal” society or political order would look like, and only then talk about the inevitable compromises and imperfections of our imperfect real world. It’s like trying to treat frictionless models from introductory physics as an “ideal” theory of aerodynamics (Levy’s analogy, not mine). Aerodynamics is fundamentally *about* the frictions that intro physics assumes away. Analogously, political theory is fundamentally *about* moral and political “frictions”. You can’t figure out what the ideal political order is in a world with moral and political frictions by first assuming away the existence of those frictions.

        William Wimsatt has made an analogous argument in philosophy of science that I agree with: there’s no point in theorizing about how hypothetical idealized rational human beings should do science, because that doesn’t help real, non-ideal people, with all their cognitive limitations and biases, figure out how to do science as best they can.

        I’m still mulling this analogy over. Part of me still wants to say that the frictionless models of intro physics play a useful conceptual role in aerodynamics, in that they reveal the importance of the frictions that are central to aerodynamics. I think that’s the same conceptual role that simple models like Tilman’s R* model play in ecology.

      • To be clear I think Lotka-Volterra competition as an idealized model had value. I just wonder if we haven’t wrung most of the value out of it in 80 years and ought not to be moving on to the more multicausal world in which it actually exists.

        To me it is almost tautological that idealized simple models should have idealized simple implications that emerge pretty quickly. That’s important. But it’s also a call not to dwell forever on them, and to move on to handling more complexity.

    • I would agree that vagueness has a tendency to hobble theory, but at the same time I would argue a certain degree of vagueness is required to allow for the testing of predictions by the theory.

  3. Thanks Brian. A good read as always. Some of the crises you refer to have one common trend; they were built on fear. Fear of getting sick, fear of losing money, fear of catching a rare (but significant!) disease, or simply fear of change. The problem with fear-selling science is that it always gets you running. The scarier the implications are (or the big name behind them) the further you may run even though the effect size is small. Ecology is a science that nowadays sells fear in many different flavors. Too warm, too cold, too turbid, too many, not enough… . Note that I used the verb “sell” on purpose, because it reflects on the question I was originally asked about a year ago: “What do you guys sell?”

    This is not necessarily the case with other disciplines, like anthropology, archaeology, astronomy or physics. They do “sell” something (e.g., new technologies, answers to metaphysical questions or nature’s big unknowns), but more rarely do they sell fear. If ecologists are to sell fear, they also have to sell faith in humanity, in the resilience capacity of ecosystems, or things along those lines. Maybe selling fear is just not the way ahead for ecologists. Indeed, ecological threats fall low on the list of fears. Fears of getting into war, getting sick, catching a fatal disease, losing one’s job, losing one’s house, or having to move (migrate) are miles ahead in most people’s minds.

    Environmental drivers are changing and there is a long list of potential ecological threats coming along. The scientific crisis in ecology is perhaps about getting beyond “flag raising” and “hand waving” types of behaviour. For example, many decision makers and land managers consider human demography to be “the elephant in the room”, yet we only rarely discuss the issue in ecology.

    • I agree very strongly with this assessment about the dangers of fear-based science.

      A further consequence of fear-based science is that criticism of dire narratives becomes strongly discouraged. I recently shared an article with a colleague about Doug Erwin, who like many other paleontologists, is critical of characterizations of the present biodiversity crisis as the “sixth mass-extinction” (https://www.theatlantic.com/science/archive/2017/06/the-ends-of-the-world/529545/). My colleague’s response was that Doug was being “irresponsible” in attempting to question in any way the severity of the present day biodiversity crises.

      I think that attitude is increasingly prevalent in many areas of environmental science.

      • Re: the argument that certain scientific results should be subject to a higher burden of proof, or even suppressed, because they seem to lead to politically-unpalatable conclusions, see these old posts:


        I agree that scientific results shouldn’t be treated any differently just because they seem to support policies that some or even many scientists dislike.

        See also Fahrig et al. in press (https://carleton.ca/fahriglab/wp-content/uploads/Fahrig-Chapter-5-Kareiva-Marvier-2017-in-press-2.pdf), who show that 3/4 of studies that find significant effects of habitat fragmentation on biodiversity find positive effects–but only 40% of those studies describe the results that way in their abstracts. Studies finding positive effects of fragmentation on biodiversity also sometimes emphasize that their results shouldn’t be extrapolated to other systems–but studies that find negative effects of fragmentation on biodiversity never emphasize that. I think the Fahrig et al. paper highlights an underappreciated value of systematic reviews and meta-analyses: they’re somewhat harder to “spin”. Though not impossible to spin. Especially if–as is usually the case–the underlying data do not comprise a random sample of all possible study systems.

      • Thanks for pointing out those really great blog posts. Happy to hear other ecologists feel similarly about some of these issues.

        Unrelated: I hadn’t seen the Dornelas et al. 2014 paper before. Great result! Anecdotally, paleontologists tend to find similar results, that alpha diversity remains relatively stable across extinction boundaries and all of the action happens at the beta level. This paper on trilobites at the end-Ordovician (2nd largest mass extinction) might interest you: Adrain et al. (2000), Silurian trilobite alpha diversity and the end-Ordovician mass extinction.

      • re: Jeremy, it would be interesting to see how many studies, for example, find that development and sprawl are the major cause of biodiversity or species loss, but emphasize the much smaller climate change factor in abstracts and related press releases.

    • Raphael, I agree with just about everything you said. I think in a nutshell you could say ecologists (at least in my areas) are selling fear of a biodiversity crisis (which is real and genuine and matters a lot to most of us), but that fear is not hitting the radar for most everyday citizens, so we’re in a vicious cycle of amping up the fear even more. Andrew – I think the 6th major mass extinction is a great example.

      We might have a lot more luck selling e.g. adaptation (here’s what humans can do to minimize impacts and coexist) than fear.

      I suppose that could be a 6th type of crisis – studying things nobody cares about and not studying things the general public cares about (and no, that is not a basic/applied distinction – the public seems to care about the Hubble telescope and the Higgs boson).

  4. “All involve sloppy thinking.”
    I don’t think it is correct/useful to label beliefs that are eventually replaced by better explanations as sloppy thinking. That does not describe the process of science at all, and I can guarantee you that, for example, the folks against continental drift had real objections that were only overcome when plate tectonics was discovered. A participant in the discovery of quarks described a lunch with Murray Gell-Mann where Murray laid out the big objections to a composite model of elementary particles, and this before Murray himself proposed such a model (the correct one!). At any moment one chooses study material in the face of great uncertainty as to what will work out. This is how science works.

    • I agree with you basically. But I didn’t say that any reversal of beliefs represents sloppy thinking. Indeed, as you say, that is a hallmark of good science. But all of what I identified as a crisis (and labelled as sloppy thinking) went further. They got too hung up on p-values instead of thinking. They let a big name (and a special interest group) steer a field for 50 years in a way that wasn’t evidence based. They clung to a theory with no real empirical evidence for it, even as the empirical evidence against it mounted.

      • Hi Brian; you misunderstand me, and have it backwards.
        The history of physics is full of events that overthrew previously held beliefs; but people who held the previous beliefs are not guilty of ‘sloppy thinking’. Regardless of what historians of science say, physicists themselves do not accuse each other of such acts of sloppy thinking. Einstein did indeed revolutionize how we think of space and time; his predecessors were not simply sloppy thinkers.

      • There is an interview with Murray Gell-Mann somewhere on the web that stretches over a couple of hours (the interviewer is Geoff West, of metabolic ecology fame). In the interview Murray discusses many of the barriers to his own insights: beliefs, theory, and indeed data that stood in the way of guessing the correct direction forward. Well worth listening to.

      • I’m not sure we’re so far apart. My sloppy thinking comment was mostly addressed at some of the other examples. I will stick by my claim that repeatedly obsessing on p<0.05 when effect sizes are small and or the hypotheses are chosen post hoc is sloppy thinking.

        As for the 5th category, as I said, I agree that a willingness to reverse thinking is a hallmark of good science. That said, the incident around aether and Michelson-Morley was rather remarkable – remarkable enough that multiple philosophers and historians of science have felt the need to understand it. It was not like your example of Murray Gell-Mann, where an individual scientist speculated something couldn't work and then within a decade or two saw it differently. It was an entire field clinging to something that never had a shred of empirical evidence, just a certain intellectual elegance, and then clinging to it for decades, almost generations, in the face of mounting contrary evidence.

        I certainly have plenty of examples of where I have reversed my own scientific opinions and I hope others don't look back on them as sloppy thinking. They often involve expressing intuitive opinions and then revamping in the face of new data.

        Regardless of how we choose to label that one incident of aether (and I don't feel a strong need to label it, just it was rather remarkable), we both agree that science and scientists need to feel free to change their minds without that being perceived as a bad thing.

  5. Hmmm, interesting poll, though it could be argued that the multiple guess format results in ‘leading questions’. One big issue about small things is ecology’s inability (seen in many studies and the subject of at least two papers) to explain truly meaningful amounts of variation in the subjects of interest. I mean, call me old fashioned, and I know R-squared values aren’t everything, but if I see a scatter plot that looks like the big bang with a ‘significant’ R-squared of, like, 5%, I naturally wonder what happened to the other 95%.
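    The “big bang scatter plot with a significant 5% R-squared” is easy to manufacture, which is exactly the worry: with a big enough sample, an effect that explains almost nothing clears p<0.05 easily. A quick simulation sketch in Python (all the numbers here are invented for illustration):

```python
import math, random

random.seed(42)
n = 2000  # a big dataset, as in many observational ecology studies

# A weak driver: the slope is real but explains only ~5% of the variance
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.23 * xi + random.gauss(0, 1) for xi in x]

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(x, y)
z = math.atanh(r) * math.sqrt(n - 3)  # Fisher z-test for r = 0
print(f"R^2 = {r**2:.3f}, z = {z:.1f}")  # tiny R^2, yet p far below 0.05
```

    The regression is “significant” by a wide margin even though roughly 95% of the variation is unexplained – significance alone says nothing about how much we have actually explained.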

    We may also be experiencing a crisis of replicability – rather like the psychology folks. How many studies actually get replicated?

    Finally, my own reading and research seem to imply that one of the real big recent stories in ecology is the role of individual variation and microenvironments in determining species responses to abiotic variables. This is likely behind a lot of the feeble variance explained, but I have not found any monographs or books on the subject.

  6. “Medicine reformed itself (primarily by changes to the requirements to publish in journals) and now instead of focusing on p-values focuses on effect sizes, usually expressed as odds-ratios, and confidence intervals around those odds-ratios.”

    Hear ye! Hear ye! Hear ye! Come one, come all and get your confidence intervals! Use ’em early, use ’em late- please, do not hesitate!

    Yes, confidence intervals. Run Jane Run. See Jane Run. Run up the hill for a bucket full of confidence intervals. Give the extras away to your friends! Personally, I love confidence intervals and would not be caught dead without them. But please do not only use them at the end of your study… use them at the beginning too, when you should be estimating the sufficiency of sample and sub-sample sizes relative to your non-post-hoc hypotheses.
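    The commenter’s point about estimating sample-size sufficiency up front amounts to a standard power calculation. A minimal sketch in Python, using the normal approximation for a two-sided, two-sample comparison of means; the function name is made up, and the effect sizes are generic Cohen’s d benchmarks rather than values from any real study:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means with standardized effect size d (Cohen's d),
    using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A small effect needs roughly 20x the sampling effort of a large one:
print(n_per_group(0.2))  # small effect (d = 0.2)
print(n_per_group(0.9))  # large effect (d = 0.9)
```

    Doing this arithmetic before fieldwork, rather than reaching for p-values afterwards, is exactly the “use them at the beginning” discipline being advocated.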

    Yes, ecology has a crisis, and it is one that has persisted across generations: the ignorant, inappropriate and superfluous application of statistics. This singular problem has plagued ecology for its entirety. I am 56 years old and began my career in ecology at the tender age of 19, when I declared ecology and chemistry as my undergraduate majors. Every single ecologist I worked with from that time forward, at least when it came to testing actual data, not once bothered to check whether the key assumptions of the statistics they were using were obeyed by their data… Every one of these ecologists managed to become published… And of those ecologists who also taught ecology and/or statistics, every one of them stressed the importance of adhering to statistical assumptions in the curricula…

    Yes, ecology has a frigin crisis.

  7. Thanks for the post, Brian. It seems to me that evidence suggests crisis 2 is common in some (many?) sub-fields of ecology / evolutionary biology (http://dx.doi.org/10.1016/j.tree.2016.07.002). I also think that crisis 1 can emerge from crisis 2, especially when combined with imprecise theory (as pointed out by Jeremy above). A few years back I published a case study of crisis 2 problems accumulating, in the shadow of vague theory, to produce the illusion of a robust body of knowledge in behavioral ecology (http://dx.doi.org/10.1111/brv.12013). My case study was just about a single ‘well-studied’ species, but this single species appears representative of a massive body of research on the behavioral ecology of sexual signals.

    • Thanks for the great links. The 2nd one blew my mind. 40% of the studies didn’t even report an effect direction or size so they could be reused in a meta-analysis! That’s certainly a crisis of my type 2.

      And I wonder if anybody has ever done a meta-meta-analysis to see how many meta-analyses that looked for a publication bias found one. I’m sure the fraction is very high. 90%?

      • Good question. I don’t know of one off the top of my head. I suspect that there would be variation in publication bias across sub-fields as a function of a variety of factors.

      • By the way, do you find the evidence presented in this other paper (http://dx.doi.org/10.1016/j.tree.2016.07.002) to be less than compelling? As the author, I have some opinions about where it would be nice to have more data on the topic of reliability of literature in ecology, but if you have the time, I’m curious to hear your opinions. Thanks.

      • @Tim Parker:

        OK, I skimmed the paper. Seems a bit of a mishmash to me, lumping together incomplete or selective reporting of results with other unrelated issues like low-powered studies. I mean, it’s fine, all of the issues the authors review are worth thinking about. I’m just not sure they all fit neatly under one umbrella. Or maybe I’m being too narrow and not seeing some connections among issues that I should be seeing.

        I think the evidence for problems on some fronts is stronger than on other fronts.

      • I agree that the evidence is variable in strength, though there are a growing number of people trying to improve our empirical understanding of bias / insufficient transparency in ecology.
        For some background, we wrote the TREE paper (http://dx.doi.org/10.1016/j.tree.2016.07.002) in response to an explicit request for a compilation of existing evidence that a lack of transparency was a problem in ecology and evolutionary biology. I suppose it could be seen as a bit of a mishmash as you say, but that’s because the evidence of problems came in many forms and from disparate sources.

  8. In the vector control world (specifically malarial mosquito control), the form of insecticide resistance thought to be most troubling is metabolic-based resistance (10-15 years ago it was target-site resistance, before metabolic resistance started to be really tracked). But behavioural resistance is not considered that big an issue (or likely to become one…?). That is largely because measuring behavioural resistance can only be done in the field (rather than by seeing changes in gene frequency or LD90), and as no one collected the data before (no baseline data) and no one collects it now, behavioural resistance is never going to be considered an issue, because it can’t be shown to be an issue.

    Perhaps a very niche ecological/epidemiological crisis.

    • That’s a nice example. I’m beginning to think there is probably a whole separate set of crises (or maybe a 6th type of crisis with several subtypes) related to not doing the research society needs.

      I think not doing something that is needed/important because it is harder/not a path that has been blazed yet is a fairly general and common theme in ecology.

      • I agree, ecologists’ tendency to do what’s easy rather than what’s needed holds the field back. But of course, there are good reasons to do easy things rather than hard things. I think one way to partially square this circle is to focus more on model systems–systems in which there’s a match between what we’d ideally like to do and what it’s tractable to do (https://dynamicecology.wordpress.com/2012/10/18/ecologists-should-quit-making-things-hard-for-themselves-and-focus-more-on-model-systems/).

        Though of course, for some questions there’s a risk that model systems give you an answer that’s an unrepresentative or misleading guide to the answer you would’ve gotten had you somehow been able to conduct the same study in an intractable non-model system. The linked post has some discussion of how to deal with this problem.

      • Thanks for the blog post and to everyone for the discussion. This point about ecology dwelling on little stuff and missing the big problems is what drove my answers to the poll. Has anyone else ever walked out of a conference session on your favorite study system / organism because it seems like we’re still digging the same ditch? Has anyone else worried that what you do will add up to arcane papers fully read by only a handful of people? If so, you know what I mean – not just a matter of personal ambition, but a frustration that all “this” may matter but little to society and science. How to solve that? My only answer is to seek each of our scientific joys, with an eye to relevance, and hope for the best in this great, diverse, paper churn called ecology. And let your scientific joy (muse?) stray onto new paths, because fresh approaches at the intersections between disciplines can have great impact.

  9. (Sorry, I’m about to bring funding back into the discussion). The very fact there aren’t more funding sources could increase the probability we’re suffering from “We were tools of a special interest”. Not industry, per se, but e.g. NSF’s focus on paradigm shifting and/or novel research, or managers’ focus on applied biology. Maybe even publishing’s preference for large and positive effects. All of these are going to narrow down the questions and hypotheses (and acceptable solutions?) that are being addressed.

    • Definitely true that funding creates the incentive structure that drives much. I would say that broadly, funding over-incentivizes quantity of publication, which can certainly contribute to some of the crises of focusing on the small stuff and getting lots of little things wrong. Whatever funding agencies say, I think they also reward conservative science (a majority of my proposals have been rejected with lines like “too ambitious” – it’s an open secret that you have to have “preliminary data” that gets you halfway through what you are proposing to do to get funded). Not sure if that fits one of my crises or not.

    • If a given discipline’s funding is managed by a select group of gurus (study groups etc), then ubetcha, the vast majority of potential avenues of inquiry are snuffed out of existence. There is no means possible to avoid that outcome short of being truly open and unbiased. Problem is, the people who become entrenched within these exclusionary processes have no realization that they behave in this manner.

      This is why, for example, a ragtag group of video gamers was able to model the three-dimensional structure of a retroviral protease in just three weeks, while world-renowned scientists failed to generate a single valid model after 15 years of research… (http://www.huffingtonpost.com/2011/09/19/aids-protein-decoded-gamers_n_970113.html)

      Freeing oneself of the traditional funding mechanisms also frees one of these deeply entrenched biases. I can tell you from personal experience that such freedom is priceless.

      • Elliot, it’s very unfair to describe NSF program officers, the associated review panels, and the ad hoc reviewers as “a select group of gurus”. The NSF process may have its biases, for instance towards less risky research. But it is not run by a cabal of insiders who exclude everybody else or shower money on only a few favored lines of research.

        As illustrated by your own chosen example. It’s also disingenuous to describe the video gamers as the ones who figured out (some of) the protein structures they were challenged with. Biochemists designed the game. And they did it with funding from NSF, as well as other government sources. So your own example contradicts your claim.

      • Well, we can nitpick an argument to death, really. Perhaps Thomas Edison said it better than I did. While I am generally open to counter arguments and lengthy debate, the “system”, whether we are talking NSF, NIH or what have you, is, by design, highly restrictive and highly biased. It cannot be any other way. In this instance, Jeremy, I would say that as an academic, you are unable to see the forest for the trees. I do not mean that as an insult, but the fact is, you are unable to perceive the effect as a person so ingrained in the system. I speak from experience… as a person who benefited from millions of dollars of federal funding over decades of work. Simply enough, you cannot see the bias until you have tried it another way. No one can.

      • @Elliot:

        If you don’t want your argument “nitpicked” (by which you mean “having your self-contradictory example pointed out to you”), make a better one. I’m happy to follow up any link you provide and consider it on its merits. That’s how I figured out that your chosen example contradicted your claim–by following up the links you provided. I expect you to show me the same courtesy and respect. Rather than telling me that I’m biased and it’s impossible for me to become less biased just by virtue of the fact that I’m an academic. Or telling me that I should just take your word for it because of how old you are and how many millions of dollars of federal funding you’ve received.

      • @Elliot:

        Here’s some discussion of a natural experiment at NIH back in 2009 that used stimulus money to fund a bunch of grant proposals that wouldn’t otherwise have gotten funded (https://dynamicecology.wordpress.com/2015/04/24/friday-links-grant-review-is-not-a-crapshoot-and-more/). Regular proposals actually had higher variance in their eventual impact than stimulus-funded proposals, which is inconsistent with the idea that federal agencies only fund the safest science. And stimulus-funded proposals were more likely to be led by experienced PIs than were regular proposals, inconsistent with the idea that federal funding agencies are closed cabals.

        The same post also links to and discusses a Science paper using a different and much larger NIH dataset and different analyses to reach broadly the same conclusion.

        In the comments Brian and other commenters point out some limitations of those studies, and some reasons why the results might not generalize perfectly to NSF. Personally I think the results of those studies are consistent with federal funding agencies having some tendency to favor “safe” science. But they’re inconsistent with the extreme view that federal funding agencies are a closed cabal.

      • Hmmm, well, please understand that I do not prefer to engage in tit-for-tat dialogue, Jeremy. I say this at the outset because I believe there is a potential for our discussion to be characterized that way. But since you asked, I will provide an answer. Again, though, understand that I think my argument on this topic is pretty much self-evident. The mechanisms concerning public funding for science are by design highly exclusionary, restrictive, biased and discriminatory. Note, however, I have not attached a value judgment to this observation. I am simply calling a spade a spade, and not saying if it is good, bad or otherwise.

        Consider the quote from the following article: (https://www.timeshighereducation.com/news/many-top-scientists-did-not-have-first-says-study)

        “The findings have “important implications for decision-making when offering PhD places and research jobs” as “clearly degree grades are not a reliable indicator of research ability and potential”, he said.”

        If you read through this article, you will discover that many of the top-performing scientists (among them, ecologists) in the UK would not qualify for federal funding in the US. Their chances of receiving public funding in the UK do not appear so bright either. Now, consider the following persons (to name just a few) who would not qualify for federal funding, were they alive today:

        Leonardo da Vinci
        Antonie van Leeuwenhoek
        Ben Franklin
        Michael Faraday
        Thomas Edison
        Charles Darwin
        Gregor Mendel

        I could go on… and on… and on, Jeremy. I do not however wish to engage in a tit for tat so I am letting it go, as it were. But suffice it to say, that to allege our public funding mechanisms are not exclusionary, restrictive, biased and discriminatory is akin to saying the Earth is flat…

      • Um, I’m not sure what Charles Darwin’s undergraduate record has to do with the question of whether or not US federal science agencies are run by a highly risk-averse cabal. Yes, absolutely, there are biases in graduate school admissions, and some commonly-used admissions criteria are poor predictors of success in grad school. We should fix that (e.g., I think this is a fabulous idea: https://dynamicecology.wordpress.com/2017/08/23/introducing-eebmentormatch-linking-students-from-minority-serving-institutions-with-grad-school-fellowship-application-mentors/). But isn’t bias in grad school admissions a different issue than the one you were talking about earlier? Or do you see the biases in US federal science funding and in grad school admissions as two symptoms of the same underlying problem?

        I think it’s best if we just leave it here.

      • Well, maybe we got off on the wrong foot concerning our debate? The chain of public funding, as I have understood it for a long time, is that focal areas are determined by a relatively small group of senior scientists- most of whom are from top-performing academic institutions. These focal areas eventually are represented by study groups, who in turn review proposals and award funds.

        I am not saying it is diabolical or nefarious in any way. I do not believe those motivations are present. My point is, the vast majority of scientists doing science will never qualify for these funds simply because of their educational experience (or lack thereof) and/or the institutions with which they have become affiliated. Put another way, a proposal submitted by someone with a MS degree working at Podunk U might be a great proposal, but it probably won’t get funded. But submit the same proposal from Professor Glorious at Bling U and…

        Again, I am not attaching any value judgment to the current system of public funding. My point is that only a tiny fraction of scientists working in the US actually would have even a remote chance of acquiring public funding.

    • This post was thought-provoking, but I’d argue the poll options are a bit misleading. Any good scientist should be wary of answering “No chance” to most questions. Even P<0.000000000000001 means there's still a chance. Yet "perhaps" can be interpreted as "there's a good chance".

      • You may be right that I worded the yes and no options as extremes (No Chance, Definitely Yes), which pushed people to the middle (Perhaps) option, always a safe place for scientists. Yet we got a good number of No Chance and Definitely Yes answers. In any case, I promise not to represent this as a rigorously designed poll! Hopefully people had fun answering it.

  10. Really nice and thought provoking post and comment thread. Something I’ve been wondering is how we determine if we’ve reached crisis level? Or put another way, how can we tell the difference between a true crisis and some more minor issues that slow progress but don’t prevent us, overall, from drawing sound conclusions from good science. I suspect lots of us can think of real or plausible examples of several of the problems you present. But especially for “paid too much attention to the small stuff” and “got a lot of little things wrong,” the problem has to be rather prevalent to derail the entire field. Especially considering, as has come up in other recent posts/comment threads, that ecology arguably isn’t a particularly coherent discipline. Maybe sub-fields within ecology, and not the discipline as a whole, are where we would be more likely to see these sorts of crises play out.

  11. Pingback: Links to share | standingoutinmyfield

  12. Um, am I the first to mention R.H. Peters’ A Critique for Ecology? It’s been a long time since I read it, but my recollection is that Peters identified some elements of crisis, including untestable concepts masquerading as testable theory and a need for ecology to answer society’s questions more directly. Not that he was the first to do this either.

    • First to mention him on this post. He certainly raised some important issues. Most of the potential “crises” I identified were on the empirical/testing side. Peters’ work, as you say, largely looked at the theory side. But of course the two interact. I really agree with Peters on some points and really disagree on some others.

  13. I think the “we continue to treat subgroups abominably” crisis, horrifying on its face, *also* leads to a “the science we are producing and communicating is wrong” crisis.

    By systematically excluding segments of the population from science, we enshrine the biases of those privileged enough to remain.

    How can the resulting science produced and communicated be right?

    • With respect, I don’t think the biases of those who’ve been fortunate enough to have made a career in academic ecology are to do with the sorts of issues raised in this post. For instance, I don’t think there’s any association between the gender or ethnicity of the investigator and the sample size of the studies they conduct.

      • I wouldn’t expect that association either, but my sense is that the “sloppy thinking” crises Brian describes are more deep-seated in the scientific process than your example implies.

        For instance, I do expect there to be an association between our identity and the questions we ask, the hypotheses we generate, and the discussions of our results.

        Crisis 1 is about hypothesis generation.
        Crises 2 and 3 are about how we interpret, contextualize, and discuss our results.
        Crisis 4 is about what questions we ask.
        Heck, crisis 5 arose *because* there was overrepresentation of a particular perspective.

        Poor representation of multi-dimensional identity amongst practicing ecologists is a crisis because, at best, the resulting science is distorted: it is incomplete in a non-random way.

      • Hmm…I still don’t quite see it for the crises listed in the post. Saying that, *in general*, there might be some association between personal attributes and questions asked, is different than showing that there’s an association in any particular case. I think in some particular cases there is an association, and that in others there isn’t.

        I’d expect associations between questions asked and aspects of personal identity to show up most obviously when the questions bear on the personal identity. Joan (previously John) Roughgarden’s work suggesting alternative hypotheses to sexual selection and sexual conflict is an example from EEB. But taking the examples from Brian’s post that I know the most about, I haven’t seen any suggestion from anyone of any association between any attribute of personal identity and the replication crisis in psychology. For instance, the psychology studies that Andrew Gelman critiques have included work by both women and men. Nor have I seen anyone suggest any association between any attribute of personal identity and economists’ failure to foresee the Great Recession. It’s my impression as an outsider that economics as a field does have some blind spots and biases that are due to the field being quite male-dominated. One could argue that those blind spots and biases represent a crisis for economics. But as best I can tell, failure to foresee the Great Recession wasn’t anything to do with those blind spots and biases. And while I don’t know as much about it, I don’t really see any reason to expect physicists’ personal attributes like ethnicity, gender, etc. to shape their views on aether theory. Just because a particular perspective (here, belief in aether theory) was common in the field, and that certain personal identity attributes (e.g., being white and male) were also common in the field doesn’t mean the latter was a cause of the former.

        My impression from reading biographies of scientists is that there are connections between scientists’ personal attributes and personal lives, and their scientific work. But those connections often are quite complex and difficult to understand and untangle, and further are differently complex for different scientists.

        I completely agree that it’s a good idea in general for a field to be staffed by people with a diverse mix of personal attributes and backgrounds, as a hedge against the risk that the field will become over-focused on an over-narrow range of questions, or an over-narrow range of candidate answers to those questions. I’m just quibbling over whether the examples in the post illustrate that general point. Just because a field lacks that diversity of personal identity on one or more dimensions doesn’t necessarily mean that it will be overly-narrowly focused. Conversely, just because a field is diverse on all dimensions of personal identity doesn’t guarantee that it won’t be overly-narrowly focused. It’s a case-by-case thing.

      • A further thought: when I think about the blind spots and biases of ecology as a field that might well be due to ecology’s lack of diversity on some personal dimensions, I think about things like the relative lack of interest in urban ecology (though I think that’s starting to change). Another example is the veneration in some circles for field work conducted in remote locations by investigators spending long periods away from family. Terry McGlynn has an old post on how “field station culture” in ecology is in part a reflection of a time when it was widely considered normal and ok for men to just leave home for months at a time, leaving their wives at home alone to raise their kids.

      • My apologies. I wasn’t being clear; I agree with your first 3 paragraphs. I’m not saying that poor representation of humans amongst scientists led to the crises listed in the post; I’m saying that poor representation of humans amongst scientists should be listed as its own crisis because it also leads to wrong science (by similar mechanisms as the listed crises).

        I’m confused by your 4th paragraph. It seems like you start by agreeing that the humans practicing a particular discipline should be diverse to prevent narrow focus of that discipline, but then it seems like you think that may not be necessary:

        “Just because a field lacks that diversity of personal identity on one or more dimensions doesn’t necessarily mean that it will be overly-narrowly focused. Conversely, just because a field is diverse on all dimensions of personal identity doesn’t guarantee that it won’t be overly-narrowly focused. It’s a case-by-case thing.”

        I totally agree with the second statement, and totally disagree with the first. If we know there is a lurking correlation in how scientists do ecology (arising from overrepresentation of some axes of identity), and we know those correlations have biased the field in some way (given the examples in your follow-up note and a couple more below), why wouldn’t you expect the lack of diversity amongst ecologists to be broadly distorting to the science produced? I think it is reasonable to expect it to be distorting enough to merit attention as a crisis.

        I agree with your follow-up note and would add the treatment of an area’s condition prior to EuroAmerican invasion as a “natural control”, geographical biases beyond venerating remoteness, and an example of researchers studying a particular fungus and coming to different conclusions about the purpose of its smell depending on the investigator’s background (European researchers tested hypotheses about the smell warding away enemies because they found it repulsive, while Japanese researchers tested hypotheses about the smell attracting beneficial organisms because they found it comforting; I’ll find the reference).

      • Ah, ok, got it Michael. Seems like we’re on the same page. Thanks for taking the time to clarify.

        I’d be interested in that story about the fungus odor if you’re able to dig up the reference. That sounds very interesting.

  14. The loss of natural history expertise, collections and historical data constitutes a crisis in my book. Too much emphasis on modeling and theory, far too little on data preservation and curation. Definite problem. Once it’s gone, it’s gone.

    A second one would be the continuing piecemeal approach to field data collection instead of coordinated, large scale, multi-taxa monitoring and inventory systems. With some exceptions, e.g. FIA data, remote sensing data.

    Good post Brian.

  15. Pingback: A conversation: Where do ecology and evolution stand in the broader ‘reproducibility crisis’ of science? | Transparency in Ecology and Evolution

  16. Pingback: Happy 6th birthday to us! | Dynamic Ecology

  17. I think the crisis in ecology goes as deep as it can go: ontology. Nature is a stupid concept, as it requires the existence of artificiality. What is artificial? Is plastic artificial? Why, when it is made from natural products by natural beings using tools of natural origin? It only seems complex and elaborate because we are using a human intelligence frame of reference.

    If we remove the concept of nature, suddenly we start to manage ecosystems with humans as another species. We would talk about population sizes rather than dismiss any anthropogenic effect as un-natural and therefore an aberration. And most importantly, we would avoid the de-facto ecofascism we are going into by not geoengineering our way out of climate change.

    I know these are controversial opinions, but the idea of nature is fundamentally anthropocentric. Under current ecological doctrine, we are a species expelled from nature, mirroring the Christian story of the fall. Like Slavoj Žižek says, ecology (I believe he meant environmental activism, but the distinction is blurry) is more and more taking the place of religion as that which stands in the way of progress. It is the new opium of the masses.

    Perhaps these are extremely controversial opinions. I must say I found it funny how most people who responded to the poll think the problems in ecology are either a collection of small errors or focusing on small stuff. It’s basically saying you’re not perfect, but got most of it right. Is ecology in denial? I think climate change will force ecology to reckon with its assumptions and the extreme ugliness of human conceit and xenophobia that underlies them.

    Perhaps these opinions are too controversial to be published. But I believe in a few decades we will have another example of how history repeats itself in insidious ways, and from it will come an ecological theory that is compatible with us becoming a class II civilization able to survive geological timescales. You just have to extend current conservationist doctrine into the millions of years to understand how misguided it is.
