Last week I polled readers on whether they shared my impression that general ecology journals only want to publish “realistic” theory, meaning theories tightly linked to data. I also asked readers if they thought general ecology journals should only publish realistic theory.
The answers were loud and clear: yes to my first question, no to my second.
We’ve gotten 102 responses as of this writing (about 24 h after the poll went up), and from past experience we know that the results won’t change much since most responses come in the first 24 h. It’s not a random sample from any well-defined population, obviously. But it’s large enough to be more than anecdotal, I think.
Respondents were a balanced mix of ecologists who primarily do empirical work (37%), theory (29%), or a mix (32%).
Almost everyone either shares my impression that general ecology journals (besides Am Nat) only want to publish “realistic” theory (43%), or isn’t sure (48%). Only 8% disagree with my impression.
Only 10% think general ecology journals should only publish “realistic” theory. The vast majority (80%) disagree. Another 9% aren’t sure.
Looking at the crosstabs, those who think that general ecology journals only want to publish realistic theory skew towards theoreticians (39%) and people who do both theory and empirical work (41%); only 20% are empiricists. Those who said “not sure” are disproportionately empiricists. And most (8/10) people who think that general ecology journals should only publish “realistic” theory are empiricists. The other 2/10 do both; none are theoreticians.
As discussed in the comments on the previous post, it's not clear that general ecology journals are in fact only interested in publishing realistic theory. It might be a case of author perception becoming reality to some extent. And not all unrealistic theory is created equal; some of it really isn't of wide interest to ecologists (the same is true of any sort of work, of course). See the excellent comments from Andre de Roos, a theoretician and an editor at Ecology, for what he looks for in theoretical papers submitted to Ecology. But even if general ecology journals only have a perception problem, I think that's still a problem. You don't want authors seeing you as unwelcoming to papers that you'd actually welcome.
Not sure what can be done about this. But the fact that most ecologists don't like this perceived state of affairs would seem to provide an opportunity. A general ecology journal that manages to convincingly signal its receptivity to good theoretical work might reasonably expect to start attracting more of it: work that would otherwise go to specialized theoretical journals. That could be an attractive proposition both for the journal and for the authors, who presumably want to reach a broad audience*. Convincing signals might include running special features on theoretical work, and publishing theory papers from the journal's editors.** Andre, for instance, notes that he publishes his theoretical work in general ecology journals, including Ecology.
*Of course, insofar as people doing “pure” theoretical work see their audience as comprising other theoreticians, they’re going to keep submitting to theoretical journals whether or not they see general ecology journals as receptive to “pure” theoretical work.
**Not that any journal wants to be a house organ for its editors, obviously. But if the theoreticians on the journal’s own editorial board don’t see the journal as an outlet for their own work, why should anyone else?
Hi Jeremy,
This is all interesting stuff, so thanks for bringing it up. Having read through the original post and the associated comments, I'd like to pick up on a point Brian made: a lot of published theory was never developed with the intent of being tied to reality. As an empiricist, albeit a relatively green one (my defence is tomorrow!), I'm not so much concerned with whether a theory or model is realistic as with whether its assumptions can be tested. Your example of the Rosenzweig-MacArthur model is a case in point. It's clearly not a realistic approximation of consumer-resource dynamics, with famously paradoxical behaviour, but we can test it, and the mismatches between predictions and 'reality' (read: observations) can tell us something. Empiricists surely want testable models first and foremost, and we should relish the chance to test new ones, but whether they are realistic or not is a minor consideration. Perhaps this is why your poll suggests that most ecologists, empiricists and theoreticians alike, don't really think that theory should have to be realistic.
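For concreteness, here's the standard textbook form of the model as I remember it (the symbols are my own choice): logistic resource growth plus a type II functional response,

\[
\frac{dR}{dt} = rR\left(1-\frac{R}{K}\right) - \frac{aRC}{1+ahR},
\qquad
\frac{dC}{dt} = \frac{eaRC}{1+ahR} - mC,
\]

where R is the resource, C the consumer, r and K the resource's intrinsic growth rate and carrying capacity, a the attack rate, h the handling time, e the conversion efficiency, and m the consumer mortality rate. The famously paradoxical behaviour is the paradox of enrichment: raise K far enough and the coexistence equilibrium destabilizes into ever-larger cycles, which is exactly the kind of prediction we can confront with observations.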
That’s not to say that theoreticians should restrict all models to what empiricists can currently test, but testing should at least be conceivable. I mean, if scientists in other fields can land probes on snowballs hurtling through space, for example, empirical ecologists can put some animals in pots and see what they do.
Danny
“my defence is tomorrow!”
A student whose defense is tomorrow is commenting on blogs! This either means there’s hope for the future, or that the apocalypse is coming. Can’t decide which. 😉
In seriousness, I mostly agree with you, but not entirely. I think there are other cognitive roles for theory besides generating testable hypotheses. For instance, Chesson and Huntly 1997 Am Nat (but I'd hope that other general ecology journals would've published it too) does conceptual clarification. It doesn't use mathematical modeling to generate testable hypotheses about the relationship between diversity and disturbance/harshness. Rather, it shows that some established "testable" hypotheses on this topic actually aren't testable because they're logically flawed: they're non sequiturs, because the assumptions don't actually imply the predictions they'd been thought to imply.
As another example, think of R. A. Fisher's model of why most sexually reproducing species have only two sexes. To answer that question, he asks what would happen if there were more than two. Is testing that model "conceivable"? Well, I guess it depends how far you're willing to stretch "conceivable", e.g., if you can conceive of genetically engineering a 3-sex species. But if you're willing to stretch it that far, then I think you're more or less saying that anything is "conceivable". In which case "conceivably testable" has ceased to be much of a constraint on what sort of theory general ecology journals should publish. In practice, of course, I suspect many ecologists would agree with you on the importance of being able to at least "conceive" of how to test a theory. But they'd construe "conceivable test" too narrowly for my taste, taking it to mean, e.g., "I can conceive how to estimate the model's parameters from data", thereby dismissing as empirically irrelevant a lot of theoretical work, like Fisher's model, that actually is empirically relevant but isn't "conceivably testable" in some narrowly construed way.
As a third example, think of May's stability-complexity result. Is testing that "conceivable"? Well, many ecologists have thought so, but arguably most or all of them were wrong. Many attempts to test May's idea are very flawed: the data used are so far removed from what May actually modeled, and linked back to May's model by such a long chain of loosey-goosey, arm-wavy reasoning, that it's questionable whether the tests are very informative (one of my own papers is an example: Fox and McGrady-Steed 2002 JAE). The reason is that May's model is *so* abstract, and makes *such* strong simplifying assumptions, that it's hard to see how to bring data from *any* particular system to bear on it. So does that mean May's stability-complexity result is of interest only to theoreticians? Far from it! Even if you can't test it (at least not very well, or at least not in any usual sense of "test"), it still serves a very useful cognitive role. It undermines the intuition (widespread at the time, and probably still widespread today) that *obviously* "diversity" (in some vague, unspecified sense) begets "stability" (in some vague, unspecified sense). In fact, it's not obvious at all. May's result shows that vague intuitions won't cut it here: you have to be very precise about what you mean by "diversity" and "stability" in order to develop a testable hypothesis about how they're linked. Which is something that it's vital for empiricists to recognize; otherwise they'll make serious mistakes when interpreting their data.
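For readers who haven't seen it written down, here's the result stated loosely and in my own notation, so treat it as a paraphrase rather than a quotation. Take a community of S species near an assumed equilibrium, give every species some self-regulation (diagonal elements of the community matrix set to -1), let any pair of species interact with probability C (the connectance), and draw the nonzero interaction strengths at random with mean zero and standard deviation \(\sigma\). Then for large S the system is almost surely locally stable if

\[
\sigma\sqrt{SC} < 1,
\]

and almost surely unstable otherwise. Notice how much precision it takes just to state the result: "diversity" has become species number, connectance, and interaction-strength variance, and "stability" has become local asymptotic stability of a linearized, randomly assembled system.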
A further thought: it's important to remember that "how could this theory be tested?" is not the same question as "how could this theory be tested in the sorts of systems with which I'm familiar, using the sorts of approaches with which I'm familiar?" No idea how often those two questions get mixed up in the minds of empirical ecologists reading theory, but I'm sure it happens, and I suspect it's not rare.
For instance, see this recent post discussing Marquet et al. 2014: https://dynamicecology.wordpress.com/2014/10/29/marquet-et-al-on-theory-in-ecology/ Marquet et al.'s author list is a bunch of top ecologists, very sharp, well-read people. They complain that R* theory is hard to test because you have to measure 3 parameters per species to test it. Which is flat-out wrong: I and others have tested it, in papers published in top journals, without having to measure 3 parameters per species. I suspect that the reason they were so wrong on this was just a failure of imagination. Smart and well-read as all those folks are, none of them actually works on R* theory, so they just couldn't imagine how it could be testable using approaches different from the ones with which they're familiar.
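To make that concrete (in generic notation of my own choosing, not Marquet et al.'s): for species competing for a single limiting resource R, species i grows roughly as

\[
\frac{1}{N_i}\frac{dN_i}{dt} = f_i(R) - m_i,
\qquad
f_i(R_i^{*}) = m_i,
\]

so R*_i is just the resource level at which species i's growth balances its losses, and the theory predicts that the species with the lowest R*_i excludes the others. One way to test that is to grow each species alone, measure the resource level it draws the resource down to, and then check whether the species with the lowest measured R* wins in competition; there's no need to estimate f_i, m_i, and the resource supply dynamics separately.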
We all have limited imaginations. Until reading Brian's post on the science being done in Biosphere 2 (https://dynamicecology.wordpress.com/2014/03/11/in-praise-of-a-novel-risky-prediction-biosphere-2/), I wouldn't have imagined that Biosphere 2 could be used for anything scientifically interesting, except maybe as a giant controlled environment chamber in which you could house mesocosm experiments. A failure of imagination on my part.
So yeah, I do worry that when ecologists write off some piece of theory as “not even conceivably testable”, they too often are just exhibiting a failure of imagination. Though I freely admit I have *no* idea how common that is.
The apocalypse is nigh. I'm merely going on some BES advice to take the day before off.
I guess I’ll know tomorrow whether that was a good call on my part!
I suspect I’m guilty (as a raving empiricist) of being utilitarian about theory. Naturally, testability is of interest to me, so I’ve never considered the merits of theory as a cognitive tool for challenging intuition. Makes sense.
You could view that as a positive aspect of empirical/theoretical back-and-forthing. But perhaps there are also examples where unrealistic theory and ill-conceived experiments reinforce one another, and this might be where reasonably mechanistic models (= realistic, in my mind) become increasingly valuable, and perhaps also why there's a perception that the journals prefer a bit of mechanistic grounding.
To inject something tangentially related to this discussion (caveat: it's from a field I know next to nothing about), I've enjoyed (ish) reading this book on cosmology:
http://www.cambridge.org/gb/academic/subjects/physics/history-philosophy-and-foundations-physics/singular-universe-and-reality-time-proposal-natural-philosophy
Podcast that led me to the book:
http://www.theguardian.com/science/audio/2014/dec/08/cosmology-robert-mangabeira-unger-universe-time
Amongst other things, the authors claim that cosmology is in crisis because much of the contemporary theory rests on edifice upon edifice of mathematical assumptions, which make strange and, crucially, untestable predictions about the reality of time and space. In the authors' view, this is no longer science but an abstract exercise in maths.
Drawing on your example of May and diversity-stability, the difference here is that the predictions aren't strange per se (in fact, perhaps the notion that diversity should confer stability is the stranger of the two). It seems inevitable that even the most abstract theory in ecology should have some connection to a reality of sorts. In the example of three sexes, the third sex is simply a tool to illustrate why the two-sex model prevails, and in this sense it's strongly connected to reality.
Back when I was a grad student at Rutgers, the profs used to scare the students with the story of a grad student who’d turned in a hastily-written thesis and then took the whole *month* before his defense off to go hiking in Tibet. He thought the defense was a formality. He was incorrect. But I think your approach of preparing well (I presume!) and then taking the day before off sounds eminently sensible. Good luck!
Re: unrealistic theory and ill-conceived experiments reinforcing one another: it can happen. But in my experience, it happens in ecology when the theory in question is mistakenly *thought* to be realistic (at least realistic enough to be testable). And the cure for it, or a cure for it, is new theory that demonstrates the unrealism of the previous theory. Which the new theory can do without itself being realistic. That’s what happened in the case of the intermediate disturbance hypothesis, for instance–see Fox 2013 Trends in Ecology and Evolution (which draws heavily on Chesson and Huntly 1997). Or search our archives for my various posts on the “zombie idea” of the IDH.
Re: the cognitive role of theory for challenging pre-theoretical intuitions, I think that's a hugely important and underrated role of theory. And so I worry about the costs to ecology if too many ecologists are over-quick to dismiss the value of "unrealistic" theory. The gains to ecology from ecologists ignoring stuff that's only of interest (if at all) to mathematicians might be outweighed by the costs of mistakenly ignoring stuff that we ought to pay attention to because it corrects our intuitions or serves some other cognitive benefit besides "providing testable hypotheses".
A couple more old posts that are relevant here:
https://dynamicecology.wordpress.com/2013/05/02/false-models-are-useful-because-theyre-false/
https://dynamicecology.wordpress.com/2013/10/10/on-the-value-of-simple-limiting-cases-lotka-volterra-models-and-trolley-problems/
EDIT: And this one from Just Simple Enough, listing the many uses of theory besides “generate testable hypotheses”: https://theartofmodelling.wordpress.com/2012/03/08/making-the-list-checking-it-twice/
While I generally agree with your view and conclusions here, I want to offer a suggestion for future polls. To get results that are even more representative of the community of ecologists, I think you should consider posting the polls with just a short background, without going too deeply into the issue. Priming is a well-known phenomenon in polling and psychology, and the background given to this particular poll certainly had these problems (e.g. "Which seems problematic, at least to me.* I think leading general ecology journals should seek to publish the best work that ecologists do, including theoretical work." and "And think of all the important theoretical papers that rightly have had a big influence on all of ecology without being 'realistic'").
I realize that your point is not to conduct a scientific study, and there are many other sampling issues as well. However, I really find your polling of ecologists on different topics interesting, and would therefore like the results to be as good as possible.
Fair points Tobias. I'm certainly well aware of priming effects. We've done polls both ways in the past: with little preamble, and like this one, where the post author makes his own views clear up front. And FWIW, we do sometimes get responses where the majority disagree with the post author (even though, as a group, people who read this blog with any regularity are likely to agree with Brian, Meg, and me on lots of stuff). And as you note, either way the poll won't be a proper scientific poll. But your point is well taken; I'll keep it in mind next time I'm planning a poll.
Jeremy, I really like your writing on this topic, and also all the great comments. Regarding what you say above about the lack of creativity on the readers' side: I wonder whether the solution shouldn't be for the people writing the paper to guide the reader in that. I for one always really like it when papers end with potential findings that would refute the theory. I can't really think of a theory that doesn't do this that I nonetheless appreciate… not to say it's impossible, though…
Re: 2 sexes: many fungi have more than 2 mating types. Though mating types aren't quite the same as sexes, they might offer a nice way to test the costs and benefits of increasing or decreasing the number of sexes.
Thanks Erik!
Some theory papers do guide readers as to how the theory might be tested. But I don’t think lack of that sort of broad guidance is what hinders theory testing. Again, Marquet et al. didn’t recognize how you can test R* theory just by measuring R* values, even though I and others had published empirical papers in leading journals doing just that.
Quite interesting that in behavioural ecology/animal behaviour a rather contrary call (more data! fewer concepts and less theory!) has been made very recently:
DiRienzo & Montiglio:
Four ways in which data-free papers on animal personality fail to be impactful
http://journal.frontiersin.org/article/10.3389/fevo.2015.00023/full
cheers
Gregor
Interesting. It’s a complaint about a specific sort of theorizing, of course–development of “conceptual frameworks”, or “syntheses” of previous conceptual frameworks. Which is different than developing a testable mathematical model, at least to my mind.
It’s my anecdotal impression that that sort of conceptual framework development is increasingly prominent in ecology. I think sometimes it can be hugely valuable, especially when it’s done via formal mathematics. I’m thinking for instance of Peter Chesson’s framework for coexistence theory in community ecology. But yeah, when it’s just a verbal framework, often it seems that it’s just the authors arguing for their preferred way of looking at the world. It’s almost a sort of advertising. And even when it’s not, I think its value gets overrated sometimes–people mistake a conceptual framework for a testable hypothesis. I think that’s what happened with the “metacommunity” framework of Leibold et al. 2004.
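One widely cited distillation of that framework, written in the notation I've seen most often (so take the symbols as illustrative rather than canonical), is the two-species coexistence condition

\[
\rho < \frac{\kappa_1}{\kappa_2} < \frac{1}{\rho},
\]

where \(\rho\) measures niche overlap and \(\kappa_1/\kappa_2\) is the ratio of the species' average fitnesses: both species can invade when rare only if their niche difference (small \(\rho\)) is big enough to overcome their fitness difference. That's the sort of conceptual framework that earns its keep, precisely because the formal mathematics forces the terms to be defined.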