People have been bugging me to blog about this, and “Give the people what they want” is my motto*, so here goes.
Writing in a forthcoming PNAS article, Fawcett and Higginson report the results of a citation analysis showing that, the fewer equations included in the main text of an ecology or evolution paper, the less likely it is to be cited by the most rigorous papers, although the more likely it is to be cited in the least-rigorous papers. Tellingly, equations in appendices have no effect on citation rates in the least-rigorous papers, indicating that many readers will happily ignore equations when given the least excuse to do so. Fawcett and Higginson take this to indicate a worrying preference for imprecise verbal arguments and handwaving on the part of too many ecologists and evolutionary biologists. They suggest that equations always be presented in the main text so as to force readers to pay attention to the core content of the paper.
Well, that’s what they could’ve said, but they didn’t.
Actually, what they said was that, the more equations included in the main text of an ecology or evolution paper, the less likely it is to be cited in non-theoretical papers, but the more likely it is to be cited in theoretical papers. The former effect outweighs the latter, presumably because non-theoretical papers outnumber theoretical ones, so the more equations in the main text, the less a paper is cited. Equations in appendices don’t affect citation rates. Fawcett and Higginson suggest that theoretical papers need to place more emphasis on words explaining the math, and less emphasis on showing the math itself, in order to increase their impact with non-theoreticians. They also suggest more mathematical training in graduate school, but note that this is a long-term proposal that’s unlikely to happen, for all sorts of reasons.
I actually think Fawcett and Higginson’s paper is fine as far as it goes; I just don’t think it gets at the ultimate issue. Their basic empirical results are hardly surprising, of course. Theoretical work in any area of biology is at least partly, and often almost entirely, its own subfield. That’s why theory journals exist. Theoretical papers have equations in the main text because they’re aimed at theoreticians. Such papers can’t bury all the equations in appendices any more than empirical papers can bury the methods, statistical analyses, and results in appendices. Theoreticians who write equation-heavy papers are not poor communicators (at least not necessarily), they’re good communicators for the audience they’re trying to communicate with. (Plus, even theory papers typically do bury a lot of the equations in appendices—it’s just that, even after doing that, they’re still left with numerous equations in the main text!)
And if you say, “So how come theoreticians write for that narrow audience, instead of making more effort to communicate with the non-theoreticians who want to test their theories?”, my response is, “How come non-theoreticians don’t learn more math?” Understanding is a two-way street. And understanding, not citation, or “impact”, or even “communication”, is the ultimate issue here. Yes, absolutely, theoreticians vary in how good they are at helping readers understand their work.** But exactly the same thing is true of non-theoreticians! Any scientific paper, theoretical or not, necessarily assumes a lot of background knowledge on the part of the reader. Scientific papers are written for people with Ph.D.’s, not undergraduates. Many Ph.D. ecologists’ and evolutionary biologists’ last math class was a now-forgotten first-year undergraduate calculus course. Why should theoreticians be under a special obligation to write their papers at that level? Why are non-theoreticians not under a similar obligation to write their papers for an audience whose last courses in natural history, field methods, zoology, botany, statistics, etc. were early in their undergraduate careers?
Far more important than the issue of who cites what, as far as I’m concerned, is who understands what. The sad history of early attempts to “test” MacArthur’s competition models is a case in point. MacArthur’s papers were fairly light on equations (as far as I recall…), and in any case were hugely influential with non-theoreticians, right down to the present day. Infamously, they were also widely misunderstood by non-theoreticians, and I doubt that was because MacArthur was a particularly bad explainer. Further, getting non-mathematical readers to understand math is not as simple as burying the math in appendices so that you can fit more words into the main text. As Caroline Tucker at The EEB and Flow notes, all that does is give readers an excuse to ignore the math entirely. Non-theoreticians may well cite such papers more often—but do they understand such theoretical papers any better than papers with more equations in the main text? Or do they merely think they understand such papers better?
I admit that I’ve become more pessimistic over time on the possibility of really bridging the gap in understanding between theoreticians and non-theoreticians. That’s in part because of my own increasingly-lengthy experience trying to explain what’s wrong with the intermediate disturbance hypothesis to non-theoreticians reading my blogs. It’s been really, really difficult, and not because my silly zombie jokes and other attention-grabbing rhetoric have gotten in the way. Nor has it been because I’ve used equations, because I haven’t used any. And I’ve still ended up with massive comment threads packed with questions from smart but non-theoretical readers who’ve misunderstood things. Now, part of the difficulty is because I’m trying to unteach an established idea, rather than teach something about which readers have no previous knowledge. But I don’t think that’s all of it, or perhaps even most of it. I think it comes down to non-theoretical readers just having a hard time “getting” how even very simple mathematical models work.
And as my own extensive experience teaching introductory mathematical modeling to ecology undergrads has taught me, the only way most non-mathematicians can really learn to “get” mathematical models is by actually doing math. Figuring out for themselves the equations that correspond to some specified biology. Plotting the shapes of the functions that comprise the model. Actually doing the algebra to solve for the equilibria. Solving for the isoclines and plotting them. Coding up the model and simulating it for different parameter values and initial conditions. Etc. Non-theoreticians routinely scoff at the notion that anyone without extensive field experience can ever really “get” how wild nature works. Well, if you don’t have extensive “field experience” with math, why would you ever think that you could “get” mathematical models just by reading words?
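To make the list of exercises above concrete, here is a minimal sketch (my own illustration, not from any paper discussed here) of the last step: coding up a simple model and simulating it for different parameter values. It uses the familiar logistic growth model and plain Euler steps.

```python
# A minimal sketch of "coding up the model and simulating it": Euler
# simulation of logistic growth, dN/dt = r*N*(1 - N/K), run for different
# parameter values. Parameter choices below are arbitrary illustrations.

def simulate_logistic(r, K, N0, dt=0.01, steps=5000):
    """Simulate dN/dt = r*N*(1 - N/K) by Euler steps; return final density."""
    N = N0
    for _ in range(steps):
        N += dt * r * N * (1 - N / K)
    return N

# Doing the algebra gives the nontrivial equilibrium N* = K, and the
# simulation confirms it: density approaches carrying capacity.
for r, K in [(0.5, 100.0), (1.0, 50.0)]:
    N_final = simulate_logistic(r, K, N0=5.0)
    print(round(N_final, 2))  # approaches K in each case
```

The point of the exercise is not the answer (any ecologist can recite "logistic growth approaches K") but the doing: varying r, K, and N0 and watching the trajectories is the mathematical analogue of field experience.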
I emphasize that I don’t think everyone should learn more math and less about everything else. I’m not privileging mathematical over non-mathematical knowledge here, and I’m not criticizing my many non-theoretical colleagues for not knowing math. I’m just saying that, if few people cite mathy papers, that’s mostly not because of how those papers are written. It’s mostly because few people understand math. Whether and how that could be changed, and what the costs as well as benefits would be, is a whole ‘nother conversation.
UPDATE: Fawcett and Higginson conclude their paper with the famous line from Stephen Hawking about how he only included one equation in A Brief History of Time because he was told that every equation would reduce sales by 50%. Which seems like a poor example to reinforce the message that words can substitute for math. Yes, the book became a bestseller—one infamous for not actually being read or understood by the vast majority of people who bought it.
UPDATE #2: As noted in the comments, I could’ve just linked to Florian Hartig’s excellent discussion, much of which I unwittingly recapitulated. And Tim Fawcett himself pops up in the comments on Florian’s post.
*”…so long as what they want is zombie jokes” is the second half of my motto.
**Steve Ellner, perhaps the best theory-explainer in all of ecology and evolution, has good advice on how to write theoretical papers that non-theoreticians will cite. Notably, his advice is not simply “bury all the equations in appendices”.
“equations always be presented in the main text so as to force readers to pay attention to the core content of the paper”
“Content” is really the key word in all of this. A major reason to deal with models in the first place is to gain the ability to speak in specific and clear terms about the players and processes in a given natural system (in other words, the content of those systems). Non-theoreticians run around with models in their heads all the time. Those models just happen to be more likely to be poorly defined, to deteriorate over time, and to mislead.
You’re absolutely right.
Great food for thought, Jeremy.
There is a pernicious trend (in my opinion) in many journals, including the most “high-impact”, to relegate most, if not all, methodological details to the end of the paper, use smaller typeface, or insist on presenting them in the supplementary material. This is something I think you’ve mentioned before, and it bugs the crap out of me. The Methods are the most important part of any study. Without understanding those, the results just can’t be correctly interpreted.
It’s interesting how often this sentence can be turned on its head. How often can empiricists actually “get” how wild nature works, in the absence of theory? I know of at least one case where a very, very experienced field ecologist’s (intuitive) ideas were totally unsupported by pretty high quality field data and statistical analyses (carried out by his son). Still makes me smile, but this is one of the most fun parts of science for me – finding out that our ideas are wrong and need to be corrected.
I’ve had Steve Ellner’s writing advice stuck up on my wall, just beside my computer screen, for the last few years. It’s excellent, along with George Orwell’s Politics and the English Language (see his 6 rules near the end). I try to think about the points they make as often as possible when preparing a manuscript, but they still fail to make sufficient impact on my writing 😦
You’re not alone in finding the trend towards “bury the details in online appendices” to be pernicious. I know Bob Holt for one feels the same way. If it’s something readers need to know to understand the paper and know if its conclusions are justified, it belongs in the main text. If not, then it doesn’t belong in the paper at all. Ok, there are exceptions; Steve Ellner articulates some of them. But that should be the default stance, in my view.
One effect of this trend, I think, has been to decrease the difference between Nature and Science papers, and “ordinary” papers. Time was when Nature and Science papers really were different–they were truly incisive, short but deep. Nowadays they’re just formatted differently, like extended abstracts of regular papers, with the paper itself buried in lengthy online supplements.
Re: intuition unaided by theory, you’re absolutely right. As the previous commenter said, “intuition” is often just another word for “model so vague and confused I can’t even describe it in any detail”.
Whoa! Back up a second here please.
You can put me at the very top of the list of those who think this shunting of critically important methodological detail into a couple paragraphs at the end of the paper, or into the SI, is a serious problem and a big mistake. Moreover, I posit that it is driven by the same mentality that favors glitzy “positive breakthrough” findings at the expense of those dreary papers that only gloomily point out the problems with existing studies. I mean, we must all always remember that “there are no problems here in science”! Indeed, we must repeat this daily to ourselves and our close associates! /sarcasm.
But the issue of how we “know” things is a much bigger, and largely different issue. I fully understand that numerous complex phenomena cannot be well understood–or logically demonstrated–outside of a theoretical model, typically one that is highly mathematical. I agree with this. I also understand that depending on “intuition” for system level understanding can lead to some very wrong ideas. But it’s also true that a poorly formulated (or coded) mathematical model will do that too, and it can be a helluva task to work through it to find out where that error is, assuming someone has even recognized that it exists in the first place.
How we know things is a deep issue, a human psychology issue. Do I have any doubt that many/most native peoples knew more ecology than I’ll know in a lifetime of dreaming? No I don’t. They knew it in a different way, and at a different spatio-temporal scale than I do, both in terms of factual information and ability to predict future events, and there were certainly issues that they couldn’t possibly address. But there are issues that a whole boatload of models don’t address well either, and it’s primarily, I would argue, because they don’t have the data they need at the right scales, to address those things. Field people can acquire an enormous volume of relevant data, especially when the subject involves many components, say community level and higher. They may have difficulty articulating it, or interpreting what it all means in a clear conceptual framework, but really isn’t that part of the process of how the human mind works? Conversely, the theoreticians may have the beautiful, “elegant” (God how I hate the use of that word in that context!) tidy description, but they may be leaving out enormous detail, and in fact they may have chosen their topic in the first place because they know they have no chance whatsoever of a theoretical description of something more complex. I mean, it is possible to get a little tired of “imagine a spherical cow” exercises.
We’re not going down the theoreticians vs empiricists food fight alley here again are we?
Your points are all well-taken, though I’m not sure about “knowing in a different way”. For instance, some might find it easier to grasp math if it’s illustrated graphically, which is the motivation for Ted Case’s Illustrated Guide to Theoretical Ecology textbook. But that’s learning in a different way, which still leads to the same endpoint. And I don’t know that I agree that this is about detail-oriented empiricists not wanting to assume spherical cows. My suggestion is that there are many ecologists who don’t “get” math, no matter whether it assumes spherical cows or cow-shaped cows. Put another way, the number of equations in a paper has little or nothing to do with the realism of the mathematical assumptions, and the placement of equations in a paper (main text vs. appendices) surely has nothing to do with the realism of the mathematical assumptions.
And if you have a lot of data but can’t interpret it or articulate a clear conceptual framework to understand it, doesn’t that mean you’re just confused? Ok, maybe confusion is inevitable if your data are complicated enough, math or no math. But it almost sounds like you want to treat confusion as a virtue, or at least the symptom of a virtue?
A couple of further thoughts Jim:
The challenges I’ve run into trying to explain how to think about disturbance effects on coexistence illustrate my point. The difficulty here is that many readers seem to struggle to grasp how a very simple hypothetical situation plays out. It’s not that they don’t like assuming a spherical cow, it’s that they struggle to grasp why a spherical cow casts a round shadow and not a square one.
If you’re right that the issues here are down to very different cognitive styles, it’s hard to see how to address them. Which doesn’t mean you’re wrong, of course.
“And if you have a lot of data but can’t interpret it or articulate a clear conceptual framework to understand it, doesn’t that mean you’re just confused? Ok, maybe confusion is inevitable if your data are complicated enough, math or no math. But it almost sounds like you want to treat confusion as a virtue, or at least the symptom of a virtue?”
No! I think that’s where we disagree primarily here. Articulation is not the same as understanding. Articulation involves language, understanding does not necessarily. It may, or it may not.
In science, mathematical models *are* the language used to articulate hypotheses and ideas. Understanding involves confronting those models with data.
Yeah no kidding. That wasn’t my point.
Jim, sincere apologies if I’m misrepresenting you here, but I have the feeling that you might draw a clear distinction between statistical and mathematical ecological models. I know that many ecologists do this.
Empiricists are often skilled users of statistical models, to try to understand the simple or complex data they have collected. But a statistical model is still, fundamentally, a mathematical model: y = ax + b + error.
Many statistical models have direct analogues in what are thought of as ecological models. Fitting a linear regression of density against per-capita growth rate from a sampled natural population is the same as fitting the logistic growth model to your data.
Having said that, to go out and collect a complex data set without a good a priori hypothesis (which should be defined clearly, verbally and mathematically) is very poor scientific practice. And to suggest that native peoples knew more ecology than you do is, with the greatest respect, revisionist horsecrap. Pocahontas wouldn’t have a fecking clue how to get quickly and efficiently around Safeway or download the latest paper from JSTOR to help write that next grant application – or other important aspects of current human ecology. And I’m pretty sure she didn’t know much about the island theory of biogeography, or (the death of the) intermediate disturbance hypothesis either. In fact, IIRC, she practically wiped out the majority of North American megafauna, all on her own!
I’m not saying anything like that Mike. Looks to me like the food fight has started in earnest. I was hoping it wouldn’t go down this road. Seriously, why does there have to be this animosity and mis-characterization between theoreticians and modelers vs empiricists? I thought we were beyond that.
My fullest apologies then, Jim. I personally don’t think there is a gap between the best empirical and theoretical (mathematical) work, and there are great synergies between them. And I don’t want to start, or even join in, the food fight. I genuinely wasn’t trying to mischaracterise you. Maybe you can expand on what you mean here:
What I take from it is that you’re saying some models aren’t good because data isn’t available to test (in/validate) them, but data can be collected. (Please correct me if that’s not so)
I’d disagree that models aren’t good just because they can’t immediately be tested – models can provide new hypotheses to test with natural data, based on current, limited understanding of a given question/system. They can even suggest what sort of data needs to be collected to be able to gain deeper understanding. I surely agree that large, complex data-sets can be collected, what I’m concerned about is that there’s no solid guiding question (either verbal or mathematical) behind exactly what sort of data is collected. Such data might be used to address other questions, but that’s sometimes a dangerous route to go down, leading to arbitrary (or deliberate) decisions about what data to include or leave out, which can have crucial effects on results and interpretations. This is true whether you feel your question is empirically or theoretically driven (even though I don’t think that’s a sensible division to draw).
I’m a bit lost with this part. Data have been collected (and possibly) analysed, that no one really understands, but that’s OK, because epistemology and psychology?
OK, thanks Mike, I appreciate that. I don’t have time for the detailed response needed right now, so let me get back to you later on it, after having a chance to look at the cited paper, which I just received. Am more than a little confused about the relationship between this post and that article.
“Am more than a little confused about the relationship between this post and that article.”
Oh, so now it’s MY fault you and Mike are arguing? 😉 Just kidding. As I tried to make clear in the post, but perhaps failed to do, I think the PNAS paper is basically fine, it’s just that it doesn’t really address what in my view are the underlying reasons for the results.
Here’s a summary of what I was trying to say in the post: Theoreticians who include more equations in the main text of their mss are mostly not poor communicators, they’re good communicators for the audience they’re trying to reach, namely other theoreticians. And while they might be cited more often, or more broadly, if they buried their equations in appendices, that wouldn’t be because such equation-free mss communicate better, it would simply reflect the fact that many non-theoreticians are math phobes. Indeed, there are severe limits to how well one can communicate Ph.D.-level mathematical models to readers who lack any background in mathematics and who don’t want to acquire one now. Rather than talking about how to get non-theoreticians to cite theory, we should be talking about the much more important, and more difficult, issue of how to get non-theoreticians to understand theory.
I know that. I’m saying there is a kind of understanding that goes beyond any kind of language, mathematical or verbal. A mathematical understanding is not fundamentally different from a verbal understanding–they both involve symbolic logic, which is a human construct.
Can you fit a “verbal model” to data? No.
Maybe of interest – I blogged about this paper a while ago with similar thoughts, and Tim Fawcett, who is the first author, replied
Thanks Florian! I admit I am late to the party on this.
I’m pretty sure I’m not going to get the time to respond to this post and the comments. I like the vast majority of what you put up Jeremy, but I most definitely do not like this one *at all*, and I will say that it is exactly this kind of attitude that leads to the animosity between theoreticians and empiricists. Basically what you seem to be saying is that theoreticians have a superior kind of “understanding” relative to those who are not theoreticians. And there are holes big enough to drive a truck through in that argument, both in terms of the thesis itself and in the assumptions embedded in the thesis, i.e. what it means to understand something. I urge you to go back and read what you wrote again.
Fair enough Jim. But I have re-read it (before you asked me to), and I stand by it. And for what it’s worth, I know at least one non-theoretician who agrees with me 100%, in addition to the theoreticians who’ve posted here. Does that mean I’m right? Absolutely not; you know I don’t believe in proof by authority or by common assent. But I do think it means that, if I am wrong, it’s not patently obvious.
I also mean what I said at the end: this is NOT about exalting math over non-math, or theoreticians over non-theoreticians. The issue my post (and Florian’s and Caroline’s posts) meant to address is much narrower: why are math-heavier papers less cited in the non-theoretical literature, and what underlying issue is that a symptom of? If you were to study rates and patterns of citation of data-heavy papers in the theoretical vs. non-theoretical literature, I would not be at all surprised if you found results symptomatic of some different underlying issue.
As for whether my post will lead to animosity, we’ll see. But that doesn’t mean my post is incorrect.
Well it’s already led to animosity, so we don’t need to guess on that. I mean, re-read your first paragraph. What I’m hearing you say here, along with those who agree with you, is that it’s OK to believe that theoretical knowledge is superior to empirical knowledge. Well, it *isn’t* OK. It’s arrogant, and it isn’t even correct to begin with.
Yes, the first paragraph is me doing my usual snark-to-get-the-reader’s-attention-and-make-the-post-entertaining-routine. But I don’t think it crosses any lines. In particular, the bits about rigor are not wrong. Conclusions deductively derived from precisely-stated premises, as in a typical theoretical paper, are indeed as rigorous as conclusions can be. That they are rigorous of course does not imply that they are true, or even useful, which may perhaps be part of the source of your unhappiness. I did not mean the word “rigor” to be a stand-in for all virtues that a paper might possess.
But there is a whole rest of the post, and my repeated comments to clarify, which I take it you find unhelpful. That many non-theoreticians struggle with math, and that this is ultimately why they do not cite math-heavy papers, and that this cannot be solved simply by writing theoretical papers differently, would, I’d have thought, be uncontroversial claims. Nor would I have thought that these claims could be confused with claims that theory is superior to data, or that theoreticians are superior to non-theoreticians. I’m not sure what more I can say to clarify. As always, I’ll revisit what I’ve written if I get negative feedback from others, or if you have new points to make. But as it stands, I don’t really have anything further to say that I think would advance the conversation.
Sorry Jim, I don’t like reaching an impasse with our best and most active commenter, but there it is.
You put up lots of good stuff Jeremy, as I said, but every now and then you lapse into the typical academic tendency towards flippancy and arrogance. Sorry, gotta say it, you’ve really irked me with this one. Don’t complain about empiricists bashing theoreticians if you’re going to bash empiricists.
One way to deal with the issues raised in the post is via collaborations between theoreticians and non-theoreticians. There’s a nice paper forthcoming in Ecology Letters with much practical advice on how to make such collaborations effective: