Should your paper anticipate potential criticisms?

Here’s something I often struggle with when writing my papers: should I anticipate and address potential criticisms? And if so, where–the methods, the discussion, or the online supplements?

In general, your paper should explain, motivate, and justify what you did, not merely describe what you did. That often means explaining why you didn’t do things some other way. On the other hand, you don’t want your paper to sound defensive. “The lady doth protest too much” and all that. Nor do you want to interrupt the logical narrative flow of your paper by raising and addressing possible objections.*

I struggle with this in part because I work in a system–microcosms–to which some ecologists have blanket objections. These objections are groundless**, but they crop up sufficiently often that I can anticipate them. And if you can anticipate how readers are likely to object to (or misunderstand) your work, shouldn’t you head off those objections and misunderstandings? If for no other reason than to prevent your paper from being rejected? Worst case scenario, the reviewers tell you to cut those passages as unnecessary.

If possible, frame your defense of your approach positively rather than negatively. For instance, I’m currently writing a paper on a microcosm experiment from which, unusually, we only collected presence/absence data rather than abundance data. I could defend this choice by explaining how presence/absence data are adequate for our purposes, even though they contain less information than abundance data. But I plan to use a more positive framing: collecting presence/absence data allowed us to run a larger experiment, with the full range of treatments needed to rigorously test our hypothesis. Both framings are equally correct, but the latter will sound much better to reviewers and readers (I hope!).

I also struggle with whether to justify my approach in the methods or discussion. Some people routinely include a subsection near the end of their discussion called “caveats” or something like that, in which they discuss the limitations of their approach. But others do it in the methods. And these days many people bury discussion of alternative approaches in the online supplementary material. In particular, alternative ways of doing one’s statistical analyses commonly get relegated to online supplements. And I routinely use online supplements myself when I’m writing about the Price equation, to address common misunderstandings of, and objections to, the approach. But I don’t think the trend towards increasingly-voluminous online supplements is a good thing on balance (see here, here, and here). Plus, it’s not always obvious what to put in your online supplements.

Of course, one way to avoid the entire issue is just to cite a paper that addresses any potential objections to your approach. I think this works best if there’s a recent review paper or other “standard reference” you can just refer readers to. Which is a problem, since you’re most likely to run into objections when you’re doing something new or otherwise non-standard. If it was already well-established that your approach was The Right Way, you wouldn’t need to justify it. More subtly, as a reviewer I tend to be a bit suspicious of authors who justify non-obvious methods this way, because sometimes I follow up their citations and find that those citations don’t actually address my objections (or sometimes, even say what the citing authors claimed!)

The other reason I struggle with this issue is perhaps specific to me. When reading a paper, I often find myself wondering about how the same broad topic could’ve been addressed in some totally different way. For instance, if you tell me that you used local-regional richness relationships or a phylogenetic approach to try to infer if interspecific competition affects local species composition, my first question will be “Why didn’t you just do the sort of manipulative experiment Shurin (2000) did to answer that question directly, rather than making an inference based on a bunch of dodgy assumptions?” As another example, if you tell me that approach X wasn’t tractable in your system, my first question will be “Why didn’t you work in a model system instead, so that you could use approach X?” Not because I think there’s a single “right” or “best” way to address any given question–I don’t. But just because I attach a lot of importance to having good reasons for what one does, and to spelling those reasons out (see here and here). That’s the way my mind works, and so I tend to assume that other people’s minds work the same way. Perhaps causing me to worry more than I should about justifying my own methodological choices.

Do you try to anticipate and address potential objections to your work? If so, how do you do it?

*Unless of course the narrative for your paper is something like “here’s a problem that other approaches either have, or cannot solve, that my approach solves.”

**The next even semi-defensible blanket objection to microcosms that I encounter will be the first.

14 thoughts on “Should your paper anticipate potential criticisms?”

  1. I do think it’s worth anticipating likely criticism, but I agree strongly with you that one wants to do so positively, not defensively. Hence I think it often belongs in Methods, along the lines you suggest. For some reason we teach undergraduates to write Discussion sections that are heavily laden with long lists of excuses for why things didn’t “work out”. Then when we start writing papers instead of lab reports, this transmogrifies into a long list of reasons why things that don’t seem right – or that seem not to have “worked out” – actually are/did. But after a reader has decided that X was inappropriate or inconclusive or whatever is not a very effective time to correct that impression!

    • I was hoping you’d comment and tell me what to do, Stephen–thanks! Although I don’t think my own uncertainty on this traces back to my undergrad training.

      “But after a reader has decided that X was inappropriate or inconclusive or whatever is not a very effective time to correct that impression!”

      That’s a good way to put it.

      • Ha! I write one book and people think I’ll “tell them what to do” 🙂

        I’m looking forward to the rest of the comments thread on this one. What do other folks do?

      • Indeed, having written a book should make it less likely that you’ll tell people what to do–because you can just refer them to your book! 🙂

  2. Great question. Students/postdocs and I discuss this for many manuscripts, and I think of it as part of the “art” of scientific writing (one of many elements with no black-and-white answer). I think there are several related motivations for doing this, and for doing it as early as possible in the paper:

    (a) Communicate that you understand the limitations of what you’ve done (reviewer not impressed if they have no evidence that you even understand this).
    (b) Convince reviewers/readers that these limitations are not fatal.
    (c) Prevent reviewers from being disappointed (which they might be if you only bring it up very late in the paper). Feeling let down does not increase the odds of acceptance.

    Of course you can’t have a long laundry list of limitations/caveats like in an undergrad lab report. If from experience, you know many people are bugged by something, or always ask about it after a seminar, or it’s a pretty major issue, head it off at the pass, so to speak. We don’t worry too much about minor things.

    A related approach is just to articulate conclusions in a way that is true to the data and system (i.e., don’t overstate a claim, another art since we want our claims to be as “big” as possible) – this is essentially an implicit rather than explicit recognition of limitations.

    • I think Mark has got the gist of it. You do NOT want to do what I see so many graduate students do – spend most of the discussion on a long list of why their work is flawed.

      You do want to subtly and positively acknowledge and argue away the major issues. Signalling awareness helps a lot with reviewers and gives you a chance to slip in your counterarguments.

      But many people seem to have a hard time figuring out how to do this subtly and positively, and end up with a discussion section that reads like “top 10 reasons my paper sucks”.

  3. I’m now trying to think of people who I think are particularly good at heading off potential objections.

    Graham Bell is one. He works in microcosms, and with very simple theoretical models, so his work is open to various obvious objections. But he’s great at framing the question, and his way of addressing it, in such a way as to totally defuse or sidestep any objections, but without ever sounding the least bit defensive.

  4. If you can do it, a great way to respond to a potential objection is to show how it actually strengthens your point. For instance, when doing modeling work, you’ll often run into the objection “but what about the effects of factor X, which your model omits?” If including factor X would actually strengthen your conclusions, then you might want to anticipate the objection and bring it up in your paper.

    • Yes – per my comment above, if one is having trouble getting the nuance right, my recommendation as you suggest would be to go look at papers that do this well and see how they do it.

  5. Yes to an extent. If my coauthors are wondering why we didn’t do it another way (and as a “trainee,” there’s always been at least one co-author who wasn’t deep into analyses), then I think I ought to justify our approach in the paper, because almost certainly a reviewer will ask.

    And if I’m doing anything with citizen science, I always need to do extra justification, because the bar for citizen science data is higher.

    Otherwise, I try to gauge whether a criticism would cause a reviewer to “mark down” the whole paper. If it’s unlikely to move a “major revisions” to “reject” in someone’s mind, I leave it out and see if anyone comments.

    My biggest frustration is when, doing something new methodologically (which I seem to be particularly fond of doing), I get a reviewer who is not particularly statistically savvy suggesting I analyze things in a way that is clearly a no-go. I imagine it must be similar to how you feel about the Price equation. (i.e. “my job as an author is not to educate you on an entire field of mathematics in this one manuscript.”) That said, I love it when I get reviewers who are *more* stats savvy than me and point out flaws or oversights in my approach. Then I learn stuff and my paper gets better and everyone wins.

    • Re: the Price equation, I actually don’t get frustrated with reviewers not getting it, or asking that I do something inappropriate or nonsensical with it. I expect to have to explain it. It’s not like basic statistics, with which you’re entitled to expect a certain level of expertise on the part of every reviewer and reader. And it’s easy for me to explain the Price equation because I’ve done it before. I just lightly edit the explanatory blurb I’ve already got and stick it in an appendix. Indeed, I’m a bit proud to have gotten reasonably good at explaining the Price equation, and I like it when the lightbulb goes on in a reviewer’s head and they “get” it.

      As an aside, I’ve now been involved in various grant proposals that have made use of the Price equation, and in which there’s usually not much room for explanation. But it’s never been a problem because most grant reviewers just seem to trust that I know what I’m doing when it comes to the Price equation.

  6. The other key advice here is that there’s no substitute for actually knowing what you’re doing and why you’re doing it. You can’t put lipstick on a pig, and you can’t address potential objections if you don’t actually have a good answer to them. It’s the same with answering tough questions after talks: there’s no substitute for actually having a good answer.

  7. Great post and tips on a very common issue.
    I think this question doesn’t just come up in papers using novel methodology, but also when using a diversity of methods. If you’re using 2 very different methodologies (eg if you’re parameterizing a population model using results from an intricate mark-recapture analysis), then there’s a wider array of possible caveats to address (eg, you may get a reviewer specializing in your study system and one reviewer who’s into mark-recapture methods, but no one who works with population models).

    The other issue is that if you address a bunch of possible caveats, your paper is approaching a length limit (or is just getting too cluttered), and the reviews ask you to address a number of other concerns, how do you decide which of the previous methodological explanations/justifications are important to keep, and which ones were of no concern and can safely be deleted or appendicized?
