Note from Jeremy: This is a guest post from Peter Adler.
We’ve all been outraged by the nit-picky, irrelevant, or downright wrong-headed criticisms that show up in proposal reviews. How could they have rejected the proposal because they didn’t like the number of replicates? Or the details of the statistical model? Or because you didn’t cite the right references? Or even worse, because they didn’t read closely enough to see that you had cited those references? The answer may be that they didn’t reject your proposal for those stupid reasons. They rejected it because they just weren’t that into it.
While some reviewers are honest and self-aware enough to say that, and perceptive enough to explain why, most reviewers want to back up a negative review with a long list of specific, seemingly objective, criticisms. Sometimes they may not even be fully aware that they are just not that into your proposal. I know that if I’m simply not excited by the core ideas of a proposal I am reviewing, I am more likely to get annoyed by those stupid details. In contrast, if the ideas really grab me, I will be very forgiving of the details.
In order to revise and improve your rejected proposal, it is critical to recognize when the reviewers just weren’t that into your original submission. When that is the case, you need to ignore the details of their review and focus all your energy on making the ideas and the pitch more compelling. Altering details of the experimental design or citing additional references won’t make a difference the next time around. In fact, spending your time chasing after those details means less time and effort addressing the fundamental issue: how to better communicate the novelty, insight, and importance of the work.
The challenge is distinguishing the “just not into it” criticisms from legitimate, well-justified concerns. This is easiest when you know the reviewers are making a valid point. Right now I am conducting some pilot experiments because my last NSF proposal got hammered for not having enough (ok, any) preliminary data. While demands for preliminary data can be a “just not into it” complaint, we were proposing ambitious and expensive work in a new system, and the reviewers’ reaction was understandable. Recognizing the valid criticisms is also easy when many reviewers converge on the same point. Beyond that, it gets trickier. If a reviewer gives strong positive feedback on the conceptual goals and shows real understanding of the problem you are tackling, that may indicate that they “get it” and you should take to heart any accompanying negative feedback.
Could this same advice apply to interpreting manuscript reviews as well? My gut reaction is NO, but I’m not sure how well I can articulate why I feel that way. A practical reason is that revised manuscripts often will be returned to the original reviewers. Obviously, ignoring their comments is a bad strategy. But I think the main reason I see manuscript reviews so differently is that reviewers don’t have to be “so into it” to give a paper a positive review. Papers don’t require reviewers to exercise much imagination: the data has been collected, the analyses are complete, and the story, for better or worse, has an ending. It’s a relatively objective situation for a reviewer. The uncertainty and open-ended nature of proposals makes reviewing them a much different experience. To get funded, you must convince reviewers that your uncollected, unanalyzed data will lead to a story with a better ending than your competitors’ unfinished stories. You are asking the reviewer to imagine a rosy future based on limited information and then commit to your proposal over the others. As the word “proposal” implies, it is a form of courtship, which is why pop culture relationship advice is relevant.
p.s. First, apologies to author Greg Behrendt. Second, if I’m making any implicit claim to authority on proposal writing, it comes much more from my experience as a reviewer and panelist than from my very mixed record of success in getting proposals funded.