A while back we invited you to ask us anything. Here’s our answer to the next question, from Nicole Knight. We finished answering it and put it in the queue yesterday, only to realize this was spectacularly poor timing. The fox knows many things – but not when NSF deadlines are.
What are the most common mistakes new (or not so new) scientists make when they start writing grant proposals?
Jeremy: I’m a bad person to ask because I’ve never served on a grant evaluation panel (I’d like to!). So I haven’t seen enough grants, or enough of how other reviewers react to the grants I have seen.
Mike Kaspari has excellent advice here for NSF preproposals. Much of it generalizes to other project-based grants. Based on Mike’s experience, a common mistake is writing for specialists in your topic, rather than making your case to a broad audience. This mistake is manifested in various ways: overuse of jargon, lack of a clear broad question of general interest, lack of hypotheses…
Note that there’s a difference between a question and a pseudo-question, or a hypothesis and a pseudo-hypothesis. For instance, “X will affect Y” is a pseudo-hypothesis. That’s just taking a statistical alternative hypothesis and dressing it up as a scientific hypothesis.
For NSERC Discovery Grants, I and several folks I know have been dinged for proposing too many projects that address too wide a range of topics and are described in insufficient detail. Reviewers often don’t fully trust you to know what you’re doing unless you’ve published a lot of the same sort of work, even though NSERC Discovery Grants are supposed to describe research programs and so necessarily sacrifice methodological detail about specific projects. Also, people like me who have multiple independent lines of research often struggle to present them as a single integrated research program. In future, I’m probably going to focus my NSERC Discovery Grant applications on two of my main lines of research, rather than three as I’ve done in the past. After all, once you get the money you’re still free to spend it on whatever research you want, so a narrower Discovery Grant needn’t imply a narrower research program. And the amount of money you receive has nothing to do with the number or range of topics on which you propose to work (cost-of-research adjustments excepted, but they’re irrelevant for most people). Anecdata, of course, so your mileage may vary.
Brian: Single biggest mistake: not reading the call and what they’re looking for! The generic NSF calls (e.g. population and community ecology) aren’t super specific, but reading the fine details on broader impacts etc. is important. For more targeted calls from USDA and NASA, as well as targeted NSF calls (e.g. ABI, Dimensions of Biodiversity, OPUS, etc.), whether the proposal is responsive to the call (i.e. does all the things asked for) is a very common stumbling point. If they say you need at least three sites, you need at least three sites. If they say you need a scientific question and a technical approach, you darn well better do both well and not mail one in. If this is a networking or collaborative grant, don’t throw together the social aspects of the proposal at the last minute. Get people on board months ahead and get letters of support well in advance.
There are some super obvious things that people fail on frequently. Have a clearly stated scientific question, and make sure it’s tractable. “Overly ambitious” is a common critique I’ve gotten on my proposals. Relatedly, don’t try to answer too many questions, if only because you won’t have enough space to describe them all, even if answering them is logistically tractable. And I think it is probably hard to have too much “preliminary” data (having none or little is a mistake).
Beyond that, I’m a little cynical. I’ve seen proposals that are carefully crafted like an art piece practically force the panel to award them, but I’ve seen plenty that were clearly last minute do well. I’ve seen ones that are very specific in their methods do well, and ones that are very vague do well (although I increasingly think the latter is the better strategy). People look at me like I’ve got a horn growing out of my forehead, but I sincerely believe the US would do better to move to a lottery (or to switch to a system that is more focused on evaluating researchers rather than projects and that gives more, smaller awards). Pretty much the only real mistake is not submitting and taking a chance.
Meg:
(I will preface my comments by saying that this is oriented towards NSF proposals, especially the core programs, as I have the most experience reviewing those.)
The most common way I’ve seen proposals fail is that they don’t set up a general question that is broadly interesting. This leads to statements like “lacking a strong conceptual basis”, which tends to sink a proposal. Even if the work is largely system-oriented (mine is), you need to lead off with the general, big picture question you’re addressing and need to explain how what you learn will generalize to other systems.
Regarding broader impacts: they are taken seriously by reviewers (not all of them, but many) and by program officers. Good broader impacts won’t get you a grant, but I’ve seen weak broader impacts move a proposal that was on the edge down to a lower category. One thing that is important with broader impacts is showing that you can actually do them. Simply saying something like “We will recruit students from underrepresented groups to work on this project”, without showing a track record of doing so (or at least having a concrete, promising plan for doing so), isn’t compelling. In addition, to be compelling, they have to be something beyond what is a standard part of your job. Training graduate students is a broader impact, but that on its own isn’t sufficient. Saying that you will present results at a meeting or in publications also isn’t sufficient. In my opinion, if you will be training grad students on a grant, that is worth mentioning in the broader impacts (but you need to have other things, too); saying you’ll publish/present results in a traditional manner (journal articles, presentations at scientific meetings) is probably not worth the space it takes. The exception is if you will bring undergrads to meetings and/or have publications with undergraduate coauthors. That is not typical, and generally is well received as a broader impact, so is worth mentioning, in my opinion.
Finally, for early career researchers, I think there’s often a notion that CAREER grants are easier to get. I was told by a program officer once that the data show that is not true. (It would be great if DEBrief did a post on this!) It was described to me this way: there are two ways to fail on a CAREER proposal, because if either the science or the education component is not strong, the proposal will not be funded. (Advice on the education component of CAREER proposals is here.)
Many folks see grant writing as an episodic, onerous task, starting about a month before a deadline. Pre-proposal deadlines in DEB that coincide with the end of Xmas break make the task seem even more Grinchian. One change in approach, both healthy and useful, is to treat all academic writing – grant proposals, paper writing, field notes, notes from your reading, research planning, significant emails – as more of a seamless whole that you dip into to help assemble whatever task-oriented writing is on your plate in a given day.
This motivates one to think of a grant proposal as a pitch piece assembled from notes, diagrams, and paragraphs from ongoing introductions that you have been accumulating as a normal part of your scientific workflow, not something that has to spring from your brow, wet and new, over every Xmas holiday. That last burst of preparation is your opportunity to think clearly and expansively about the project that has been essentially “self-assembling” in the back of your mind over the course of the year.
Writing a proposal, then, is an exercise to clarify your thinking about something you care about, an opportunity to fantasize about work you could do (some piece of which you may be able to carve off and do with or without funding), as well as a lottery ticket for significant funding. It is also, like any component of your scientific writing, a piece of good prose that can be catabolized and repurposed for other proposals, paper introductions and methods, reviews for your students, etc.
Brian wins the award for the best advice: make it a lottery, since it already is. Which is why most proposal advice is wrong – proposal reviewers rarely agree on anything, and following their recommendations usually gets you dinged by the next round of reviewers, who seemingly would have preferred the original submission. It is maddening.
Although Meg is spot on about the need for general, basic science questions.
Finally, despite what outreach mavens on Twitter will tell you, the broader impacts barely matter at all (except for a CAREER). Just be sure to include $ in your budget for your outreach swill.
Serious question: if you were going to make it a lottery, what would be the optimal design?
I suggest you want to limit entry; only proposals that are sufficiently good get tickets to the lottery. I don’t think you’d need a preproposal stage too (though whether you’d want the proposals that are evaluated for lottery entry to be 15 pages or 5 pages or what is a separate question…).
Should it be a weighted lottery, so that proposals that were better evaluated have better odds of being funded? Obviously, there’s a continuum here: at one end is an unweighted lottery; at the other, so much weight is given to evaluation scores that those scores effectively dictate the lottery outcome and it’s not really a lottery any more.
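To make that continuum concrete, here’s a minimal sketch of such a lottery in Python. It’s purely illustrative, not a proposal for an actual mechanism: the 1–5 score scale, the entry threshold, and the weighting exponent `gamma` are all assumptions picked for the example.

```python
import numpy as np

def lottery_fund(scores, n_awards, threshold=3.5, gamma=0.0, rng=None):
    """Fund proposals by lottery among those scoring above a threshold.

    gamma = 0 gives an unweighted lottery; as gamma grows, the odds
    concentrate on the top-scored proposals until the scores
    effectively dictate the outcome (no longer really a lottery).
    All parameter values here are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    eligible = np.flatnonzero(scores >= threshold)  # only good-enough proposals get tickets
    if len(eligible) <= n_awards:
        return eligible  # enough money for everyone above the bar
    weights = scores[eligible] ** gamma  # equal tickets when gamma == 0
    return rng.choice(eligible, size=n_awards, replace=False,
                      p=weights / weights.sum())

# e.g., 100 proposals scored on a 1-5 scale, with money for 20:
scores = np.random.default_rng(1).uniform(1, 5, size=100)
unweighted = lottery_fund(scores, 20, gamma=0.0)    # pure lottery above the bar
nearly_ranked = lottery_fund(scores, 20, gamma=50)  # scores all but dictate winners
```

Here `gamma = 0` is the unweighted end of the continuum and a very large `gamma` approaches the score-dictated end, with everything in between available by turning one dial.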
To me, one of the benefits of a lottery would be the low cost of distributing the money (for both researcher and agency). It’s an interesting question what the entry criteria should be. You often hear that, yes, recent studies have shown poor ability of panels to predict publications and other outcomes, but at least panels are good at weeding out the truly bad. I once tried to follow up these citations and I haven’t found any rigorous study even showing panels/reviews can separate good from bad (OK, maybe the bottom crazy 5% or so). I think we could do worse than saying anybody who has published five papers gets an equal ticket.
Or as I hinted at above, we could just use existing evidence on the most efficient systems and give many small grants to successful researchers (with appropriate provisions for newcomers) and skip the lottery. Of course, some countries already do this…
But on the other hand, this last option doesn’t sound too different from a lottery with a threshold track record for entry, except the odds of winning are much higher than the word lottery implies.
@Brian:
“You often hear that yes recent studies have shown poor ability of panels to predict publications and other outcomes, but at least panels are good at weeding out the truly bad.”
We’ve linked to data on this in the past (sorry, can’t find it now). IIRC, the general thrust is that panels do have some ability to predict publications and their citations when you look across all submitted grants. And that that ability is a continuous thing–it’s not solely a matter of weeding out a small fraction of truly terrible proposals that are hugely different from the others. But the ability of panels to predict variation in publications or citations among the small fraction of funded proposals is very modest if it’s there at all (as you’d expect, since in the panel’s view the funded proposals are all high quality and so very similar). This suggests to me that you would want to set a threshold panel score for lottery entry, but not use panel scores to weight the entrants (at least, not very much).
Of course, I’m imagining project-based proposals entering a lottery with a success rate of, say, 20-30%. Yes, you can think of an NSERC Discovery Grant-type system as a bit like a “lottery” for research programs (not individual projects; a key distinction) with a 75% success rate and correspondingly small average prizes. Whether to support research programs or individual projects gets into other issues, I think. They serve different purposes.
One could even imagine a parallel system of two “lotteries”: a research program lottery with small prizes and high odds, and a project-based lottery with big prizes and low odds. Personally, I’d like to see NSERC go that route, so long as the money for the new project-based system didn’t come from the Discovery Grant budget. And as long as I’m dreaming, I’d like my free pony to be a palomino…
I’ve seen papers that cite other papers claiming their citations suggest panels are good at differentiating really bad proposals. But when I follow the citations that’s not really what the other papers say (or they are not convincing). I don’t believe I’ve ever seen a convincing empirical study that panels are good at identifying terrible proposals. This I believe is more an intuition that gets repeated. Of course in fairness, it is probably hard to design a study that would test this. You’d have to accidentally or intentionally fund proposals rated as terrible and see what happens.
I’m trying to imagine the NSF officer testifying before Congress as to why millions of dollars are being given to, and denied to, constituents on the basis of what some on the panel would inevitably refer to as “dumb luck”. There are always opportunities to refine peer review; but, as Brian says, the kind of experiments needed to evaluate some of the above conjectures (i.e., deliberately funding proposals that score three “poors”) would not likely be admired for their rigor.
A less competitive system might have the unintended consequence of removing pressure on researchers to innovate. For me, knowing how competitive the proposal process is forces me to think really hard about what I can do that will excite reviewers, and then work hard again on proposal presentation. If funding decisions are purely based on luck (I’m skeptical), then I’m not using my time efficiently, but if other researchers respond to the competitive pressures like I do, it may benefit the field as a whole. Perhaps there are enough incentives to innovate elsewhere in the system (limited space in top journals) that removing this one wouldn’t matter too much, but I see some benefits in being forced to think hard about proposals.
Some of my best research has come from careful, innovative thinking for a grant proposal that then didn’t get funded, but I did the work anyway. So I get what you’re saying. But do you really think, net net, the highly competitive grant system improves the quality of science you do, or detracts from it? I would have to be in the second camp.
And I do think one can design better systems than an absolute lottery (it would look a lot like Canada’s NSERC 10 years ago when I was there). But I think it is important to make people think about how close we are to a lottery right now, even if there are trappings of objectivity and merit on top of it. Studies have shown that panels have extremely poor predictive power, and we all know, as John Bruno said, that a lot is luck of the draw of the reviewers. There are also a lot of biases for and against research areas and styles, etc. I would make a general statement that anytime acceptance rates are 5% for anything, a whole lot of luck is going to enter the system.
And to enter this largely arbitrary system, most American researchers spend a colossal amount of time on grants. It’s not just the 15-page proposal – it’s the budget, with a fairly intense university bureaucracy around that; the letters of support; the data management plan and postdoc mentorship plans and FastLane quirks and COI lists and… A proposal is often really 50 pages when it’s done, and it is close to all-consuming for a month or so. I sure don’t get a month’s worth of improved thinking out of it.
I would sure like to see an experiment on this: (1) a random set of n researchers from a pool of N get X dollars each to spend on research expenses as they please, and (2) a different set of researchers is chosen from a different but same-sized pool using standard review panels. The question is, does group 2 outperform group 1, and if so, is it by enough to justify the bureaucratic cost? My suspicion is that the benefit of the current system isn’t worth the cost.
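Even before anyone runs that experiment for real, you can get a feel for the stakes with a toy Monte Carlo version of it. This sketch is only illustrative: the pool size, number of awards, and especially the assumed score–productivity correlation `panel_r` are made-up parameters, not empirical estimates.

```python
import numpy as np

def panel_vs_lottery(pool=200, funded=40, panel_r=0.3, trials=2000, seed=0):
    """Toy simulation: panel-selected vs. randomly funded researchers.

    Each researcher has a latent 'productivity' (in SD units); the
    panel sees it through noise, so that the correlation between
    panel score and productivity is panel_r (an assumed value).
    Returns the average productivity edge of the panel's picks.
    """
    rng = np.random.default_rng(seed)
    edges = []
    for _ in range(trials):
        productivity = rng.normal(size=pool)
        noise = rng.normal(size=pool)
        score = panel_r * productivity + np.sqrt(1 - panel_r ** 2) * noise
        panel_pick = np.argsort(score)[-funded:]                 # top-scored group
        lottery_pick = rng.choice(pool, size=funded, replace=False)  # random group
        edges.append(productivity[panel_pick].mean()
                     - productivity[lottery_pick].mean())
    return float(np.mean(edges))

print(f"Panel's edge over the lottery: {panel_vs_lottery():.2f} SD of productivity")
```

Whether an edge of that size would justify the bureaucratic cost is exactly the question posed above; the simulation only makes plain that the answer turns on how predictive panels really are (the `panel_r` knob), which is the quantity Brian argues we don’t actually have good evidence on.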
The only quasi-experiment I’m aware of on this was the US government stimulus spending several years ago. It resulted in funding for a bunch of proposals that just missed getting funded under the usual system. As I recall, it turns out that those proposals were less productive on average than those funded under the regular system, though the difference wasn’t massive. Of course, this is quite far from the idealized experiment Matthew describes.
Re: the bureaucratic costs of the current system, note that some of them are difficult to quantify (especially if you want to measure them on the same scale as the benefits), and others are low (e.g., reviewers aren’t paid, program officers don’t make huge salaries, etc.). I mention this not to imply those costs are small, but just to say that if you were going to seriously try to do a cost-benefit analysis you’d have to try to quantify the costs and benefits on the same scale. I have no idea how to do that, though presumably someone does.