Are US researchers slowly boiled frogs? – or thinking out of the box about the future of NSF

There is a belief that dropping a frog into hot water will cause it to react and immediately jump out, while putting it in a pan of cool water and slowly warming it will cause the frog to never notice until it is boiled. Here in Maine you hear the same debate about how to cook a lobster. Whether the frog myth is true or not is debatable (although it is clearly sadistic). But it has become a common metaphor for failing to notice or respond to small incremental changes which, taken in the aggregate, are terrible (fatal in the case of the frog). We seem to have a bit of the same thing happening with the primary basic science funding agency in the US (the National Science Foundation, or NSF). In this piece I want to a) argue that, due to macro trends that are not the fault of NSF, the agency and its researchers are in a frog-boiling scenario, and b) attempt to kick-start an out-of-the-box big-picture discussion about what should be done about it (akin to the frog realizing it needs to take bold action and jump out of the pot).

But first, I’ve already said it, but let me repeat it to be abundantly clear. This is NOT a criticism of NSF. Every single program officer I’ve ever dealt with has been a highly dedicated and helpful professional (not to mention they are also researchers and one of us), and NSF regularly gets rated by government auditors as one of the most efficient and well run branches of the government. Instead, these trends are being driven by macro trends beyond the control of NSF (or of us researchers). I’m sure NSF is just as aware of and unhappy about these trends as I am. I expect they also are having discussions about what to do about it. I have not been privy to those discussions and have no idea whether NSF would welcome the discussion I am promoting here or not, but I feel like this blog, with its tradition of civility and rational thinking might be a useful forum.

Why researchers at NSF are like frogs being slowly boiled – the macro trends

I am going to focus just on the environmental biology division (DEB), although I don’t think the story differs much anywhere else. I haven’t always been able to obtain the data I would like to have, but I’m pretty confident that the big-picture trends I am about to present are quite accurate even if details are slightly off. The core graph, which I’ve seen in various versions in NSF presentations for a while (including those used to justify the switch to the preproposal process), is this:

Trends in # of proposals submitted (green), # of proposals funded (blue), and success rate (red). This data is approximate (eyeball-scanned from a graph provided by NSF). Linear trend lines were then added.


This graph confirms what NSF has been saying – the number of proposals submitted keeps going up without any sign of stopping, while the number of proposals actually funded is flat (a function of NSF funding being flat – see below). The result is that the success rate (% of proposals funded) is dropping. But adding trend lines and extending them to 2020 is my own contribution. The trend in success rate here is actually an overestimate, because the stimulus year of 2009 was left in. According to a naive, straight-line trend, success rate will reach 0% somewhere between 2019 and 2020! Of course nobody believes it will reach 0%. And the alternative approach of combining the other two trend lines gives roughly 200 proposals funded out of 2000, or 10%, in 2020. But the trend line is not doing a terrible job; when I plug in the 2013 number from DEB of 7.3%* it is not that far from the trend line (and is already below the 10% number). Nobody knows what the exact number will be, but I think you can make a pretty good case that 7.3% last year was on trend and the trend is going to continue going down. A few percent (2%?) by 2020 seems realistic. All of this is the result of inexorable logic. The core formula here is: TotalBudget$ = NumberProposals * Accept% * GrantSize$

NumberProposals is increasing rapidly. Although data on it is harder to come by, my sense is that GrantSize$ is roughly constant (at least after adjusting for inflation), with a good spread but a median and mode right around $500,000. Maybe there is a saving grace in TotalBudget$? Nope:


Trends in NSF funding in constant 2012 dollars (data from Also see NSF’s own plot of the data at

NSF appears to have had four phases – exponential growth in the early days (1950-1963), flat from 1963-1980, strong growth from 1980 to about 2003, and then close to flat (actually 1.7%/year over inflation) from 2003-2013 (again with a stimulus peak in 2009). Note that the growth periods were both bipartisan (as was the flat period from 1963-1980). Positive growth rates aren’t terrible, and congratulations to NSF for achieving this in the current political climate. But when pitted against the doubling in NumberProposals, it might as well be zero growth for our purposes. It is a mug’s game to try to guess what will happen next, but most close observers of US politics – noting that the debate has shifted to a partisan divide about whether to spend money at all, and a resignation that the sequester is here to stay – are not looking for big changes in research funding to come out of Congress anytime soon (see this editorial in Nature). So I am going to treat TotalBudget$ as a flat line and beyond the control of NSF and researchers.

The number that probably deserves the most attention is NumberProposals. Why is this going up so quickly? I don’t know of hard data on this. There is obviously a self-reinforcing trend – if reject rates are high, I will submit more grant applications to be sure of getting a grant. But this only explains why the slope accelerates – it is not an explanation for why the initial trend is up. And there is certainly a red-queen effect. But in the end I suspect this is some combination of two factors: 1) the ever tighter job market (see this for a frightening graph on the ever widening gap between academic jobs and PhDs), which has led to ever higher expectations for tenure. To put it bluntly, places that 20 years ago didn’t/couldn’t expect grants from junior faculty for tenure can now impose that expectation because of the competition. And 2) as states bow out of funding their universities (and as private universities are still recovering from the stock crash), indirect money looks increasingly like a path out of financial difficulties. Obviously #1 (supply) and #2 (demand) for grant-writing faculty reinforce each other.

So to summarize: TotalBudget$ = NumberProposals * Accept% * GrantSize$. TotalBudget$ is more or less flat for the last decade and the foreseeable future. NumberProposals is trending up at a good clip, driven by exogenous forces, for the foreseeable future (barring some limits placed by NSF on the number of proposals). So far GrantSize$ has been constant. This has meant Accept% is the only variable left to counterbalance increasing NumberProposals. But Accept% is going to get ridiculously low in the very near future (if we’re not there already!). Part of the point of this post is that maybe we need to put GrantSize$ and NumberProposals on the table too.
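To make the budget arithmetic concrete, here is a toy calculation built on the identity above. All of the figures are invented for illustration; they are not NSF data.

```python
# Toy illustration of TotalBudget$ = NumberProposals * Accept% * GrantSize$.
# All figures are hypothetical, chosen only to show how the identity behaves.

def accept_rate(total_budget, n_proposals, grant_size):
    """Acceptance rate implied by a fixed budget and a fixed grant size."""
    return total_budget / (n_proposals * grant_size)

budget = 100e6   # hypothetical flat total budget: $100M/year
grant = 500e3    # roughly constant median grant: $500k

for n in (1000, 2000, 4000):
    print(f"{n} proposals -> {accept_rate(budget, n, grant):.1%} funded")
# -> 20.0%, 10.0%, 5.0%: with budget and grant size fixed,
#    every doubling of proposals halves the acceptance rate.
```

The point is just that once TotalBudget$ and GrantSize$ are pinned down, Accept% is mechanically determined by NumberProposals; there is no fourth lever.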

Some salient facts for a discussion of what to do

In the next section I will list some possible solutions, and hopefully readers will contribute more, but first I want to highlight two very salient results of metaresearch (research about research).

  1. Review panels are not very good at predicting which proposals will lead to the most successful outcomes. Some claim that review panels are at least good at separating good from bad at a coarse grain, although I am not even convinced of that. But two recent studies showed that panel rankings effectively have no predictive power for variables like number of papers, number of citations, or citations of best paper! One study was done in the NIH cardiovascular panel and the other was done in our very own DEB Population and Evolutionary Processes panel by NSF program officers Sam Scheiner and Lynnette Bouchie. They found that the r2 between panel rank and various outcomes was between 0.01 and 0.10 (1-10% of variance explained) and not significantly different from zero (and it got worse when budget size, which was itself an outcome of ranking, was controlled for). UPDATE: as noted by author Sam Scheiner below in the comments, this applies only to the 30% of projects that were funded. Now, traditional bibliometrics are not perfect, but given that they looked at 3 metrics and impact factor was not one of them, I think the results are pretty robust.
  2. Research outcomes are sublinear with award size. Production does increase with award size, but the best available (though still not conclusive) evidence, from Fortin and Currie 2013, suggests that there are decreasing returns (a plot of research production vs. award size is an increasing, decelerating curve, e.g. like a Type II functional response). This means giving an extra $100,000 to somebody with $1,000,000 buys less productivity increase than giving an extra $100,000 to somebody with $200,000 (or, obviously, to somebody with $0).
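One way to picture fact #2 is with a saturating production curve. The sketch below uses a Type II (Michaelis-Menten-style) form with made-up parameters – it is not fitted to Fortin and Currie’s data, just an illustration of what decreasing returns imply for the marginal dollar.

```python
# Hypothetical decelerating production curve (Type II functional response form).
# p_max and half_sat_k are invented parameters, not estimates from real data.

def production(award_k, p_max=100.0, half_sat_k=300.0):
    """Research output as a saturating function of award size (in $1000s)."""
    return p_max * award_k / (half_sat_k + award_k)

# Marginal value of an extra $100k at two funding levels:
gain_rich = production(1100) - production(1000)  # well-funded lab: ~1.6 units
gain_poor = production(300) - production(200)    # modestly funded lab: 10 units
print(gain_poor / gain_rich)  # the extra $100k buys ~6x more output lower down
```

The exact numbers depend entirely on the invented parameters, but any concave curve gives the same qualitative answer.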

Possible solutions

Just to repeat this is not a criticism of NSF. The exogenous drivers are beyond anybody’s control and simple budgetary math drives the rest. There is no simple or obvious answer. I certainly don’t have the answer. I just want to enumerate possibilities.

  1. Do nothing – low Accept% is OK – This is the business-as-usual scenario. Don’t make any drastic changes and just let the acceptance rate continue to drop to very close to zero. I actually think this might be the worst choice. Very low acceptance rates greatly increase the amount of randomness involved. They also, ironically, bias the panels to be conservative and select safe research (maybe even mediocre research) that won’t waste one of the precious awards, which is not good for the future of science. I recall being part of a discussion on an editorial board for a major journal where we all agreed the optimal accept rate was around 25-30%. Anything higher and you’re not selective. Anything lower and you start falling into traps of randomness and excessive caution. I think this is probably about the right number for grants too. Note that we are at about 1/4 of this rate. I personally don’t consider even the current acceptance rate of 7% acceptable, but I cannot imagine anybody considers the rates of 1-2% that we’re headed towards to be acceptable. The other approaches all have problems too, but in my opinion most of them are not as big as this one.
  2. Drive down NumberProposals via applicant restrictions on career stage – You could only allow associate and full professors to apply, on the basis that they have the experience to make best use of the money. Alternatively, you could only allow assistant professors to apply, on the argument that they are the most cutting-edge and most in need of establishing research programs. Arguably there is already a bias towards more senior researchers (although DEB numbers suggest not). But I don’t think this is a viable choice. You cannot tell an entire career stage they cannot get grants.
  3. Drive down NumberProposals via applicant restrictions on prior results – A number of studies have shown that nations that award grants based on the personal record of the researcher do better than nations that award grants based on projects. You could limit those allowed to apply to those who have been productive in the recent past (15 papers in the last 5 years?). This of course biases against junior scientists, although it places them all on an equal footing and gives them the power to become grant-eligible. It probably also lops off the pressure from administrators in less research-intensive schools to start dreaming of a slice of the NSF indirect pie (while still allowing individual productive researchers at those institutions to apply).
  4. Drive down NumberProposals via lottery – Why not let the outcome be driven by random chance? This has the virtue of honesty (see fact #1 above). It also has the virtue of removing the stigma from not having a grant, since people can’t be blamed for it. This would especially apply to tenure committees evaluating faculty by whether they have won the current, less acknowledged, NSF lottery.
  5. Drive down NumberProposals via limitations on number of awarded grants (“sharing principles”) – You could also say that if you’ve had a grant in the last 5 years, you cannot apply again. This would lead to a more even distribution of funding across researchers.
  6. Decrease GrantSize$ – The one nobody wants to touch: maybe it’s time to stop giving out average grants of $500,000. Fact #2 strongly argues for this approach. Giving $50,000 to 10 people is almost guaranteed to go further than $500,000 to one person. It gets people over that basic hump of having enough money to get into the field. It doesn’t leave much room for summer salaries (or postdocs – postdoc funding would have to be addressed in a different fashion), but it would rapidly pump the accept rate up to reasonable levels and almost certainly buy more total research (and get universities to break their addiction to indirects). Note that this probably wouldn’t work alone; without some other restriction on the number of grants one person can apply for, everybody will just apply for 10x as many grants, which would waste everybody’s time.
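Option 6 can be checked against the same kind of saturating curve that fact #2 suggests. This is a toy model with invented parameters, not a forecast, but under any decelerating curve the qualitative result holds:

```python
# Toy comparison: one $500k grant vs. ten $50k grants, same total dollars,
# under a hypothetical saturating production curve (invented parameters).

def production(award_k, p_max=100.0, half_sat_k=300.0):
    return p_max * award_k / (half_sat_k + award_k)

one_big = production(500)        # 100 * 500/800  = 62.5 units of output
ten_small = 10 * production(50)  # 10 * 100*50/350 ~ 142.9 units of output
print(ten_small / one_big)       # splitting buys ~2.3x the total output here
```

The exact multiplier depends entirely on the made-up curve, but the direction of the effect does not: concavity alone guarantees that ten small grants out-produce one big one for the same total spend.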

What do you think NSF should do? Vote for up to three choices for how you think NSF should deal with the declining acceptance rates (and feel free to add more ideas in the comments):

I am really curious to see which approach(es) people prefer. I will save my own opinions for a comment after most votes have come in. But I definitely think it is time for the frogs (us) to jump out of the pot and take a different direction!

* Note that 7.3% is across all proposals to DEB. The blog post implies that the rates are lower on the core grants and higher on the non-core grants like OPUS, RCN, etc. They don’t give enough data to figure this out, but if I had to guess the core grants are funded a bit below 5% and the non-core grants are closer to 10%.

This entry was posted in Issues by Brian McGill.

About Brian McGill

I am a macroecologist at the University of Maine. I study how human-caused global change (especially global warming and land cover change) affects communities, biodiversity and our global ecology.

59 thoughts on “Are US researchers slowly boiled frogs? – or thinking out of the box about the future of NSF”

  1. Hi Brian. Great post. Would be curious if you can edit the poll results to include how many people have taken it. You know, so that readers can judge the statistical validity of the results…

    • Hi – I don’t think I can edit the poll once it’s started, but I will certainly update the numbers at the end of the 1st and 2nd days in the comments.

      • Hmm…I’m not sure if the PollDaddy polls actually record number of respondents.

        Re: statistical validity, we’re not randomly sampling from any well-defined statistical population here, I don’t think, at least not one of any particular interest. The population is basically “avid readers of this blog, plus some people who happen to get pointed to this post via social media”. 🙂

  2. In Canada, NSERC Discovery Grants for Ecology/Evolutionary Biology average about $20-25k/year, but have a success rate of ~50% for first-time applicants, and 80% for renewals ( Unlike many places in the US, though, none of this goes towards PI salary.

    There are obviously challenges with working in a system where this is the average grant (and would be far more in a system transitioning from $500k, or even $200k). Canadian PIs tend to have to apply for more smaller grants (though with a higher success rate). This makes it very challenging to hire postdocs, though, without a large mega-grant. But Canadian researchers generally punch above their weight in terms of publications (publishing about 16% as much as the US in gross terms, despite being only ~10% of the population –

    And I’ve argued before that spreading the funding around is likely a better strategy –

    • Some further commentary on the Canadian system and related issues:

      Alex is correct to note that NSF can’t just go to the full NSERC system, which would involve much more than just reducing average grant size. NSERC Discovery Grants fund research programs, not individual projects. So you can only hold one NSERC Discovery Grant at a time, because by definition each individual only has one research program. Canadian academic positions are 12-month positions, so there’s no need for summer salary in Canada (which NSERC won’t pay for PIs anyway). Overhead is handled differently in Canada – it’s not part of individual grants, so your whole NSERC grant is real money you can spend on science. Canadian grad students are mostly funded as TAs (or else by their own scholarships or fellowships), with PIs only paying their summer salaries from grant funds. But yes, NSF presumably has flexibility to go some way towards the NSERC system by, e.g., reducing average grant size and possibly by limiting the number of NSF grants people can hold at once.

      One big benefit of the NSERC system is that it drastically reduces the amount of time one has to spend chasing money. I write one 5-page grant every 5 years to provide a baseline level of funding for my lab. So instead of constantly writing and revising grants, I can do science, write papers, etc. Even blog. 🙂

      I love the Canadian system myself, and I’m happy to make the trade-offs it forces you to make (though if I was a Canadian looking for a postdoc I might well think differently…). Anecdotally, it’s my impression that many US researchers would prefer the Canadian system. Every year at ESA I ask my US colleagues “How low do NSF success rates have to drop before the Canadian system starts to look good to you?” The most common answer is “They’ve already dropped that low.” Although I did have one person say “They can never drop that low.” The person who said that compared getting an NSF grant to a hit of crack cocaine–having one is so great that once you’ve had one, you’ll do anything and put up with anything to get another one. 🙂

      One can also imagine hybrid systems. Andrew Hendry once argued to me that NSERC ought to cut everyone’s Discovery Grants by 10% in order to fund a new NSF-style program of big project-based grants. That would give Canadian PIs a chance to do the sorts of science one can only do with an NSF-type grant. And even if the success rate for the new NSF-type grants was 0.01%, that’d be ok because most investigators would still have their Discovery Grants as a backstop to keep their research programs ticking along. Andrew’s theory was that everybody in Canada would be in favor of this, because everyone would be cocky enough to think that they’d be one of the lucky few (and it would be very few) who’d get one of the big new NSF-type grants. Andrew was wrong about that–personally, I don’t want to pony up 10% of my Discovery Grant for what would essentially be a lottery ticket (and I say that as someone who only has an average-sized Discovery Grant). But I do think there’s a good case to be made for the sort of hybrid system Andrew suggests.

      Irrelevant p.s.: I predict that, if these comments are noticed by a certain prominent science blogger, they will draw much eye rolling and snark to the effect that NSF can’t possibly move even one little step in the direction of the Canadian system. Supposedly because, if NSF did move in that direction, all biology departments would stop hiring people working in NSF-funded fields because NSF grants wouldn’t pay enough in overhead to make them worth pursuing. Or something (???) Anyway, I found trying to discuss this topic with said prominent science blogger sufficiently exasperating that I eventually just threw up my hands: 🙂

    • Hi Alex – having worked in both the US and Canadian funding systems, I can say there are of course pros and cons of each.

      First thing I would note is that it is impossible to directly compare grant size in the US and Canada. Most basically, Canadians always give per-year amounts and US researchers give the total grant (which is typically 3 years). Then you take off the typically ~50% indirect in the US, and the 1-2 months of summer salary which you don’t need in Canada (indirect is funded by the feds and provincial governments, and contracts are 12 months). Things start to look a lot closer to each other. Then you add in that Canada still has more (albeit declining) direct funding to students and postdocs via fellowships, and the difference is not so big. Indeed the biggest difference is that in Canada it is a ~50% accept rate for 5 pages every 5 years vs. in the US a ~5% accept rate for 15 pages several times a year. On that front Canada is a clear win. I also like the fund-the-researcher instead of fund-the-project model on the whole.

      And I think you & I are in general agreement about my salient point #2 above and its implications.

  3. Wouldn’t decreasing the grant size not also lead to fewer Postdocs being hired, so that more earlier-career scientists will put in more grants on their own? And wouldn’t smaller grant sizes bias ecology even more than now towards short-term studies, simpler experimental designs, lab experiments, studies in ecosystems next door like the campus pond, etc.? That one solution seems especially tricky. One solution may be to increase basic funding of universities and institutes while reducing funding via grants, i.e. the funding agencies’ budgets.

    • “Wouldn’t decreasing the grant size not also lead to fewer Postdocs being hired”

      Yes. Hence Brian’s parenthetical remark about how you’d need to have some kind of separate program for postdocs. Of course, NSERC in Canada does have such a program–but success rates for that program are very low. Bottom line, at some level all of this is a zero-sum game. Money you spend on one thing is money you can’t spend on something else. Allocation constraints and opportunity costs are inescapable.

    • As Jeremy noted, I do recognize the problem for postdocs. And it would be important to address.

      And you definitely need to allow for field-specific variability in terms of research dollars needed. Of course you can have different mixes of grants too – Canada has the Discovery grant, which is single-researcher and smaller, and then other programs like Strategic and Infrastructure which are very big – allowing for some real large projects in Canada.

  4. Getting large organizations to change is often very hard. In addition to trying to change the system, I wonder if there are ideas of ways we can organize ourselves or change the way we interact with the system to help alleviate the problems associated with decreased funding rates. For example, creating a cultural disincentives for researchers who ASK or are awarded more than $200K. Any other ideas?

    • “For example, creating a cultural disincentives for researchers who ASK or are awarded more than $200K. ”

      Hmm, can’t see how to make that happen, Don, and I don’t think it should happen. Funding agency rules quite rightly oblige reviewers and panelists to evaluate the proposed science, and the proposed budget, on the merits. So if somebody proposes good work, well, it’s good work and it costs what it costs. I don’t think it’s right to mark people down for proposing good work that just happens to be expensive, or for accepting (!) large grants they’ve been awarded.

      People should be marked down for padding their budgets, obviously–for asking for more than they need to do the good work they’ve proposed. But at least in principle, that already should be marked down for that.

      • A) “But at least in principle, that already should be marked down for that.” I’m not sure degree of budget padding is public knowledge, which is required to drive the (shaky but provided for example) idea of cultural disincentives among researchers.
        B) “Call for Discussion” Fail. The post called for “out-of-the-box big picture discussion” ideas and you responded to shoot holes at my flimsy example without addressing (or even considering?) the broader flip-the-way-we-might-think-about-responding, out-of-the-box point of Bottom-Up responses to the problem in addition to the Top-Down solutions proposed by Brian.

      • @Don S.

        “Call for Discussion” Fail”

        Sorry, I must’ve missed where Brian said that this thread was supposed to be restricted to pure brainstorming, and that we weren’t supposed to attempt to further develop or evaluate any of the ideas suggested. EDIT: I apologized below, but wanted to do so here as well. This was out of line and unconstructive of me. Apologies both to Don and to our other readers.

        I think in practice your proposal isn’t a way to change incentives, it’s just a call for people to act against the existing incentives. And in general, I think there’s good reason to be skeptical of the effectiveness of any call for people to act against the incentives they face. But I’m happy to be convinced otherwise. And so by all means, keep following your line of thought: Without changing NSF rules or the other explicit incentives researchers face, how would we create, from the bottom up, a cultural norm that one should apply for much smaller grants than one is permitted to apply for, and that one should decline larger grants if one receives them? Honest question.

        I won’t comment further until you ask me to.

    • I think this is an interesting question. While I cannot see the culture building to the point of punishing “selfish” researchers asking for large grants, I do think your point about how NSF is not just going to change on its own is on target. I think it is going to take some combination of bottom-up and top-down push for change. Hence this post as my contribution to a bit of bottom-up push. Hopefully there is a lot more in the future.

      I don’t have any doubt we are headed to 2-4% funding rates in the near future and that is completely unviable in my opinion.

      • Jeremy,
        The tone of your response suggests that my initial response to you was overly harsh. I’m sorry about that.

      • @Don S.

        No worries, and my apologies as well, I shouldn’t have read you as intending a harsh meaning.

        To get back to the substantive issues: I’m trying to think of other cases in science where norms of professional practice have changed in a bottom-up fashion, into the teeth of explicit incentives (as opposed to new norms merely replacing old norms). Trying to think of some historical examples that could serve as a model here. I’m not coming up with anything…

        Data sharing is a new norm that’s developed pretty rapidly from the bottom up. But I think it was mostly up against old norms rather than explicit incentives. It’s not like sharing data could put your career at serious risk, the way forgoing a significant amount of grant money could.

        There are those who are trying to get scientists to adopt a norm that it’s unethical to publish in journals published by for-profit publishers, or in any journal that isn’t author-pays open access. They haven’t gotten very far, as best I can tell, because they’re up against not just existing professional norms, but strong incentives. (Author-pays open access publishing has of course taken off, but for other reasons besides this particular normative argument).

        But perhaps I’m just not thinking of some examples here?

    • That’s a great post with a lot of data expanding on Fortin & Currie (including a sample plot).

      While you can’t do operations-research-style portfolio optimization without knowing the exact shapes of the curves (and we don’t and never will know those exact shapes), the general principle that funding more researchers with fewer per-researcher dollars (same total funding) is going to give you more research impact is hard to argue with. There is probably some lower threshold below which that is no longer true. But whatever that threshold is, the US is way above it, and you’d be hard pressed to argue that Canada is at that threshold either – which tells me that threshold must be at least as low as the $5-$10K/year range and maybe lower. We’re not in that ballpark yet!

  5. Regarding downgrading grant sizes, I’m curious how that would affect the nature of the research that gets funded. Some research, especially coming from large, collaborative projects, requires large amounts of money to get off the ground (an extreme example is the Large Hadron Collider).

    What about weighting grant proposal size by the number of individuals involved in the grant? You would still have to cap the number of proposals somebody could put their name to, but this would give room for a broader array of projects, some highly collaborative and expensive and some more individualized and less expensive.

    • I do think you have a point about maybe wanting to incentivize large collaborative grants to some degree (possibly by making the typical individual grants smaller). When you look at the distribution of the number of senior personnel on NSF grants, they are still largely pretty small teams on average.

      The Large Hadron Collider is a whole different ball of wax. The US is currently trying that with NEON in ecology. I think there needs to be a compelling demonstration that a fundamentally important question can only be answered by mega-big science before I would follow the put-all-your-eggs-in-one-basket approach (just from a risk-management/portfolio perspective).

    • NSERC in Canada has separate programs for big collaborative projects, so that’s one way to address this. Though of course you still have to decide how much money to allocate to the big collaborative grants program vs. the individual investigator grants. NSERC also used to have a separate program for purchasing big expensive pieces of equipment.

  6. Trigger warning: I’m about to say something a lot of readers probably won’t like.

    It’s NSF’s job to get the most bang for the buck that they can in terms of getting good science, right? So if the current system gets a lot of bang for the buck compared to the alternatives, isn’t NSF obliged to stick with the current system?

    That is, it’s not NSF’s job to care about whether low success rates make it hard for young investigators to establish their labs or get tenure, is it? Or about whether low success rates make it hard for people to sustain long-term research projects? Or about whether low success rates might cause some people who might otherwise choose a career in academic science to choose some other career path? Or about whether it’s funding way more graduate students than can ever expect to get academic jobs? At least, NSF’s job is only to care about those things indirectly, right? NSF should only care about them insofar as they impact the ability of NSF to purchase good science with the money it’s been budgeted by Congress? (Someone please correct me if I’m ignorant about NSF’s legal mandate, which I might well be)

    Put another way (and assuming I haven’t misunderstood NSF’s mandate, which I may have), to argue for changes to the NSF system, you have to argue that those changes would be good for science as a whole, measured according to the sorts of outputs NSF should care about. I think you can make that argument. As Brian notes, you can point to data indicating diminishing returns to giving additional money to people who have some already, and you can argue for spreading the wealth on the grounds that breakthroughs are unpredictable, so it doesn’t make sense to put lots of eggs in a few baskets. But that’s different than arguing for smaller grants with higher success rates because, say, that will make it easier for new investigators to establish their labs. And those arguments in favor of going to an NSERC system aren’t slam dunks. For instance, one can argue for a system of big (and therefore hard-to-get) grants on the grounds that only big grants support the collection of big sample sizes. Giving everybody small grants is just a recipe for funding a bunch of underpowered studies:

    As much as I like the Canadian system myself, I’m not sure there’s a slam-dunk case for its adoption over other systems.

    • Jeremy – your whole comment is predicated on the proposition that the current system NSF is using maximizes bang for the buck. That is the question I am raising. I think it is an open question whether the current system maximizes bang for the buck. And I would argue that there are gross inefficiencies (as already mentioned: a propensity to become more conservative in funding, more cliquish, and more likely to fund researchers who already hold many grants) that are directly tied to the ridiculously low funding rates.

      Additionally, in the US, with a significant portion of university revenues coming from federal granting agencies, it is not as easy to divorce research effectiveness from the overall health of the university ecosystem as it is in, say, Canada, where the latter is the sole funding responsibility of the government, met through funds given directly to the universities. Indeed, some non-trivial fraction of the STEM university professors in the US (30%?) exist only because of the money shell games that are indirect costs. It is why the sciences have grown while the humanities have shrunk, as US administrators chase indirects. It is misleading to say that in the current world NSF is only funding small research projects over and above the teaching functions already funded in other ways, and thus only looking to maximize research bang for the buck in a way completely divorced from the other goings-on at a university.

    • The amount of work it takes to consistently attempt to get NSF funding itself limits productivity throughout the system. So many people spend time writing grants (with the preproposal system, a little less so), but if you were to ask PIs how much time they spend on grant writing instead of getting actual research done and published, they’d say that grant writing takes up a lot of their time. I bet NIH-funded researchers feel this even more strongly, as they often need these grants to pay for PI salary as well as lab personnel.

      So to get more bang for the buck, maybe cutting back dramatically on the effort (but not skill or productivity) it takes to get funded would be productive, along with cutting back dramatically on award size, because there are declining returns with award size. There are so many researchers out there who would do remarkable things with an NSERC-size award but who can’t, or don’t choose to, spend their lives chasing the small chance of an NSF award. (And many work in small ponds – of course you knew I’d go there.) If you want to really maximize bang for the buck in research, fund a lot more people with less money. Right now the funding system is filled with a mix of lottery winners and the deserving yet unfunded. And I say this as a lucky, consistent lottery winner. I don’t know if fairness matters, but distributing resources to people who have a demonstrated ability to be successful in research will maximize research output, including tapping into a lot of unfulfilled potential.

      • Hi Terry – all great points as usual. I can’t believe I forgot to mention the waste of time as a cost of low acceptance rates. I think it is easy to forget when you are in another country. Most people I know consider a month of all “free time” (non-teaching, non-supervising of existing students, etc. – in short, all time that would otherwise go to research) to be the minimum needed to write a grant. When you consider that low accept rates drive many people to do this 2-3 times/year and still have aggregate funding rates around 10-20%, this is clearly an enormous hit to research productivity.

    • Sounds scary…

      I hope that you agree that always trying to maximize bang for the buck is not a good guiding principle for a society. For me, decreasing the level of struggling and insecurity faced by people who are trying to do science is a better argument for changes in the system than maximizing scientific output per dollar.

      In a certain sense, producing 10x more PhDs than there are jobs in academia and then letting the best 10% compete for limited funding (i.e., even more selection) can be good for science as a whole, because only the very best make it. However, most people do not want to become martyrs. Organizing the system in a way that reduces the number of unhappy, struggling people would be worthwhile on its own. It sounds idealistic, I know. However, ignoring these issues would be cynical.

      • I honestly don’t mean to sound cynical or heartless, Jan. I guess I’d describe myself as a realist (which maybe is just another word for “cynic”, I admit…). Realistically, I just don’t really see that NSF can or should try to ensure that the availability of long-term academic careers is well-matched to the demand for them.

        I do think one can make an argument that US science would be better off and more productive if it were organized so that there were more permanent or long-term technical support staff (technicians, lab managers, long-term research associates, etc.), and fewer grad students. And perhaps there are tweaks NSF could make to its funding policies that might help bring such a world into being. But I do think the most compelling argument for doing that would be an argument based on the benefits to science as a whole.

        After all, there are lots of career paths that many more people want to pursue than are able to pursue them. And in some of those careers, the only reason there aren’t lots of unhappy, struggling people trying to pursue them is because many people are prevented from even trying to pursue them. Think of how medical school admissions work; most people who want to be doctors are forced to pick another career path very early on – do we want science to work that way?

        In general, I’m a little hesitant about trying to engage in top-down engineering to ensure that the number of people who want to pursue a particular career path is well matched to the demand for that career. I’d prefer to let people vote with their feet in terms of their career choice, and to provide some sort of economy-wide backstop so that everyone has some measure of protection from the downside risk of career choices that don’t pan out (e.g., a universal basic income). But of course, in saying that I’ve probably just revealed myself to be even more idealistic than you (or maybe just differently idealistic). 🙂

        Worth keeping in mind that the unemployment rate among people with graduate degrees is quite low, much lower than for people without them (sorry no time to look for link to data now…) Obviously, in part that’s for reasons that are independent of the degrees themselves (the sort of people who go to grad school often have other strengths and advantages that would help them find employment even if they didn’t go to grad school). And many people with graduate degrees aren’t employed in a way that makes any use of their degrees. But still, it is worth keeping in mind.

        These are really big issues. I don’t pretend to any great wisdom here, just tossing out my own two cents.

  7. “But two recent studies showed that panel rankings effectively have no predictive power of variables like number of papers, number of citations, citations of best paper!”

    Just to be clear, as this was not evident from the way the original text was written: the lack of predictive power was _among the proposals in the top 30% that were funded_. The panels are able to do the coarse filter.

    • Hi Sam – thanks for your important qualification. I did not mean to imply otherwise and apologize if I was too brief in summarizing your result. Your analysis of papers coming directly from a project could not possibly apply to those not funded. UPDATE – I have noted this qualification more directly up in the main text as well.

      I do want to push back a little on “the panels are able to do the coarse filter”. As you noted, this was not the subject of your paper. The one citation on this point I can find in your paper is to Bornmann et al. 2008. However, this paper differs in: a) focusing on awards based on overall researcher productivity, which I too argued above is a bit easier to assess than the predicted outcome of a particular project; b) suffering, as the authors acknowledge, from the fact that the award itself could have improved productivity (more time for research, more students doing research, etc.); and c) lumping awarded vs all non-awarded. I’d be much more interested in seeing productivity vs awarded vs just-missed-the-cutoff vs not-even-close (or, as you did, productivity vs ranking). I suspect there is a bottom 15-20% that drags the non-awarded group down considerably.

      Indeed, it’s not clear to me how one could ever accurately assess panel rankings vs the potential of unfunded projects.

      I don’t doubt that there is a non-trivial number of obviously and fundamentally flawed proposals that panels can differentiate. But personally I doubt that 70% fell in that category at the time of your study, and I am quite certain that 92.7% don’t fall in that category at current funding rates.

      I’d be curious whether you know of other rigorous work showing exactly what level of quality review panels can differentiate. I’d also be curious (you might not be able to say given your position, but obviously you’re in a good position to know) exactly what percentage you think is blatantly flawed vs what percentage is pretty close in quality to those funded – in short, the zone in which you think the results of your findings apply.

      Thanks for dropping by.

  8. FYI – as of this afternoon East Coast US time, there are >200 responses to the poll. So while I echo Jeremy’s concerns above that this is not a scientifically designed sample – almost certainly non-random in several ways – I do not think small sample size is a significant impediment to interpreting it at this point. 200 is small for cross tabs but a pretty good size for just ranking options – especially when the ranking is as clear cut as this.

  9. When rates get down to 2%, will anybody be submitting? Therein lies either an irony or a paradox. When rates get to a low enough percentage, I imagine lots of reasonable people will stop submitting. People who have the golden pen still get funded even in the current climate, but at 3%, even those with a magical grant-writing gift will have a very hard time. I wonder if NSF knows, or has done some research into, the critical point at which submissions actually slow down because of poor funding rates, so that the rates hit an asymptote? Or is 7-10% where the asymptote lives? I think some NIH programs have been there for a while.

  10. Brian, a very interesting post, thanks. A few comments:

    1) A dollar limit, which I might be in favor of, actually hurts those of us in departments that don’t teach large classes like intro Bio and hence don’t have lots of TA support to offer (e.g., Wildlife, Fisheries, and Forestry departments in the US). In our department, >75% of the students are contract- or grant-supported, meaning without a grant/contract it’s very, very difficult to get students into the lab. Each PhD student supported for 5 years will add ~$175-200K to a grant by the time you add tuition, a competitive stipend, fringe benefits, and IDC.

    Of course, this problem would go away if there were more NSF fellowships or universities picked up the tab for students, but I don’t see this happening, so a $ limit means PIs in departments with more student support will have an advantage over those in departments without.
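    For a rough sense of how a per-student figure in the ~$175-200K range can arise, here is a sketch with entirely hypothetical numbers for stipend, tuition, fringe, and indirect-cost rate (actual figures vary widely by university; tuition is treated as IDC-exempt here, which is common but not universal):

    ```python
    # Back-of-envelope 5-year cost of one grant-supported PhD student.
    # All figures below are assumptions for illustration only.
    stipend = 20_000      # annual stipend (assumed)
    fringe_rate = 0.05    # fringe benefits on the stipend (assumed)
    tuition = 8_000       # annual tuition charged to the grant (assumed)
    idc_rate = 0.45       # negotiated indirect cost rate (assumed)
    years = 5

    salary_plus_fringe = stipend * (1 + fringe_rate)
    annual_direct = salary_plus_fringe + tuition
    annual_idc = salary_plus_fringe * idc_rate  # IDC on salary+fringe, not tuition
    total = (annual_direct + annual_idc) * years
    print(f"5-year cost: ${total:,.0f}")  # lands in the ~$175-200K range
    ```

    Nudging any one of these assumptions (a bigger stipend, a higher IDC rate, tuition subject to IDC) easily pushes the total past $200K.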

    2) My experience outside of the US has been primarily with the Brazilian system, which has both federal and state funding agencies equivalent to NSF. Most of these allow PIs to hold only one grant at a time within a given grant category (i.e., the equivalents of core program grants, cross-cutting larger ones like PIRE or DOB, etc.). The result is what you would expect – more people have grants. It also means more strategizing about your research program and grant applications, and less ability to take advantage of opportunities for collaboration – I’ve had collaborators say “Sorry, I can’t submit a grant to that competition to follow up on our collaboration because I have to submit one to support my core research”. Still, this is the approach I favor most – a per-PI limit on core program grants.

    3) I was surprised not to see two other possibilities on your list I’ve seen floated quite frequently:

    a) funding some of the pre-proposals without requiring a full proposal. I agree with the post below – some of the pre-proposals are just so obviously superior to others that it would be nice to have a mechanism to fund them right away.


    b) eliminating pre-proposals and going back to full proposals, but making the proposals shorter (perhaps by not requiring so much detail on experiments, many if not most of which are modified on the car ride to the field sites). This wouldn’t solve the funding problem, but if you are writing grants that are 8 pages long instead of 15, it might at least free up PIs to write more grants or help alleviate the problem of finding reviewers. Off topic, I suppose, but worth throwing into the mix?

    4) Try the YOP poll plugin. It is very easy to use and customizable.

    • Thanks Emilio –

      I agree that cutting grant size hurts grad students and postdocs. There would have to be an alternative fix. Ultimately, though, I don’t think a system that puts even more power in the hands of a single professor, and also attaches lots of indirect costs, is the best way to fund students. Transferring some of this money to directly fund fellowships for PhD students and postdocs is, I think, highly preferable, as it gives the students more power/flexibility and also saves money (no indirects).

      WRT #2 – it is good to hear from somebody in a system that limits the # of proposals/researcher – I guess the Canadian system does this indirectly (only one Discovery Grant per researcher), but what you describe in Brazil is closer to what this would look like in the US.

      WRT #3 – I agree with funding some of the preproposals. I would also agree with shortening applications in general. But in my post I was mostly staying away from the preproposal-or-not debate – it doesn’t change the basic parameters of my argument – overall funding is 7.3% and headed down.


      • Brian, though I also wish we didn’t tie student support to grants, I wasn’t actually saying cutting grant size hurts students; I was saying it hurts **the PIs**. For many of us, no grant means we likely can’t recruit a student, and that translates into lower overall productivity, fewer opportunities for collaboration via co-advising or working with students after they’ve graduated, and smaller professional networks for the students we do have. That means that until there is internal consistency in the way we award graduate student support within a university, a $ limit on grants will have a large and detrimental impact on the faculty in some departments. With a $ limit, not only would NSF need to change the way it operates, but so would every university in the country – otherwise we’ve replaced one problem with another.

        Still, I agree with your broader point – bundling student and postdoc support with grants is less than ideal.

        WRT 3: Yep, it does nothing to solve the problem, only streamlines the process of getting to 7.3%.

      • Your basic point, that the world looks different and the need for grants is higher in places without TAships, is definitely true. Throughout my career I’ve bounced between biology (or EEB) departments that teach BIO 101 and have lots of TAships, and wildlife or natural resource departments that have almost no TAships. Definitely a big change.

    • Re proposal length and possibly funding some preproposals: Discovery Grant proposals in Canada are 5 pages, in which you have to describe your research program (all planned lines of work) for the next 5 years. That’s the proposal; there is no preproposal stage. Rather a different beast than NSF preproposals, obviously. But it certainly does place the focus on the big ideas rather than on methodological details!

      Re: poll plugins, we’re hosted for free by WordPress and so can’t use just any old plugin. In the past we’ve set up Google polls when we want “proper” polls. 🙂

  11. I am curious about the 9 month vs. 12 month salaries of academics in Canada and the US. Are we comparing apples to apples? Is it true that the 12 month salaries of scientists in Canada = 4/3 * 9 month salaries of scientists in USA?

      • It most likely is a separate issue. I know that salaries vary by more than 33% throughout the US. However, I wonder if Americans are seeking out summer salary because we need to or because we can.

    • Jeremy – I don’t think they’re a separate issue at all. US researchers are not paid an equivalent full salary compared to most of the rest of the world unless they get 2 months of summer salary (NSF only allows 2 months, on the presumption people actually take vacation). Having moved back and forth across the border several times, I think most administrators take a US 9-month salary and multiply by 11/9 to estimate an equivalent 12-month salary. Of course, to the individual who sees the 12 months as 100% guaranteed, and the extra 2/9 as increasingly risky to obtain on top of the guaranteed 9/9, the evaluation looks different. That said, cost of living, etc. make it hard to truly compare salaries across borders. But it certainly drives a big piece of the push to get grants in the US, and it is at this point to some degree wired into pay levels and expectations.
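      The 11/9 rule of thumb is just arithmetic, but it can be made concrete (hypothetical base salary; the factor comes from adding NSF’s maximum of 2 months of summer salary to a 9-month base):

      ```python
      # Convert a 9-month US academic salary to a 12-month equivalent.
      # The base salary below is a hypothetical figure for illustration.
      nine_month_salary = 90_000            # assumed 9-month base
      monthly = nine_month_salary / 9       # one month of salary
      summer = 2 * monthly                  # NSF caps summer salary at 2 months
      twelve_month_equiv = nine_month_salary + summer  # = base * 11/9
      print(twelve_month_equiv)  # 110000.0
      ```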

      • And remember it’s more complicated than that – many of us at land grant universities in the US do have 12 month appointments.

  12. Why not limit the overhead rates that universities charge? My university has gone from 48% to 60% in the last ten years. At the same time I have less support. This is a pure money grab. By limiting overhead rates (25%?), NSF could make its money go further.

    • If I could figure out a way to do it, I would agree with you. Overhead rates are definitely creeping up everywhere. 10 years ago, when I first started applying for grants, most universities were (just) under 50%, and now most are over 50%. And as I’ve said elsewhere, I do think overhead is currently incentivizing counter-productive behaviors much more than productive ones (e.g. getting grants is now valued more than doing good research). But this is one of the bedrocks of funding universities in the US, and I would hate to pull it out without an alternative (especially so soon after one of the other bedrocks – state government funding – has been pulled away).

    • Wait, isn’t there a limit on overhead rates already? I vaguely remember that there didn’t used to be a limit, back in the Dark Ages when I was a grad student. And then Stanford or someplace that was charging 100% overhead or some crazy rate got caught using overhead money to smarten up the university president’s house or something, and it was a scandal, and the feds imposed a limit on overhead rates? Am I totally misremembering? (Quite possible; I’m sure some reader can correct me).

      In Canada, overhead rates are set by the federal funding agencies, as far as I know. And it’s not 50%–I think for NSERC it’s 30% or something like that? (Again, I’m sure some reader can correct me).

      The effects of a reduction in overhead rate would obviously depend on how universities use their overhead money.

      Perhaps worth noting that private funding sources, like foundations and NGO’s, increasingly don’t pay any overhead or “indirect costs” at all, taking the view that that’s somebody else’s job.

      • There is no limit on overhead rates that I know of. Each university ‘negotiates’ its own. As the total award amount has stayed essentially flat, the increase in overhead rates has resulted in more money going to the university and less going to the researcher… not good.

      • There is no nationally-set maximum overhead rate in the US. Each university negotiates its overhead rate with either the Defense Contract Audit Agency (DCAA) or the Department of Health and Human Services (DHHS). Included in these negotiations are the items that universities can fund through indirect costs. In theory, the indirect cost rate scales with the services the university is providing and their local costs. /In theory…/

  13. I’m a bit late to this very interesting discussion, but what I’d really like to know is the success rate per PI, not per grant. That is, are there more people fighting for a constant amount of dollars leading to an increasing “unfunded rate”, or are there roughly the same number of people dividing up the same amount of dollars but with increasing number of proposals per PI?

    Also, no one has noted DEB’s “small grant” option:

    “The Division welcomes proposals for Small Grants to the core programs via this solicitation. Projects intending total budgets of $150,000 or less should be identified as such with the designation “SG:” as a prefix to the project title. These awards are intended to support full-fledged research projects that simply require smaller budgets. Small Grant projects will be assessed based on the same merit review criteria as all other proposals.”

    Maybe this is a backdoor attempt to Canadize the system?

    • “what I’d really like to know is the success rate per PI, not per grant. That is, are there more people fighting for a constant amount of dollars leading to an increasing “unfunded rate”, or are there roughly the same number of people dividing up the same amount of dollars but with increasing number of proposals per PI?”

      Good question, to which I don’t know the answer, though I assume NSF must. Though even if funding rate per person is constant, or at least not dropping as fast as success rate per application, there’s still the problem that people are spending more and more of their time writing (and reviewing) more and more proposals in order to keep their personal funding rates constant.

      In Canada, I believe the number of individuals applying for NSERC Discovery Grants has been trending upward, though I’d have to go back and look at the data to check. I think some of that is because increasing numbers of people outside academia in Canada are seeking adjunct status at Canadian universities so that they’re eligible to apply for NSERC grants. And that upward trend in the number of applicants is one contributor to declining success rates in Canada.

    • Thanks for highlighting the small grant option. I don’t have a lot of details on how this works. The conventional wisdom I usually hear is: if it’s the same amount of work to submit and the same odds of funding, why ask for a smaller amount? To really make this appealing, NSF would need to make the funding rate somewhat higher for SGs.

      I don’t have stats either on per-PI funding. I do think there is a rat race (or tragedy of the commons, as one tweeter put it) where every PI tries to get a leg up by submitting more grants than their colleagues, which drives down accept rates, which causes everybody to have to submit more, which sets a new standard number of submissions, which somebody then tries to exceed, and so on.

      But I also think there is immense pressure to apply at places where it didn’t used to exist (Research Low, Masters-only, even 4-year schools), and on every individual within R1s who didn’t always apply, such that the number of PIs chasing fixed dollars is also going up very quickly. Indirects and tuition are about the only ways left for universities and colleges in the US to chase dollars amidst other declines (e.g. state dollars, endowment performance), and there is increasing pushback on tuition increases. Presidents and deans on down are communicating the importance of indirects loud and clear. This can’t help but cause more PIs to dive into the game.

      • I think DEB’s two-proposal limit was implemented to prevent the number of proposals per PI from skyrocketing, which is reasonable if you ask me.

      • I agree – I have no problem with a 2-per-year-per-PI limit on the core programs – it will cut down on the tragedy-of-the-commons problem I mentioned. One could go even further (indeed, it appears to be a popular option according to the poll) and say something like no more than one award every 5 years, which would cut things down a lot more. It still leaves the problem of the growing number of people applying.

  14. Another excellent topic, thanks folks.

    The growing number of people applying for grants results from the growing number of departments offering PhDs. When I did my grad school apps way back when, I applied to several schools that were strong masters-only programs. Every one of the masters-only schools I applied to now has a PhD program.

    IMO, this is the result in part of an immense oversupply of PhDs. Schools that once focused on teaching wind up hiring top-notch researchers because the market is oversupplied and such people are readily available. After 10-20 yrs, they have a core of maybe 5-6 people who want to increase their research capabilities. The schools are good with that – more grants, more overhead – and now even more PhDs.

    I do think NSF and the science community in general have some responsibility to curtail the supply of PhDs, even if they don’t try specifically to match supply with demand. Jeremy’s idea of having more full-time lab techs and support staff is the way to go. How to get there is not so clear.

    The PhD oversupply isn’t good for anyone. Smart and motivated people are working into their best earning years at subsistence wages on career paths that are dead ends. Ultimately, lower wages mean lower taxes, which means less funding for education in general – and there you have at least a small part of the reason why the research grant pie isn’t growing with the population and the size of the economy.

    Cheers folks.

  15. Pingback: Links 5/21/14 | Mike the Mad Biologist

  16. Pingback: Frogs jump? researcher consensus on solutions for NSF declining accept rates | Dynamic Ecology

  17. Pingback: On the differences between natural resource and biology departments | Dynamic Ecology

  18. Pingback: Poll: What is your risk/reward preference in science funding? | Small Pond Science

  19. Pingback: Not an April Fool’s joke: PI success rates at NSF are not dropping (much) | Dynamic Ecology

  20. Pingback: Ask us anything: what are the most common mistakes in grant proposals? | Dynamic Ecology
