There is a belief that dropping a frog into hot water will cause it to react and immediately jump out, while putting it in a pan of cool water and slowly warming it will cause the frog to never notice until it is boiled. Here in Maine you hear the same debate about how to cook a lobster. Whether the frog myth is true or not is debatable (although the experiment is clearly sadistic). But it has become a common metaphor for failing to notice or respond to small incremental changes which, taken in the aggregate, are terrible (fatal in the case of the frog). We seem to have a bit of the same thing happening with the primary basic science funding agency in the US (the National Science Foundation, or NSF). In this piece I want to a) argue that due to macro trends that are not the fault of NSF, the agency and its researchers are in a frog-boiling scenario, and b) attempt to kick-start an out-of-the-box, big-picture discussion about what should be done about it (akin to the frog realizing it needs to take bold action and jump out of the pot).
But first, let me repeat to be abundantly clear: this is NOT a criticism of NSF. Every single program officer I’ve ever dealt with has been a highly dedicated and helpful professional (not to mention they are also researchers and one of us), and NSF regularly gets rated by government auditors as one of the most efficient and well-run branches of the government. Instead, these trends are being driven by macro forces beyond the control of NSF (or of us researchers). I’m sure NSF is just as aware of and unhappy about these trends as I am. I expect they are also having discussions about what to do. I have not been privy to those discussions and have no idea whether NSF would welcome the discussion I am promoting here, but I feel like this blog, with its tradition of civility and rational thinking, might be a useful forum.
Why researchers at NSF are like frogs being slowly boiled – the macro trends
I am going to focus just on the environmental biology division (DEB), although I don’t think the story differs much anywhere else. I haven’t always been able to obtain the data I would like to have, but I’m pretty confident that the big-picture trends I am about to present are quite accurate even if details are slightly off. The core graph, which I’ve seen in various versions of NSF presentations for a while (including those used to justify the switch to the preproposal process), is this:
This graph confirms what NSF has been saying – the number of proposals submitted keeps going up without any sign of stopping while the number of proposals actually funded is flat (a function of NSF funding being flat – see below). The result is that the success rate (% of proposals funded) is dropping. But adding trend lines and extending them to 2020 is my own contribution. The trend in success rate here is actually an overestimate because the stimulus year of 2009 was left in. According to a naive, straight-line trend, the success rate will reach 0% somewhere between 2019 and 2020! Of course nobody believes it will reach 0%. And the alternative approach of combining the other two trend lines gives roughly 200 proposals funded out of 2000 submitted, or 10%, in 2020. But the trend line is not doing a terrible job; when I plug in the 2013 number from DEB of 7.3%* it is not that far from the trend line (and is already below the 10% number). Nobody knows what the exact number will be, but I think you can make a pretty good case that 7.3% last year was on trend and the trend is going to continue going down. A few percent (2%?) by 2020 seems realistic. All of this is the result of inexorable logic. The core formula here is: TotalBudget$ = NumberProposals * Accept% * GrantSize$
NumberProposals is increasing rapidly. Although data on this are harder to come by, my sense is that GrantSize$ is roughly constant (at least after adjusting for inflation), with a good spread but a median and mode right around $500,000. Maybe there is a saving grace in TotalBudget$? Nope:
NSF funding appears to have had four phases: exponential growth in the early days (1950-1963); flat from 1963-1980; strong growth from 1980 to about 2003; and then close to flat (actually 1.7%/year over inflation) from 2003-2013 (again with a stimulus peak in 2009). Note that the growth periods were both bipartisan (as was the flat period from 1963-1980). Positive growth rates aren’t terrible, and congratulations to NSF for achieving this in the current political climate. But when pitted against the doubling in NumberProposals, it might as well be zero growth for our purposes. It is a mug’s game to try to guess what will happen next, but most close observers of US politics are not looking for big changes in research funding to come out of Congress anytime soon, given that the debate has shifted to a partisan divide over whether to spend money at all and a resignation that the sequester is here to stay (see this editorial in Nature). So I am going to treat TotalBudget$ as a flat line and beyond the control of NSF and researchers.
The number that probably deserves the most attention is NumberProposals. Why is this going up so quickly? I don’t know of hard data on this. There is obviously a self-reinforcing trend – if rejection rates are high, I will submit more grant applications to be sure of getting a grant. But this only explains why the slope accelerates – it is not an explanation for why the initial trend is up. And there is certainly a red-queen effect. But in the end I suspect it is some combination of two factors: 1) the ever-tighter job market (see this for a frightening graph of the ever-widening gap between academic jobs and PhDs), which has led to ever-higher expectations for tenure. To put it bluntly, places that 20 years ago didn’t or couldn’t expect grants from junior faculty for tenure can now impose that expectation because of the competition; and 2) as states bow out of funding their universities (and as private universities are still recovering from the stock crash), indirect money looks increasingly like a path out of financial difficulties. Obviously #1 (the supply of grant-writing faculty) and #2 (the demand for them) reinforce each other.
So to summarize: TotalBudget$ = NumberProposals * Accept% * GrantSize$. TotalBudget$ is more or less flat for the last decade and the foreseeable future. NumberProposals is trending up at a good clip due to exogenous forces for the foreseeable future (barring some limits placed by NSF on the number of proposals). So far GrantSize$ has been constant. This has meant Accept% is the only variable left to counterbalance increasing NumberProposals. But Accept% is going to get ridiculously low in the very near future (if we’re not there already!). Part of the point of this post is that maybe we need to put GrantSize$ and NumberProposals on the table too.
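The arithmetic of that identity can be made concrete with a toy calculation. All of the figures below are illustrative round numbers of my own choosing, not official NSF data:

```python
# Toy illustration of the identity TotalBudget$ = NumberProposals * Accept% * GrantSize$.
# All figures are made-up round numbers, not official NSF data.

def accept_rate(total_budget, n_proposals, grant_size):
    """Acceptance rate implied by a fixed budget and a fixed average grant size."""
    return total_budget / (n_proposals * grant_size)

GRANT_SIZE = 500_000        # roughly the median/mode award discussed above
TOTAL_BUDGET = 100_000_000  # hypothetical flat divisional budget

# With budget and grant size flat, every doubling of submissions halves Accept%:
for n_proposals in (1_000, 2_000, 4_000):
    rate = accept_rate(TOTAL_BUDGET, n_proposals, GRANT_SIZE)
    print(f"{n_proposals:>5} proposals -> {rate:.1%} accepted")
```

In other words, with a flat budget and a constant grant size, Accept% is just TotalBudget$ / (NumberProposals × GrantSize$), so it falls in exact inverse proportion to submissions.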
Some salient facts for a discussion of what to do
In the next section I will list some possible solutions, and hopefully readers will contribute more, but first I want to highlight two very salient results of metaresearch (research about research).
- Review panels are not very good at predicting which proposals will lead to the most successful outcomes. Some claim that review panels are at least good at separating good from bad at a coarse grain, although I am not even convinced of that. But two recent studies showed that panel rankings effectively have no predictive power for variables like number of papers, number of citations, and citations of the best paper! One study was done in the NIH cardiovascular panel and the other in our very own DEB Population and Evolutionary Processes panel, by NSF program officers Sam Scheiner and Lynnette Bouchie. They found that the r2 between panel rank and various outcomes was between 0.01 and 0.10 (1-10% of variance explained) and not significantly different from zero (and got worse when budget size, which was itself an outcome of ranking, was controlled for). UPDATE: as noted by author Sam Scheiner in the comments below, this applies only to the 30% of projects that were funded. Now, traditional bibliometrics are not perfect, but given that they looked at 3 metrics (and impact factor was not one of them), I think the results are pretty robust.
- Research outcomes are sublinear with award size. Production does increase with award size, but the best available (though still not conclusive) evidence, from Fortin and Currie 2013, suggests that there are decreasing returns: a plot of research production vs. award size is an increasing, decelerating curve (e.g. like a Type II functional response). This means giving an extra $100,000 to somebody with $1,000,000 buys less of a productivity increase than giving an extra $100,000 to somebody with $200,000 (or, obviously, to somebody with $0).
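The decreasing-returns point can be sketched with a minimal numerical example. The saturating functional form and the parameters below are my own illustrative assumptions, not Fortin and Currie’s fitted values; only the decelerating shape matters:

```python
# Sketch of decreasing returns to award size using a saturating
# (Michaelis-Menten / Type II style) curve. Parameters are invented
# for illustration; only the qualitative shape is taken from the text.

def production(award_dollars, p_max=20.0, half_sat=300_000):
    """Hypothetical research output (e.g. papers) as a function of award size."""
    return p_max * award_dollars / (half_sat + award_dollars)

EXTRA = 100_000
gain_small = production(200_000 + EXTRA) - production(200_000)      # ~2.0
gain_large = production(1_000_000 + EXTRA) - production(1_000_000)  # ~0.33
print(f"extra output from +$100k at $200k: {gain_small:.2f}")
print(f"extra output from +$100k at $1M:   {gain_large:.2f}")
```

On any curve of this shape, the same marginal $100,000 buys several times more output when added to a small award than to a large one.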
Just to repeat: this is not a criticism of NSF. The exogenous drivers are beyond anybody’s control and simple budgetary math drives the rest. There is no simple or obvious answer. I certainly don’t have the answer. I just want to enumerate possibilities.
- Do nothing – low Accept% is OK – This is the business-as-usual scenario. Don’t make any drastic changes and just let the acceptance rate continue to drop to very close to zero. I actually think this might be the worst choice. Very low acceptance rates greatly increase the amount of randomness involved. They also, ironically, bias the panels to be conservative and select safe (maybe even mediocre) research that won’t waste one of the precious awards, which is not good for the future of science. I recall being part of a discussion on the editorial board of a major journal where we all agreed the optimal acceptance rate was around 25-30%. Anything higher and you’re not selective. Anything lower and you start falling into traps of randomness and excessive caution. I think this is probably about the right number for grants too. Note that we are already at about 1/4 of this rate. I personally don’t consider the current acceptance rate of 7% acceptable, and I cannot imagine anybody considers the rates of 1-2% that we’re headed towards to be acceptable. The other approaches all have problems too, but in my opinion most of them are not as big as this one.
- Drive down NumberProposals via applicant restrictions on career stage – You could only allow associate and full professors to apply on the basis they have the experience to make best use of the money. Alternatively you could only allow assistant professors to apply on the argument they are most cutting edge and most in need of establishing research programs. Arguably there is already a bias towards more senior researchers (although DEB numbers suggest not). But I don’t think this is a viable choice. You cannot tell an entire career stage they cannot get grants.
- Drive down NumberProposals via applicant restrictions based on prior results – A number of studies have shown that nations awarding grants based on the personal record of the researcher do better than nations awarding grants based on projects. You could limit eligibility to those who have been productive in the recent past (15 papers in the last 5 years?). This of course biases against junior scientists, although it places them all on an equal footing and gives them the power to become grant-eligible. It would probably also lop off the pressure from administrators at less research-intensive schools to start dreaming of a slice of the NSF indirect pie (while still allowing individual productive researchers at those institutions to apply).
- Drive down NumberProposals via lottery – Why not let the outcome be driven by random chance? This has the virtue of honesty (see fact #1 above). It also has the virtue of removing the stigma from not having a grant, since people can’t be blamed for losing a lottery. This would especially apply to tenure committees evaluating faculty by whether they have won the current, less acknowledged, NSF lottery.
- Drive down NumberProposals via limitations on the number of awarded grants (“sharing principles”) – You could also say that if you’ve had a grant in the last 5 years, you cannot apply again. This would lead to a more even distribution of funding across researchers.
- Decrease GrantSize$ – The one nobody wants to touch: maybe it’s time to stop giving out average grants of $500,000. Fact #2 strongly argues for this approach. Giving $50,000 to 10 people is almost guaranteed to go further than $500,000 to one person. It gets people over that basic hump of having enough money to get into the field. It doesn’t leave much room for summer salaries (or postdocs – postdoc funding would have to be addressed in a different fashion), but it would rapidly pump the acceptance rate up to reasonable levels and almost certainly buy more total research (and get universities to break their addiction to indirects). Note that this probably wouldn’t work without some other restriction on the number of grants one person can apply for, or everybody will just apply for 10x as many grants, which would waste everybody’s time.
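The ten-small-versus-one-big comparison can be checked against the kind of decreasing-returns curve described in Fact #2. Again, the functional form and parameters are purely illustrative assumptions of my own, not fitted values:

```python
# Sketch: ten $50k awards vs. one $500k award under an illustrative
# saturating production curve (invented parameters; only the
# decelerating shape is taken from the evidence cited in Fact #2).

def production(award_dollars, p_max=20.0, half_sat=300_000):
    """Hypothetical research output as a function of award size."""
    return p_max * award_dollars / (half_sat + award_dollars)

one_big = production(500_000)        # ~12.5 units of output
ten_small = 10 * production(50_000)  # ~28.6 units of output
print(f"one $500k award: {one_big:.1f}")
print(f"ten $50k awards: {ten_small:.1f}")
```

On any decelerating curve the split portfolio wins, because each small award sits on the steep part of the curve while the single big award sits on the flat part; how much it wins by depends on parameters we don't actually know.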
What do you think NSF should do? Vote by choosing up to three choices of how you think NSF should deal with the declining acceptance rates (and feel free to add more ideas in the comments):
I am really curious to see which approach(es) people prefer. I will save my own opinions for a comment after most votes have come in. But I definitely think it is time for the frogs (us) to jump out of the pot and take a different direction!
* Note that 7.3% is across all proposals to DEB. The blog post implies that the rates are lower on the core grants and higher on the non-core grants like OPUS, RCN, etc. They don’t give enough data to figure this out, but if I had to guess the core grants are funded a bit below 5% and the non-core grants are closer to 10%.