On a bad argument for grant lotteries

Nature recently did an interesting news story on the growing trend for scientific funding agencies to hand out grants via lottery, from among the proposals judged to be fundable. I thought the article did a nice job touching on the various arguments for and against grant lotteries. But I was struck by a quote at the very end from economist Margit Osterloh, an advocate of grant (and publication) lotteries:

“If you know you have got a grant or a publication which is selected partly randomly then you will know very well you are not the king of the Universe, which makes you more humble. This is exactly what science needs.”

Ok, this isn’t a big deal. It’s one quote in one article, and based on a skim of some of her papers, it’s not Margit Osterloh’s main reason for favoring grant lotteries. That said, it’s a very puzzling small deal. So, at the admitted risk of talking about something that might be best ignored, I’m going to talk about it a bit.

Perhaps Margit Osterloh moves in very different circles than I do. Because the scientists I know are well aware that getting a grant (or a publication in a highly selective journal like Nature or Science) is already a partly random process, even if they themselves have been quite successful at grant-getting! Indeed, their knowledge that grant-getting already has a random component is one reason why they might favor grant lotteries (“it’s a partially random process anyway; we might as well make the randomness official”).

Further, the scientists I know do not think of themselves as kings of the Universe and they don’t need to be taken down a peg! Indeed, my experience is that it’s more common for scientists, especially junior and trainee scientists, to have too little confidence in their own abilities and in the quality of their own scientific work (think imposter syndrome, for instance). Ok, no doubt there’s a minority of arrogant, entitled scientists in the world. But does anyone really think that moving to grant lotteries will make those scientists less arrogant and entitled?

Finally, do we really want the structure of the scientific funding system to be, um, whatever somebody thinks will cultivate moral virtue in scientists? If you are at all tempted by that line of thinking, then I strongly recommend you read this criticism of that line of thinking in a different context. Government funding cuts to safety net programs often are justified on the bullshit grounds that those affected by the cuts will be forced to become more morally virtuous. Do we really want to encourage governments to apply that same bullshit argument to any aspect of science funding (either the amount of funding, or how the funding is allocated)?

In conclusion, there are certainly good, or at least defensible, arguments for grant lotteries. But “they’ll make scientists better people” is not one of them.

p.s. Osterloh’s co-author is Bruno Frey. I leave it to you to decide whether Bruno Frey’s criticisms of current scientific publishing and grant-giving practices should be discounted because of his professional history.

5 thoughts on “On a bad argument for grant lotteries”

  1. I haven’t read any of the links you link to, so I’m not commenting on them. But I’ve always assumed the best argument for a lottery system is that it’s an accurate description of how grants are already given out. The statistics on the repeatability of grant-proposal scoring, and on the correlation between scores and outcomes, are surprisingly low. The answer is always “yes, but at least we can separate good from bad proposals,” to which I would say I haven’t actually seen data that demonstrates that. I would believe you could trim off the obviously bad 10% from people who aren’t qualified or who didn’t try at all, but as for separating the top 50% from the next 40%, I would say: show me the data.

    If you accept that grant outcomes are really uncorrelated with quality, and effectively lotteries based on factors like faddishness of topic, mainstreamness of topic, composition of the review panel, luck of the reviewer draw, etc. (not to mention various possible implicit biases), then there are two benefits to actually going to a lottery. A) Fairness – the arbitrary factors just mentioned are removed. In particular, grant panels notoriously “play it safe,” so you might actually get more bold proposals funded. B) Judgments that hinge on who did or didn’t get a grant, such as tenure decisions, would have to come more into line with how that factor ought to be weighted.

    • I agree with your list as a summary of the strongest arguments for grant lotteries, though I might quibble a bit with the details (the data I’ve seen suggests that grant review panels can do somewhat better than just trim a small percentage of obviously bad proposals). But that really is a very minor quibble. I’d add C) it’s less work for grant review panels to just identify the fundable proposals and then hold a lottery than it is to rank all of them and pick out a small fraction to fund. One question to ask is how much is lost to investigators, and to grant panelists, if panelists do less work evaluating proposals and so give little or no feedback to applicants, or to one another, on the strengths and weaknesses of individual proposals. How important is feedback from grant review panels to our collective ability to do good science? I don’t know the answer to that.

      Of course, one alternative to a lottery with a few big prizes is to just give most or all of the fundable proposals a bit of money. That’s basically the Canadian NSERC Discovery Grant system (ok, I’m glossing over details, but that’s the gist). Your views on whether the Canadian system is preferable to a lottery probably come down to whether you’d prefer to see a few expensive projects funded, or many cheap projects funded (or many expensive projects partially funded, with investigators expected to chase down the remaining funding elsewhere, if they can). And they probably also come down to the structure of other aspects of the funding ecosystem (e.g., what provision is made for postdoc and grad student funding in a system in which research grants aren’t big enough to pay for a postdoc or a grad student?).

      • Personally I would vote Canadian (many small) first, and lottery 2nd.

        I’m curious what actual data you think supports the notion of picking off bad proposals. I see this repeatedly asserted in papers on grant evaluation, but when I actually read the analyses/data, they don’t address this question. I’m open to seeing contradictory data, but my impression is that it’s mostly just an assertion repeated often enough (in the midst of data on other questions) to be perceived as supported by data.

      • In some of our old linkfests, we’ve linked to data showing that variation in grant review panel scores has some very modest ability to predict various measures of impact/influence. If we take those measures of impact/influence as measures of the “quality” of the proposed science (and yes, there are problems with doing that), then panels have some modest ability to distinguish quality even among the high-quality proposals that get funded. Which I assume means that panels have a somewhat greater ability to distinguish “quality” between funded and unfunded proposals.

        There’s also Charles Fox’s data showing that journal reviews and editor decisions have some ability to predict the subsequent impact of papers. If journal reviewers and editors have some ability to do that, it seems reasonable to think that grant review panels do too.

      • Not at all sure I agree.

        I think the paper you are thinking of is Scheiner & Bouchie (not Fox). It shows that, among funded grants in NSF’s PCE program, the correlation between outcomes and ranking, after adjusting for the size of the grant (and for the effect of rankings on grant size), is r = 0.15, p = 0.35 (i.e., trending in the expected direction but not even close to significant, given the high variability in grant outcomes; see the sketch at the end of this comment). This is the same conclusion as the overall body of literature, in my opinion (e.g., see the opening paragraphs of Lauer et al. 2015, a study of National Heart, Lung, and Blood Institute grants). That summary lists half a dozen grant agencies with the same result, including NSF and NIH. These studies cannot possibly show that untenable grants were differentiated from viable ones (they only looked at funded grants), and they give little general comfort about grant panels’ efficacy.

        Scheiner cites Bornmann et al. 2008 as proof of picking off bad grants, but Bornmann et al. is actually a study of graduate fellowships, which is an entirely different kettle of fish from project-based grants. (And I would suggest it’s evidence that evaluating researcher track record, controlled for career stage, is another, better way of awarding grants than having panels evaluate projects.)

        Having watched this literature carefully and just re-reviewed it quickly, I’m sticking with my claim that there is no data-based evidence that panels have strong ability to discriminate between sucky grants and decent grants (unless you think a non-significant r = 0.15 on funded grants should be extrapolated all the way to the other end of the quality pool). Even if you do want to make that argument, you are likely throwing away a lot of grants that are merely unfashionable, risky, etc. as “sucky” grants.
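        To put a rough number on “not even close to significant,” here is a minimal back-of-the-envelope sketch. It is my own illustration, not an analysis from Scheiner & Bouchie or Lauer et al.; the sample sizes in it are assumptions chosen purely for illustration. It just applies the standard t-test for a Pearson correlation to show how little an r of 0.15 can establish at the scale of a single program’s funded grants.

        ```python
        # Back-of-the-envelope sketch, not taken from any of the papers cited above.
        # Standard significance test for a Pearson correlation r:
        #   t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom.
        # The sample sizes below are assumptions, chosen purely for illustration.
        import numpy as np
        from scipy import stats

        def p_value_for_r(r, n):
            """Two-sided p-value for a Pearson correlation r computed from n grants."""
            t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
            return 2 * stats.t.sf(abs(t), df=n - 2)

        print(p_value_for_r(0.15, 40))   # ~0.36: a sample of ~40 funded grants is consistent with the reported p = 0.35
        print(p_value_for_r(0.15, 175))  # ~0.05: roughly the sample size needed before r = 0.15 would reach significance
        ```

        The point of the sketch is only that a correlation this small, estimated from one program’s funded grants, is statistically fragile, which is why extrapolating it to the rest of the proposal pool is a stretch.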
