When I apply for an NSERC Discovery Grant, 1/3 of my evaluation score is based on my scientific productivity over the previous six years. NSERC calls this “excellence of the researcher”. Reviewers look at the quality, impact, and importance of my papers and other contributions to science. Many funding bodies do something similar, though the details vary.
NSERC instructs reviewers not to treat funding level as an indicator of excellence. You’re not supposed to infer that someone who has lots of funding is doing great science, or that someone who has little or none is doing weak science. But of course, funding is correlated with scientific productivity. Not perfectly correlated, or even linearly correlated, but correlated. That’s the whole point of giving scientists money: so that they can produce more and better science! Which is exactly what any scientist will do, if given more money.
So here’s my question: when evaluating “excellence of the researcher”, should reviewers evaluate excellence relative to the amount of funding the researcher had? So that researchers who’ve been very productive—but also very well funded—would be evaluated less well than they would be if reviewers were just asked “how productive has the applicant been?”
I think there’s a strong argument that grant applicants should be evaluated relative to their previous funding level, although NSERC doesn’t provide any instruction to reviewers one way or the other.* Indeed, I know of at least one person who does this when reviewing NSERC Discovery Grants. But I don’t know how common it is. And it’s kind of difficult to do, for various reasons. For instance, how do you allow for differences in cost among different research approaches? How do you allow for the fact that probably every researcher’s productivity is some nonlinear, decelerating function of their funding, making it likely that researchers with less funding will be more productive on a per-dollar basis than researchers with more? And how do you allow for the fact that the height and shape of those nonlinear functions presumably varies among applicants? Although presumably such difficulties are mitigated in a system like NSERC’s, in which reviewers only make fairly coarse distinctions among applicants (scoring their “excellence” on a 6-point scale).
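To make the per-dollar point concrete, here’s a toy sketch. The square-root curve is purely my own assumption for illustration (NSERC specifies no such model), but any decelerating curve gives the same qualitative result: the less-funded researcher looks worse in total output but better per dollar.

```python
import math

def output(funding_k):
    """Hypothetical research output under a decelerating
    (square-root) returns-to-funding curve. Units are arbitrary."""
    return math.sqrt(funding_k)

# Compare a modestly funded vs. a well-funded researcher
# (annual funding in $k; numbers are made up for illustration).
for funding in (25, 100):
    per_dollar = output(funding) / funding
    print(f"${funding}k/yr -> {output(funding):.1f} output units, "
          f"{per_dollar:.2f} units per $k")
```

Here the researcher with $100k/yr produces twice the total output of the one with $25k/yr, but only half as much per dollar. Which of those two comparisons reviewers ought to make is exactly the question at issue.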
What do you think? As a conversation starter, here’s a little poll:
Looking forward to your comments.
UPDATE: I forgot it was a US holiday when I put this post up yesterday. #amateurhour So we didn’t get many votes. But of the 78 votes we got, 56% agreed that grant applicants should be judged relative to their previous funding level, 35% disagreed, and 9% weren’t sure. So based on this admittedly small and non-random sample, there’s a lot of disagreement on this issue!
*Do other funding agencies provide explicit instructions on this?
p.s. I’m intentionally not getting into the issue of whether funding agencies should look at applicants’ track records at all, or whether they should make use of that information in some different way than NSERC does (say, only asking if the applicant has the background and experience needed to carry out the proposed research). Those are interesting questions, but I’m setting them aside for purposes of this post.