Overview of NSERC Discovery Grant competition results (UPDATED)

I’m a bit late to this, but I just had a look at the summary of this year’s NSERC Discovery Grant competition results. Summary and a few comments below the fold. For comparison, a summary of last year’s numbers is here.

tl;dr: Basically everything was the same as last year.

UPDATE: I screwed up and initially published a very rough draft of this post that I wrote late last night. My bad. Please ignore that version and go with the version below, which I revised after sleeping on it.

  • Unfortunately, NSERC only provides data on the Discovery Program budget in nominal, not real (inflation-adjusted) terms. In nominal terms it’s up 7% since 2010-11. I haven’t converted that to real terms, but given cumulative inflation since then, my guess is that it works out to roughly flat funding or a slight real cut (see the back-of-the-envelope sketch after this list).
  • Numbers of applications from early career researchers (roughly, people who’ve never held a DG before), renewers, and experienced researchers not renewing a DG (roughly, people who held one in the past but don’t currently hold one) were all very similar to last year. Longer-term, applications from early career folks are back close to 2010 levels after bottoming out in 2014. Renewal applications have been flat since a drop in 2014. Non-renewal applications from experienced researchers also dropped a touch in 2013-14 and have been flat since.
  • Total number of grants is up slightly from last year, to its highest level in several years, though we’re not talking about big fluctuations here.
  • Average grant size has basically been flat in nominal terms since 2013-14 at about $35K/year.
  • Early career researchers had a success rate of 75% and an average grant size of $26.7K/year. Renewers had a success rate of 82% and an average grant size of $36.4K/year. Experienced non-renewers had a success rate of 37% and an average grant size of $27.8K. Those numbers are all similar to last year, except that the early career researcher success rate was up 10 percentage points. Longer-term, success rates for all three groups are on a slow upward trend since 2010.
  • Success rates for the Evolution and Ecology evaluation group were 67% for ECRs, 87% for renewers, and 36% for experienced non-renewers, with average grant sizes similar to those for DGs as a whole.
  • The distribution of scores across the 16 “quality bins” is almost identical to previous years.
  • Regarding gender balance: at the asst. and associate professor levels, applications from women outnumbered those from men by a non-trivial amount. The reverse was true at the full professor level. A substantial proportion of applicants at all levels did not indicate their gender.
  • Success rates and average award sizes for women, men, and applicants who chose not to indicate their gender were almost identical, as they were last year (I haven’t looked further back). Gender neutrality of DG outcomes shows that the NSERC review system is able to avoid at least some forms of bias.
  • There’s been a lot of online discussion recently of the fact that NSERC DG success rates correlate positively with the size of the institution employing the applicant, and of whether there is bias against applicants from smaller institutions. See Murray et al. 2016 (the PLOS ONE paper that kicked off discussion of this issue), this editorial by Morris et al., and this critique from Alex Usher. I won’t say much about this, having recently expressed some half-baked thoughts on it and not having had a chance to think much further. So I’ll just summarize the main issues for readers who are new to this debate, and point you to the discussion at the links above and to the most relevant data in this year’s NSERC DG report. Much of the online discussion concerns whether you can use the data on how DG applicants from different-sized institutions are scored, without other data, to completely separate:
      (i) bias in the very narrow sense of “if the exact same application had its institutional affiliation swapped, it’d have been scored differently by the panel”;
      (ii) bias in the broader sense of “cumulative effects of past biases (including those operating before the applicant became a faculty member) that show up in the applicant’s track record”, resulting in applicants from different-sized institutions tending to have different track records; and
      (iii) effects of attributes of the applicant and the institution that correlate with institution size and that affect the applicant’s track record and other parts of the application, but for which bias is not the best term (e.g., teaching load, presence/absence of a graduate program, among others).
    Murray et al. try to separate (i)-(iii) by looking at how ECRs from different-sized institutions are scored; that analysis depends on some background assumptions, about which there’s debate in the last two linked pieces. Table 6 and Fig. 9 in the linked NSERC report present the most fine-grained data related to institution size. (The toy simulation after this list illustrates why (i) and (iii) are hard to distinguish from scoring data alone.)
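
As a back-of-the-envelope check on the first bullet, here’s a minimal sketch of the nominal-to-real conversion. Only the 7% nominal growth figure comes from the NSERC summary; the inflation rate and year count are assumed illustrative values, not official figures.

```python
# Back-of-the-envelope nominal-to-real conversion for the DG budget.
# Only the 7% nominal growth comes from the NSERC summary; the
# inflation rate and year count below are ASSUMED illustrative values.

nominal_growth = 0.07      # budget up 7% in nominal terms since 2010-11
annual_inflation = 0.015   # assumption: ~1.5%/yr average Canadian CPI inflation
years = 7                  # assumption: 2010-11 through this year's competition

cumulative_inflation = (1 + annual_inflation) ** years - 1
real_growth = (1 + nominal_growth) / (1 + cumulative_inflation) - 1

print(f"Cumulative inflation: {cumulative_inflation:+.1%}")  # about +11%
print(f"Real budget change:   {real_growth:+.1%}")           # about -3.6%
```

Under those assumptions, the 7% nominal increase works out to a small real cut, consistent with the “roughly flat or a slight cut” guess above.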

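To make the distinction between (i) and (iii) in the last bullet concrete, here’s a purely hypothetical toy simulation. It generates panel scores with zero type-(i) bias by construction (the scoring step has no institution-size term), but includes a type-(iii) attribute, teaching load, that correlates with institution size and affects track record. Scores still end up correlated with institution size. Every parameter value is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # hypothetical applicants

# Institution size on an arbitrary standardized scale.
inst_size = rng.normal(0.0, 1.0, n)

# Type-(iii) attribute: teaching load tends to be heavier at smaller
# institutions (the -0.6 coefficient is invented for illustration).
teaching_load = -0.6 * inst_size + rng.normal(0.0, 1.0, n)

# Track record suffers under a heavier teaching load.
track_record = -0.5 * teaching_load + rng.normal(0.0, 1.0, n)

# Panel score depends ONLY on track record: no direct institution-size
# term, i.e., zero type-(i) bias by construction.
score = track_record + rng.normal(0.0, 0.5, n)

# Yet scores still correlate positively with institution size,
# around r = +0.24 under these made-up parameters.
print(np.corrcoef(inst_size, score)[0, 1])
```

The point isn’t that this is what’s actually happening; it’s that a score-size correlation on its own can’t separate (i) from (ii) or (iii), which is why Murray et al.’s ECR comparison, and the background assumptions behind it, matter.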