The reward structure of science varies a lot across different fields and countries. It’s worth thinking about how different reward structures affect the behavior of scientists and thus the direction of science as a whole.
For instance, in the past I’ve discussed whether it’s suboptimal for funding agencies looking to maximize bang for the buck to give big grants to only a few people and nothing to most people. Conversely, Brian’s noted that, if you think of scientific productivity as reflecting your ability to jump a series of hurdles, then you expect a small fraction of scientists to be much more productive than the others. Arguably, those productive few should be rewarded accordingly. Or think of the possibility that the scarcity of tenure-track jobs, grants, and space in leading journals, relative to the demand for them, creates incentives for scientists to behave in ways that are bad for science as a whole (e.g., incentives to oversell or even falsify one’s results).
This new (unreviewed) preprint drove home to me just how much variation in reward structure there is among fields. It uses comparative analysis of citation patterns and other data to show that economics is much more hierarchically organized than other social science fields. By “hierarchically organized”, I mean that faculty and students associated with a relatively small number of top economics departments tend to publish a disproportionately large fraction of papers in top journals, tend to get hired as assistant professors, etc. In contrast, I bet if you did similar analyses of ecology, you’d find it less hierarchically organized than economics.* (Note that this preprint also covers other issues that aren’t relevant here.)
Your mileage may vary on whether you see strongly-hierarchical systems that give big rewards to small numbers of people as a good thing or a bad thing for a field as a whole. Economists Claudia Sahm and Paul Krugman have interesting comments on how the strong hierarchical organization of economics can be both good and bad for economics. For instance, Sahm suggests that a strongly hierarchical reward structure might reward people who have mastered the intellectual “status quo”. Which is a good thing insofar as the status quo is good. But it makes it hard for the field to cultivate alternative viewpoints as a hedge against the status quo being seriously flawed.
There’s at least a bit of modeling work on what sorts of reward structures promote the most rapid progress of science as a whole. For instance, Strevens (2003) argues that, under some seemingly-mild assumptions, a “winner take all” system in which all the rewards go to the first person to solve a scientific problem maximizes the probability that the problem will be solved, in part because it creates incentives for scientists collectively to diversify the range of approaches they take to solving the problem. See here (section 5.2) for a good discussion. Strevens argues that this explains why scientists care so much about priority, about rewarding the first person to discover something. Of course, the model considers a deliberately simplified and artificial situation, but I still found it interesting: it undermines the widespread intuition that “winner take all” reward structures incentivize intellectual conservatism over risk-taking. And I do think Strevens might be on to something here. For instance, his model resonates with David Hembry’s discussion of choosing between working in a system in which lots of other people are working, and the riskier but possibly more rewarding choice of working in a less popular system (further discussion here). It would be interesting to try to extend this sort of model to more realistic situations and see how the optimal reward structure changes (if it does), and how a “winner take all” reward system shapes the conduct of science. Indeed, Michael Strevens himself seems to be doing that, and maybe others are too (it’s not my field, so I have no idea).
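Strevens’ own model is more sophisticated than this, but the core intuition can be sketched in a few lines (this is my toy construction with made-up numbers, not his actual model; `p/k` crudely stands in for the chance of being first among k competitors):

```python
# Toy sketch of the winner-take-all intuition (not Strevens' actual model).
# Two approaches to a problem; each either works or doesn't, and if an
# approach works, anyone pursuing it eventually solves the problem.

P = {"A": 0.6, "B": 0.4}  # assumed probability each approach can work

def p_solved(alloc):
    """Chance at least one approach that somebody pursues works."""
    prob_all_fail = 1.0
    for approach, n in alloc.items():
        if n > 0:
            prob_all_fail *= 1 - P[approach]
    return 1 - prob_all_fail

def expected_prize(approach, n_on_it):
    """Winner-take-all payoff: the approach works AND you get there first
    (crudely modeled as an equal chance among the n scientists on it)."""
    return P[approach] / n_on_it

# If everyone piles onto the more promising approach A, the problem is
# solved only if A works:
print(round(p_solved({"A": 10, "B": 0}), 2))  # 0.6

# But under winner-take-all, any one of those 10 scientists does better by
# defecting to the empty approach B:
print(round(expected_prize("A", 10), 2))  # 0.06
print(round(expected_prize("B", 1), 2))   # 0.4

# The resulting spread of effort raises the collective odds of a solution:
print(round(p_solved({"A": 6, "B": 4}), 2))  # 0.76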
Of course, the reward structure that’s best for science as a whole might be less than desirable from the perspective of some or even many individual scientists. Lots that could be said here, obviously. It’s my anecdotal impression that arguments about lots of aspects of how science works–not just the reward structure–come down to interlinked disagreements about what’s best for science as a whole, vs. what’s best for individual scientists. It’s also my anecdotal impression that what’s best for science as a whole, and what’s best for individual scientists, often gets conflated. That’s unfortunate; I think it’s important to keep those two things separate. For instance, I think that by far the strongest argument for an NSERC-type grant funding system with high success rates but a modest average grant size is the argument that it’s good for science as a whole, for various reasons. That it also makes individual PIs happy is a pleasant side effect, but not an argument for the system. It’s not NSERC’s job to make PIs happy, it’s NSERC’s job to buy as much good science as it can.
Some aspects of the reward structure of science would be much easier to change than others, if we wanted to. For instance, a granting agency that currently gives out only a few big grants can just decide to give out more, smaller grants. In contrast, the distribution of attention paid to pretty much anything people pay attention to is very highly skewed, with a small fraction of stuff garnering a large fraction of the audience’s collective attention. That’s as true for scientific papers and citations as anything else, so it’s hard to see how you could change that even if you wanted to.
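To give a feel for the skew involved, here’s a purely illustrative simulation (the lognormal shape is a common rough description of citation distributions, but the parameter values here are made up):

```python
import random

# Illustrative only: citation counts across papers are often roughly
# lognormally distributed, i.e. highly skewed. Simulate 10,000 hypothetical
# papers and ask what share of all citations the top 10% of papers collect.
random.seed(1)
citations = sorted(
    (random.lognormvariate(1.0, 1.5) for _ in range(10_000)),
    reverse=True,
)
top_decile_share = sum(citations[:1_000]) / sum(citations)
print(round(top_decile_share, 2))  # typically well over half of all citations
```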
I don’t have any answers here. It’s not obvious to me that there’s any single “best” reward structure for all fields in all circumstances. For instance, if you look at various measures of “bang for the buck”, you find that countries with quite differently-structured funding systems are all scientifically productive. Rather, my intuition is that there are trade-offs here, so that different reward structures each have their own strengths and weaknesses, and that particular strengths and particular weaknesses tend to go hand in hand. Further, it’s certainly possible that all existing reward structures fall short of what would be optimal in some hypothetical “ideal” world.
Looking forward to comments, particularly anyone who can link to further data and modeling on this.
*There are some similar analyses of other fields. For instance, in biomedical fields in the US, the majority of new assistant professors did a postdoc with a member of the National Academy of Sciences.
>>a granting agency that currently gives out only a few big grants can just decide to give out more, smaller grants.
While increasing the number of grant awards would certainly help many unfunded scientists, the deeper problem with the current reward structure is that funding is tied to grant proposals. Writing and evaluating such proposals takes a lot of time away from active research, yet there is no indication that the system is optimal.
Ioannidis (2011) Nature, 477, 529–31 suggested an alternative system that funds people not projects. If you are doing good reproducible work, you get funding without any need to write proposals describing your future research.
A proposal-free reward structure would be, in my view, similar to a credit-based system. The more good science you do, the better your “credit” and the more funding you are eligible for. If you do bad science, e.g. publish flashy but irreproducible results, hoard your data, or publish low-quality papers, then your “credit” score is damaged and you are eligible for much less funding.
While implementing such a reward structure is very challenging, I think it would reduce waste and promote true and reproducible science.
NSERC’s Discovery Grant system is something of a hybrid. It funds people by funding their long-term research programs (as opposed to individual projects within those programs). You have to write one 5-page grant every 5 years to get your funding renewed, with a quite high renewal rate (over 60%). 1/3 of the evaluation of your renewal application is based on the quality of your work over the previous 6 years. (1/3 is on the quality of your proposal, and–somewhat controversially–1/3 is on your training of “highly qualified personnel”. Students, technicians, postdocs, etc. That HQP training is weighted so heavily is viewed by some as a sop to politicians who see scientific research as a job training program.). Finally, note that while renewal rates are high, the amount of funding you get can go up or down by a fair bit, depending on your evaluation relative to other applicants.
I like the NSERC system. I think it creates incentives and rewards outstanding work, without creating lots of wasted time writing, rewriting, and reviewing proposals, and without putting all of NSERC’s eggs into too few baskets.
I think if you just reward people based on their track record, you run too much risk of ending up with a Matthew Effect-based system that makes it too hard for junior people to get a foothold, and makes it too easy for established people to rest on their laurels. Put another way, I think your plans for your research program for the next 5 years provide additional useful information to reviewers, above and beyond your track record from the last 6 years.
As for the basis on which to evaluate the track records of PIs, it sounds to me like you want to intermingle different sorts of criteria. NSERC already asks reviewers to evaluate if applicants have done good work that advances the field. But you also list things like whether the applicants are “hoarding” data. If you want applicants to do things like share data in order to improve the functioning of science as a whole, then you need to mandate that as a condition of receiving funding. Many funding agencies already do this, of course.
I think it’s important to notice that in science, “reward” is intimately tied with “ability to do any work at all.” Our rewards are positions, opportunities, and money to keep producing work. If you don’t get rewarded, you quickly lose the resources to compete. It may be true that only rewarding the top few is a good incentive in a situation like a race, where everyone runs – but for us, if you only reward the top few, pretty soon only a few are running. Maybe they’re the best, but it’s hard to know, because so few people are in the race at all.
Our reward system needs to be both incentive and facilitator. We need to provide enough small grants to get new folks into the race, as well as some big ones to reward the best of the best.
“I think it’s important to notice that in science, “reward” is intimately tied with “ability to do any work at all…if you only reward the top few, pretty soon only a few are running.””
Good point. That’s the sort of thing I think you’d look at if you wanted to build on Strevens’ work.
On the other hand, in order to think about that you need to think about all the things that keep people in the race. Your point about small grants to help new folks get into the race is a good one; that’s why NSERC for instance requires new investigators to meet a somewhat lower standard in order to qualify for funding, as compared to people renewing an existing grant. But more broadly, I don’t know of any evidence that, say, low NSF success rates are reducing the total number of scientists. The total number of scientists mostly depends on things like university operating budgets and how they’re allocated. You can certainly argue that giving big grants to a few people is an inefficient allocation of money, but I’m not sure it’s inefficient because in the long term it shrinks the pool of people to whom one could give grants. Then again, you could argue that failure to get grants might not cause lots of profs to leave their jobs, but might degrade their capacity or willingness to do scientific research in other ways.
NSF and other funding agencies certainly can affect the number of graduate students and other trainees through choices of what sort of grants to give and what to allow PIs to spend money on. And like a lot of people I’m distressed by the possibility that far more students may enter graduate school in the hopes of going on in academic science than will ever be able to do so. I’m very lucky to hold my dream job, and I wish everybody who shares that dream could live it too. But I wonder if this isn’t one of those things that’s a problem from the perspective of individual scientists (or prospective scientists), but not from the perspective of science as a whole. From the perspective of science as a whole–how fast we learn how the world works–it might well be a really good thing to have lots and lots of grad students and other trainees. And it might well be that the number and typical skill/talent level of people who want to go to grad school in science isn’t all that sensitive to the prospects of a tenure-track professorship (instead depending on things like macroeconomic conditions, the demand for graduate degrees as a form of “signalling”, etc.). I confess I’m not sure how much NSF or similar national funding agencies can or should do to change the career structure of academic science. Say, to create a world in which there are many fewer grad students, and more long-term soft money positions for research scientists, staff scientists, etc.
Two comments. I would be interested in also seeing how different structures affect who stays in the game. I think toughlittlebirds was getting at this: some structures might *seem* to produce the best science, but because that structure pushes out a biased fraction of scientists, you’re probably not getting the best science. It’s been shown, for example, that successful professional women who are also mothers outperform their male and non-mother female counterparts in specific ways. But there are few of them, because only the most accomplished “superwomen” can manage to climb the ladder while also managing a family (and typically also a household). Meanwhile the system doesn’t push out fathers in the same way, so while you have the “best” women who are parents, they are diluted by lesser performers on the male side. You could say the same about how implicit bias disfavors women, racial and other minorities, and structural bias disfavors involved parents, socioeconomically disadvantaged people, people with chronic health conditions, etc. The system ensures that you will never get a representative set of these groups and so will be missing out on some of the “best” people. I would hypothesize that the more competitive and stochastic the rewards, the more you’ll lose people from underrepresented groups.
Good comments Margaret. I don’t know if there’s any data on whether different sorts of reward structures tend to select for different types of people, and how strong the effect is relative to other effects that might bias the composition of a field. Certainly, economics is both more hierarchical than other social science fields, and is much more male-biased. But that male bias might have other sources. Or you could try cross-country comparisons; NSERC grants are easier to get than, say, NSF grants, and maternity leave policies are much better in Canada than in the US. But I’m not sure if there’s any signal of that in the gender balance of the fields NSERC funds. In saying that, I don’t mean to indicate skepticism about the possibility you raise. I just don’t know.
Yes, fair enough. I think there’s little (if any) data on these things.
Second comment: I’ve been thinking recently about how different science *activities* are rewarded. For example, there’s little reward in producing an awesome dataset that can be used by large numbers of people besides yourself, unless you then capitalize on it by building papers off of it. There’s little reward for producing software tools that help many others do their work. There’s little reward for being good at building cross-disciplinary teams unless you are also good at forging your own self-centered research. There’s little reward for being good at communicating science beyond academia. I think many science activities are very important these days for moving science forward, but aren’t rewarded because the structure is too rooted in the old single-PI, siloed, tech-light way of doing science.
Good question, though I think it’s orthogonal to the question asked in the post. For instance, I can imagine a hypothetical world in which producing a dataset or software used by many others is highly valued, but in which you don’t get any reward unless your dataset or software *is* used by many others. Indeed, I think that hypothetical world is going to come closer to reality in future, as things like software building are increasingly valued, but it remains the case that universities still hire individuals into faculty positions. If you’re hiring individuals, you’re going to want to hire the *best* individuals, as judged by *some* criterion or other. Even if the criteria are, say, “someone who’s good at building teams, and producing data and software others will use”. A world in which everybody really values data sharing and writing open source software and working collaboratively will still be a quite competitive, hierarchical world in which individual scientists are evaluated and ranked (I think).
One way that might change (sort of) is if universities were to quit viewing departments as collections of individual faculty, and started treating them as mini-corporations. There’s a CEO at the top, whose job it is to hire a large team of people who will all bring their own specialist skills to the table and put them to work *as directed* for the good of the team. Personally, I’d be horrified at that change, even if I were the CEO, but your mileage may vary. Perhaps my horror at that model just shows that I’m a siloed single PI (which I am!)
I also think it’s important to distinguish inputs and outputs here. Some of the issues you identify involve undervalued outputs, like datasets and software lots of people use, and I agree that we need to value those outputs. But others, like team-building, sound to me more like inputs. My head of department, and my funding agency, care much more about the science I do (my outputs) than how I do it (my inputs). They don’t care about, say, whether I’m highly collaborative–they only care whether whatever collaborations or solo projects I have lead to good science. In general, I think that’s a good policy. I do not think employers and funding agencies should be in the business of encouraging certain ways of working over others–collaboration for collaboration’s sake, or whatever. Obviously, they shouldn’t be in the business of discouraging them either (e.g., not hiring people who don’t collect their own data on the grounds that if you don’t collect your own data you’re a ‘parasite’ or something.).
I’d also note that just because some of the things on your list aren’t especially valued in *faculty* doesn’t necessarily mean they aren’t valued. In particular, many universities, museums, and other institutions have valued staff whose full-time job is science communication, public education, and outreach. Division of labor can be a great thing. I’m not saying we shouldn’t value outreach by professors at all, of course! But I don’t think any and all science-related activities have to be valued by being made part of the job description of faculty.
I think you can’t design reward systems until you have a clear knowledge of how science progresses. Does science progress more by a few colossally important discoveries, or is it lots of individual bricks building up a wall? The former demands a very concentrated reward system, the latter a very diffuse one (due to the saturating benefits of dollars for an individual researcher). I used to think it was the first modality (it’s much more exciting, and it’s how TV presents it). But the more I hang around, the more I think the bricks/wall metaphor is closer, and that most major discoveries are “in the air”, ready to be pulled down by a lot of people standing on that wall built up brick by brick. Who pulls them down (and then gets credit) is as much about luck as skill.
<< most major discoveries are “in the air” ready to be pulled down by a lot of people standing on that wall built up brick by brick. Who pulls them down (and then gets credit) is as much about luck as skill.
That's a nice, visual description, if a bit idealistic in my view. The problem I see is that institutions, at least in the US, reward PIs not so much for pulling down discoveries "in the air" as for bringing in grants from NIH and NSF. The more $$$ you bring in, the merrier. In fact, some departments routinely publish the total grant amounts their faculty win and advertise those amounts as an indication of quality and merit. It's a perverse system that rewards inputs, as Jeremy calls them, rather than outputs.
Ioannidis offered another analogy – it's like rewarding a painter for winning expensive brushes and paints rather than for the quality of the paintings.
The shift in the US funding system toward rewarding a few faculty with very large amounts – a "winner take all" approach – has only exacerbated the problem with the current reward structure.
I completely agree with you about rewards based on inputs instead of outputs getting out of hand.
I still stand by my opinion that the 1 or 2 people who get the credit for major discoveries are: a) part of a larger set of 5-10 people who could each be argued to deserve credit, b) not that far ahead of a lot of other people, and c) benefiting from less prominent work leading up to the discovery.
All of which makes the winner-take-all approach, be it based on grants or papers, a bit absurd.
@Brian:
Re: credit for major discoveries, Peter Bowler’s Darwin Deleted is relevant here: https://dynamicecology.wordpress.com/2013/10/31/book-review-darwin-deleted-by-peter-j-bowler/
Hmm, I’m not sure that advances via colossally important discoveries demand a concentrated reward system. If you don’t know who might come up with a big discovery, don’t you want to hedge your bets and give anybody who might possibly discover something big some opportunity to do so?
EDIT: My reply was too hasty, I didn’t pay sufficient attention to your point that discoveries that look like big advances, and that we attribute to individuals, are actually small advances that we shouldn’t attribute to the individuals making them because they’re “in the air”. Still mulling over what sort of reward system you’d want, if that’s indeed the way science works.
There’s a column in Science this week on the issues raised in this post (http://www.sciencemag.org/content/346/6215/1422.full). Unfortunately, it’s just a bunch of assertions (some of them contradictory), unbacked by any data or even modeling. Like the claim that grad students are largely unaware of their chances of an academic career (I’m not sure that’s true anymore). Juxtaposed with the claim that many students leave for more practical and lucrative careers (implying that students are only too aware of their chances of an academic career, rather than unaware…). And the claim that a highly hierarchical reward system selects *against* the best people (or at least, selects for them less well than a less-hierarchical reward system would). And the complaint about too many people leaving the academic career path, juxtaposed with the claim that we need to prevent so many people from entering the career path in the first place. (To which: huh? Either we have oversupply of academic scientists, or undersupply. Make up your mind.)
It might well be that the world the authors of that piece want–one in which grants are easier to get, and there are fewer grad students and postdocs and more staff scientists–is indeed one that would both produce more, better science, and have happier scientists. But man, their arguments for it are weak. In fairness, that’s in part because the relevant data don’t exist. But lack of data doesn’t mean you get to just assert whatever you want, or contradict yourself.
Jeremy, I think that the reward system is among the most influential factors shaping the functioning of the scientific community. I enjoyed your post and I agree that it is worth separating what is good for the science and what is good for the individual scientists. However, these two questions are not entirely independent. It is hard to imagine unhealthy and unhappy bees producing a lot of good honey. Unhappy scientists spending much time complaining are unlikely to do the best research that they are capable of.
I think that the reward system should make it hard for PIs to rise to the top by emphasizing quantity of outputs at the expense of their quality. Quantity is very easy to buy with grant money; hire more people and they will produce more outputs. Thus the more uneven the allocation of funding, the more likely it is for the best-funded PIs to excel simply by having large armies of researchers working on the fashionable next-step obvious experiments. Conversely, shrinking the variance of the funding distribution will put the emphasis on excelling through the quality of our conceptual ideas and will enable more people to stay in the game, as pointed out by toughlittlebirds.
Besides keeping “best for science” and “best for individual scientists” separate, I think that “best for science” is something of a black box that needs to be opened up. Many different things are conflated there as well, e.g. the ability to solve large “hot” topics (where large grants, and a race to be first, might be useful), progression of the field as a whole (where smaller grants might be useful), and conceptual leaps (smaller grants useful? -> many lottery tickets), among others. However, these goals don’t need to compete with each other. Another question is whether “best for science” includes what’s best for higher-level education (bachelor + master). If so, this should favor smaller grants, since it is generally thought that higher-level education should have a fairly strong research connection, which is only possible if we have active researchers at all research institutions and not only a few huge, well-funded research groups.
@Tobias:
“I think that “best for science” is something of a black box that needs to be opened up.”
Yup. This gets back to my remarks in the post, and is related to the comments about Peirce below. I think you’re right that we probably want a mixture of different reward systems aimed at different sorts of problems. Which of course then raises the question of the “optimal” mix, though I’m not sure that’s something that can be optimized in any sensible way. It’s like trying to optimize life.
Yeah, it would be nice to get hold of the essay by Peirce, and interesting to read his ideas and solutions from so far back, under a completely different funding situation. What is the long-term project you are referring to? Or is it a secret…?
“What is the long-term project you are referring to? Or is it a secret…?”
It’s a secret, sorry.
I wasn’t aware of this: Charles Sanders Peirce proposed a mathematical model of the optimal allocation of scientific funding in *1876*!
https://afinetheorem.wordpress.com/2012/01/16/note-on-the-theory-of-the-economy-of-research-c-s-peirce-1879/
I’m planning to read more Peirce, and more about him, as background research for a long-term project…
Interesting find about Peirce. From the link you posted: “new research fields where we know very little are particularly worthwhile investments: the gains from increasing our knowledge are exponential in ignorance, whereas the cost is linear.”
I wish the NSF or NIH adhered to the above logic. On the contrary, the granting agencies consider new research fields as risky, which makes it harder to obtain grants in new research areas.
Yes, I want to go read the whole essay. Presumably, Peirce is making some implicit and/or explicit assumptions about how science progresses that differ from those of NSF or NIH.
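Here’s a toy gloss on the quoted logic (my construction, not Peirce’s actual formalism): suppose knowledge in a field grows logarithmically with spending, so the marginal gain of the next dollar is largest where we currently know least, and allocate a budget greedily.

```python
# Toy rendering of "gains exponential in ignorance, cost linear":
# knowledge in field f after spending x is modeled as log(1 + k0 + x),
# where k0 is existing knowledge, so the marginal gain of the next dollar
# is 1 / (1 + k0 + x) -- largest in the most ignorant field.

def allocate(budget, current_knowledge, step=1.0):
    """Greedy dollar-by-dollar allocation to the highest marginal return."""
    spend = {f: 0.0 for f in current_knowledge}

    def marginal(f):
        # derivative of log(1 + k0 + x) with respect to x
        return 1.0 / (1.0 + current_knowledge[f] + spend[f])

    for _ in range(int(budget / step)):
        best = max(spend, key=marginal)
        spend[best] += step
    return spend

# A new field (low existing knowledge) soaks up most of the budget:
print(allocate(100, {"new_field": 1.0, "mature_field": 50.0}))
```

The made-up field names and log-shaped returns are just for illustration, but the greedy logic shows why, under this view, risk-averse agencies that shy away from new fields are leaving the biggest marginal gains on the table.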
If you didn’t notice, the entire essay is readable at google books from the link at the bottom of the blog post.
Peirce – Note on the Theory of the Economy of Research:
http://books.google.se/books?id=ux79s_IhpFYC&pg=PA183#v=onepage&q&f=false