Everybody complains that scientific funding agencies are too risk-averse (e.g., here and here): that they prefer low-risk, low-reward, incremental research to riskier, potentially transformative projects.
I really ought to have an opinion about this. After all, I’m an experienced academic researcher and like all academic researchers I have a professional interest in government research funding policies. But to my embarrassment, I have no opinion. Because in all honesty, I have no idea what “risky” means in this context. And so I have no idea why “riskier” projects should have either higher expected payoffs, or higher odds of “transformative” payoffs.
But at least I’m in good company in puzzlement. Because by their own admission funding agencies don’t seem to know what “risky” or “transformative” mean either!
Ok, that’s hyperbole. For instance, NSF’s FAQ about “potentially transformative research” does define transformative research. But then it goes on to shoot down (or at least heavily hedge) every way I can think of to identify potentially transformative research at the proposal stage. Potentially transformative research might be high risk, high reward–but “risk” isn’t defined. It might be interdisciplinary research–but not necessarily. It might challenge conventional wisdom–but not necessarily. It might involve development of a new tool or technique–but not necessarily. All of which is fair enough, I guess; this isn’t the sort of thing about which one should expect mathematical precision. But if the idea is that reviewers and funding decision-makers will know potentially transformative research when they see it,* well, shouldn’t it be possible to say at least a bit more about what it typically looks like?
When a priori definitions fail us, a standard strategy is to point to examples instead. Let’s figure out what our implicit, difficult-to-articulate definition of “potentially transformative” or “high risk, high reward” research is, by compiling examples of research on which we’ve slapped those labels. Maybe some of you can do that (and I hope you will in the comments!). But I can’t, at least not very well.
For instance, when I think of “high risk, high reward” experiments, I think of crazy shots in the dark that we have no reason to think will work, but that would be amazing if they did. Tabletop cold fusion, for instance. Or an experiment former colleagues of mine once did to see if fruit flies incorporate DNA from their food into their own DNA. But somehow I doubt that “more funding for crazy shots in the dark” is what most people have in mind when they criticize funding agencies for funding only incremental research.**
So what do people have in mind when they call for more risky, potentially transformative research? Well, I bet most people would say Rich Lenski’s Long-Term Evolution Experiment has been transformative. But that’s hindsight. Was the LTEE “high risk” or “low risk” back when it was first proposed? I once told Rich Lenski that I could see arguing either way! Rich responded in two ways. First, he said (more or less) “it was some of both”. Second, he denied the premise of my question by characterizing the experiment in other terms: as “unusually abstract, open-ended, and nontraditional” rather than “risky” or “safe”.
On reflection, I think my uncertainty about all this has a few sources:
- Diversity of risks. There are many different sorts of risk to which a proposed research project might be subject. The risk that the PI won’t be able to collect all the proposed samples in the time available is one sort of risk. The risk that the thing the PI proposes to look for just doesn’t exist is a second, quite different sort of risk. The risk that the PI won’t be able to invent the new equipment/mathematics/whatever needed to make progress is a third kind of risk. The risk that a storm/fire/drought/war will destroy the proposed experiment is a fourth. And on and on. So just talking about “high risk” research as if “risk” were a single unidimensional thing seems unhelpful. Better to talk about which particular kinds of risks we aren’t taking enough of.
- Intermediate optima. Ok, the optimal amount of some risks is obviously zero or pretty low. You want to minimize the risk that you’ll lose all your data in an accident (although even there, the only way to reduce the risk of losing your data to literally zero would be to not collect any data at all!). But is there any risk for which the optimal level is “as high as possible”? I don’t think so. Which means that we’re looking for intermediate optima. Which makes matters difficult. How can you make the case that you are, say, groping around optimally far beyond the boundaries of what’s known for certain? That you’re working at the interface of an optimal number of different disciplines–not too few, not too many? Etc.
- “Risk” is often the wrong word. I like Rich Lenski’s characterization of the LTEE as “open ended” and “abstract”. I find that characterization much more helpful than trying to place the LTEE along a continuum from low risk to high risk. I even find it more helpful than characterizing the LTEE as low risk in some respects and high risk in others. My admittedly anecdotal sense is that we’ve all gotten so used to complaining about how “risk averse” funding agencies are that we’ve started using “low risk” as a catch-all term for everything we (think we) don’t like about work that gets funded (i.e. somebody else’s work rather than ours!).
- Lots of risky and/or potentially transformative research isn’t best funded through project grants. Complaining that project-based grant programs don’t fund sufficiently risky or potentially transformative research sometimes feels to me like complaining that your coffee maker can’t also drive you to work. You need different tools for different jobs. “Transform the discipline” often isn’t the sort of thing one can propose to do in a project grant. For instance, if you want physicists to explore a diverse range of approaches to unifying quantum mechanics and general relativity, it seems to me that the Perimeter Institute has the right idea. Don’t give short-term project grants. Rather, give long-term support to creative smart people and then get out of the way. As another example, Andrew Wiles wasn’t supported by a project grant when he proved Fermat’s Last Theorem.
- UPDATE: further to the previous bullet, a correspondent points me to Gravem et al. (2017), who surveyed ecologists who’ve conducted research that was recognized in retrospect as transformative. Even the people who do transformative research usually begin with incremental goals. Gravem et al. also make the subtler point that transformative research often isn’t recognized as transformative until long after the fact. Which suggests that what makes it “transformative” has little or nothing to do with the research itself, and everything to do with the subsequent development of the discipline. Maybe all research is potentially transformative, and actually transformative research can’t be recognized in advance because it only becomes transformative later. For instance, it took many years for Bob Paine’s keystone predation work to come to be regarded as transformative. And Joe Connell’s 1961 field experiment on competition, which eventually came to be regarded as a pioneering exemplar of that approach, is buried deep in a paper that’s mostly not about competition and is otherwise forgotten today. One can easily imagine that if you “reran the tape of life”, Connell (1961) wouldn’t end up being regarded as transformational at all. Over on Earth 2, maybe some other random bit from some other ordinary paper from 1961 has since come to be regarded as a pioneering exemplar to be emulated by others. This suggests that asking applicants or evaluators to try to identify potentially transformative work is useless. Or maybe even worse than useless, if it makes applicants over-sell their work. As an aside, Gravem et al. is a very nice paper and I’m embarrassed I didn’t know about it when I wrote this post. It would’ve been better to just write a post plugging Gravem et al., since their paper covers the same ground much better than this post does.
What do you think? What is “risky” or “potentially transformative” research to you? What are some of the best ecological examples you’ve seen? Do the funding agencies with which you’re familiar support sufficiently risky or potentially transformative research? If not, how could funding agencies do better?
*I tried and failed to work in a Potter Stewart reference here. Why yes, I do like showing off my superficial general knowledge, why do you ask?
**Although maybe they should? It’s not always easy to tell the difference between fringey nonsense and really transformative ideas.