Tell me again what “risky” or “potentially transformative” research is? (UPDATED)

Everybody complains that scientific funding agencies are too risk-averse (e.g., here and here). That they prefer low-risk, low-reward, incremental research to riskier, potentially transformative projects.

I really ought to have an opinion about this. After all, I’m an experienced academic researcher and like all academic researchers I have a professional interest in government research funding policies. But to my embarrassment, I have no opinion. Because in all honesty, I have no idea what “risky” means in this context. And so I have no idea why “riskier” projects should have either higher expected payoffs, or higher odds of “transformative” payoffs.

But at least I’m in good company in puzzlement. Because by their own admission funding agencies don’t seem to know what “risky” or “transformative” mean either!

Ok, that’s hyperbole. For instance, NSF’s FAQ about “potentially transformative research” does define transformative research. But then it goes on to shoot down (or at least heavily hedge) every way I can think of identifying potentially transformative research at the proposal stage. Potentially transformative research might be high risk, high reward–but “risk” isn’t defined. It might be interdisciplinary research–but not necessarily. It might challenge conventional wisdom–but not necessarily. It might involve development of a new tool or technique–but not necessarily. All of which is fair enough, I guess; this isn’t the sort of thing about which one should expect mathematical precision. But if the idea is that reviewers and funding decision-makers will know potentially transformative research when they see it,* well, shouldn’t it be possible to say at least a bit more about what it typically looks like?

When a priori definitions fail us, a standard strategy is to point to examples instead. Let’s figure out what our implicit, difficult-to-articulate definition of “potentially transformative” or “high risk, high reward” research is, by compiling examples of research on which we’ve slapped those labels. Maybe some of you can do that (and I hope you will in the comments!). But I can’t, at least not very well.

For instance, when I think of “high risk, high reward” experiments, I think of crazy shots in the dark that we have no reason to think will work, but that would be amazing if they did. Tabletop cold fusion, for instance. Or an experiment former colleagues of mine once did to see if fruit flies incorporate DNA from their food into their own DNA. But somehow I doubt that “more funding for crazy shots in the dark” is what most people have in mind when they criticize funding agencies for funding only incremental research.**

So what do people have in mind when they call for more risky, potentially transformative research? Well, I bet most people would say Rich Lenski’s Long-Term Evolution Experiment has been transformative. But that’s hindsight. Was the LTEE “high risk” or “low risk” back when it was first proposed? I once told Rich Lenski that I could see arguing either way! To which Rich responded in two ways. First, he said (more or less) “it was some of both”. Second, he denied the premise of my question by characterizing the experiment in other ways: as “unusually abstract, open-ended, and nontraditional” rather than “risky” or “safe”.

On reflection, I think my uncertainty about all this has a few sources:

  • Diversity of risks. There are many different sorts of risk to which a proposed research project might be subject. The risk that the PI won’t be able to collect all the proposed samples in the time available is one sort of risk. The risk that the thing that the PI proposes to look for just doesn’t exist is a second, quite different sort of risk. The risk that the PI won’t be able to invent the new equipment/mathematics/whatever needed to make progress is a third different kind of risk. The risk that a storm/fire/drought/war will destroy the proposed experiment is a fourth different kind of risk. And on and on. So just talking about “high risk” research as if “risk” were a single unidimensional thing seems unhelpful. Better to talk about what particular kinds of risks we aren’t taking enough of.
  • Intermediate optima. Ok, the optimal amount of some risks is obviously zero or pretty low. You want to minimize the risk that you’ll lose all your data in an accident (although even there, the only way to reduce the risk of losing your data to literally zero would be to not collect any data at all!). But is there any risk for which the optimal level is “as high as possible”? I don’t think so. Which means that we’re looking for intermediate optima. Which makes matters difficult. How can you make the case that you are, say, groping around optimally far beyond the boundaries of what’s known for certain? That you’re working at the interface of an optimal number of different disciplines–not too few, not too many? Etc.
  • “Risk” is often the wrong word. I like Rich Lenski’s characterization of the LTEE as “open ended” and “abstract”. I find that characterization much more helpful than trying to place the LTEE along a continuum from low risk to high risk. I even find it more helpful than characterizing the LTEE as low risk in some respects and high risk in others. My admittedly anecdotal sense is that we’ve all gotten so used to complaining about how “risk averse” funding agencies are that we’ve started using “low risk” as a catch-all term for everything we (think we) don’t like about work that gets funded (i.e. somebody else’s work rather than ours!).
  • Lots of risky and/or potentially transformative research isn’t best funded through project grants. Complaining that project-based grant programs don’t fund sufficiently risky or potentially transformative research sometimes feels to me like complaining that your coffee maker can’t also drive you to work. You need different tools for different jobs. “Transform the discipline” often isn’t the sort of thing one can propose to do in a project grant. For instance, if you want physicists to explore a diverse range of approaches to unifying quantum mechanics and general relativity, it seems to me that the Perimeter Institute has the right idea. Don’t give short-term project grants. Rather, give long-term support to creative smart people and then get out of the way. As another example, Andrew Wiles wasn’t supported by a project grant when he proved Fermat’s Last Theorem.
  • UPDATE: further to the previous bullet, a correspondent points me to Gravem et al. (2017), who surveyed ecologists who’ve conducted research that was recognized in retrospect as being transformative. Even the people who do transformative research usually begin with incremental goals. Gravem et al. also make the subtler point that transformative research often isn’t recognized as transformative until long after the fact. Which suggests that what makes it “transformative” has little or nothing to do with the research itself, and everything to do with the subsequent development of the discipline. Maybe all research is potentially transformative, and actually transformative research can’t be recognized in advance because it only becomes transformative later. For instance, it took many years for Bob Paine’s keystone predation work to come to be regarded as transformative. And Joe Connell’s 1961 field experiment on competition, which eventually came to be regarded as a pioneering exemplar of that approach, is buried deep in a paper that’s mostly not about competition and is otherwise forgotten today. One can easily imagine that if you “reran the tape of life”, Connell (1961) wouldn’t end up being regarded as transformational at all. Over on Earth 2 maybe some other random bit from some other ordinary paper from 1961 has since come to be regarded as a pioneering exemplar to be emulated by others. This suggests that asking applicants or evaluators to try to identify potentially transformative work is useless. Or maybe even worse than useless, if it makes applicants over-sell their work. As an aside, Gravem et al. is a very nice paper and I’m embarrassed I didn’t know about it when I wrote this post. It would’ve been better to just write a post plugging Gravem et al., since their paper covers the same ground much better than this post does.

What do you think? What is “risky” or “potentially transformative” research to you? What are some of the best ecological examples you’ve seen? Do the funding agencies with which you’re familiar support sufficiently risky or potentially transformative research? If not, how could funding agencies do better?

*I tried and failed to work in a Potter Stewart reference here. Why yes, I do like showing off my superficial general knowledge, why do you ask?

**Although maybe they should? It’s not always easy to tell the difference between fringey nonsense and really transformative ideas.

10 thoughts on “Tell me again what “risky” or “potentially transformative” research is? (UPDATED)”

  1. Jeremy – I’m not sure how much your questions pertain to, say, NERC or NSERC vs. just US funding agencies.

    But in the US, and specifically in response to NSF, here are my thoughts.
    1) There are two components of risk that are important: a) inability/failure to execute/complete the work proposed, and b) failure to get scientific publishable results. (a) is obviously worth managing to some degree but in the extreme it blocks new researchers as well as researchers who are changing research areas. But panels talk about it a lot. I’m not sure there is a deep sense that we’re seriously miscalibrating on (a).
    2) I think the topic everybody is thinking about is (b) – you do the research and the results are a confusing mess and not many publications come out. As funding rates continue to hover in the single digits, panels are extremely averse to this kind of risk. And I think this is not good and why everybody is talking about risk. While you’re probably right that intermediate levels of risk are optimal on this dimension too, I think the number of people talking about it is indicative that we are way below the optimum – i.e. too risk-averse. Lenski’s “open-ended and non-traditional” is equivalent to risky on this dimension. They may be better words than risk, but it is what everybody means when they talk about risk. To put it in different language, here’s my translation of what I think is the current calibration of risk: “is there a >90% chance that 3 good papers will come out of the research?”
    3) As somebody who had 14 straight grant proposals rejected, the vast majority of which boiled down to “too ambitious, outcome not certain” I am certainly one of those people who thinks we are there. Everybody in the US knows you have to have your results half in hand as “preliminary” results before you apply for the grant to do the research. I’ve never been willing to play that game and it’s not a particularly relevant game for theoretical and data-driven research – by the time you have preliminary results your time is better spent writing a paper than a grant. To be blunt, having to have 90% certainty about the outcome clips the wings of a lot of potentially good science.
    4) I think to encourage panels to take more risk, NSF talks about “high risk, high reward” and “transformative” research, but (a) that hasn’t made panels take risks, and (b) as you correctly note they may be on separate dimensions and not trade-off, and (c) nobody knows a priori what is high reward or transformative. Grant panels aren’t even very good at predicting mundane outcomes like papers and citations.
    5) To my mind the research is pretty clear that giving moderate size grants based on past researcher track-record* is far more successful and productive than giving big grants based on projected success of a particular project. In short, pretty close to what Canada does. It seems to still be a minority opinion in the US and even UK and Europe. This approach makes all these discussions of risk and transformation go away.
    6) Leaving funding models aside, it would seem to me that granting agencies should be focusing on research that is pushing the boundaries of the frontiers without trying to get into predicting whether the outcomes will be good or bad. To me the criterion is “if successful, will we learn a lot”. Or to return to my percentage language, I think the sweet spot is funding research with a 50% probability of a good outcome – it moves us a little closer to the frontiers and getting unexpected results that advance science. But again I am in a minority.
    7) So net, net, while I take your basic points, I do think under the current system in the US (especially at NSF, and to a large degree also in Europe and the UK) a discussion of whether panels are too conservative in the amount of risk they’re willing to tolerate is a valid and important conversation to have. You can pick on the poor choice of the word “risk” but it doesn’t invalidate the need for the conversation.

    *With appropriate mechanisms for new researchers to get into the stream – it’s not hard – Canada does it well.

    • Thank you for this Brian. Your #2 is clarifying to me, as someone who’s been out of the US for a long time and never in it as a PI. I now have a much better sense of what people mean when they complain that NSF or NIH is too risk-averse. Based on your comments, it does sound to me like NSF panels are typically too averse to the specific risk that you identify.

      I might only add that it’s not entirely clear to me that we’d get any more “transformative” research if NSF panels were less risk-averse in the sense you describe. In your remarks, you speak positively of two models. The NSERC Discovery Grant model is “just give some money to anyone with a decent track record and a sensible long-term plan, and let “transformative” results emerge where they will.” The “modified NSF” model is “try to pick the best project grants without worrying so much about preliminary data that almost guarantees a good outcome”. Those are two different models–which would you pick if forced to choose? Personally, I’d pick the NSERC model, but not because I think it’s more likely to produce “transformative” results. I’m not sure if the rate at which transformative results are produced is all that sensitive to the funding agency’s funding model.

      If I were making the case for your modified NSF model over the current NSF model, I don’t know that I’d claim the former will lead to more field-transforming breakthroughs. I might rather argue that the modified NSF model would encourage PIs to be more ambitious, and that it would better promote creative exploration of new ideas and approaches rather than refinement of existing ideas and approaches. I think one can make the case that we want PIs to be ambitious and creative, even if we aren’t confident that more ambitious and creative PIs would produce more field-transforming breakthroughs.

      But these are tentative thoughts…

      • I think we’re on the same page about transformative results. They are highly unpredictable (and probably depend on a lot of factors unrelated to the science, including the mood of the community at the time, the scientist’s flair for communication, etc.). Just look at the differing receptions of Caswell, Bell and Hubbell’s neutral theory. And I’m not sure I’m even a believer in the basic premise of the word – that science proceeds best in big leaps. So yeah, let’s just take transformative science off the table as a goal.

        Novel, risky, bold (all words Lakatos used by the way) are the goals I’m looking for and would like to see funded. The bottom line is if we’re not learning new stuff and throwing away dated old stuff, what the heck are we doing as scientists? And specifically if we know the outcome with a good degree of certainty, should that be our priority to fund? And as you noted there is probably such a thing as too novel, risky, bold. But moving in the direction of more of those than currently would be my main goal.

      • The different receptions of Caswell’s, Bell’s, and Hubbell’s neutral theories is an excellent illustration of how stochasticity, presentation, and contextual factors determine whether some bit of science becomes “transformative” or not.

      • And as far as NSERC vs a more risky NSF, I think the data available matches my intuition that funding based on research record is a better bet than funding projects (even with the dial for risk turned up).

  2. That Gravem et al. paper is a really good read. Nearly all the scientists determined post hoc to have done transformative work didn’t know it until the analysis stage or later. And the perception of the work as transformative by others built over time after publication.

    What I took out of that paper is that transformative work happens when you’re doing incremental work and either: a) get a surprising result, or b) are exposed by chance to a previously unconnected idea while your brain is immersed in your current incremental work, which leads you to think about that idea in biased but productive ways.

  3. Every ecological research project is risky & potentially transformative, but maybe I’m biased. I have no experience with the North American system and most Australian funding bodies have more focus on on-ground benefits (none that I know of have an FAQ about transformative research). I think there’s a difference between risky research that is based on (i) unproven ideas without ecological justification vs. (ii) unproven ideas with ecological justification. Also, is there an interaction between success of a so-called risky grant and the career stage of the applicant?

  4. Risky = anything that takes more than a year to produce a paper.
    Without that, your “productivity” will be criticised. That’s perhaps the most important reason for the reproducibility crisis.

  5. There at least were specific grant schemes to target transformative work:
    EAGER and CREATIV in the USA. My impression (which could be completely wrong) is that they had higher acceptance rates and were undersubscribed compared to typical NSF schemes. If this was the case, I think the reason is that it’s really hard to write a grant proposal for this type of work [where preliminary results or a body of past research is lacking]. So there may have been, understandably, fewer applications.
