How to revolutionize a scientific field in five not-so-easy steps

Just finished reading this very interesting 1971 address to the American Economic Association by Harry Johnson. He asks what determined the speed of the Keynesian revolution in economics, and of the subsequent monetarist counter-revolution. He suggests that a revolutionary theory has the following five characteristics:

  1. It has to attack–and ideally reverse–the central theoretical proposition of the prevailing orthodoxy. Ideally, the motivation for the attack should be the inability of orthodox theory to explain some important empirical data.
  2. It has to be new, yet also incorporate as much as possible of the less-disputable bits of the prevailing orthodoxy. Pulling off this apparently-contradictory trick generally requires putting old wine in new bottles: renaming established concepts without admitting that you’re doing so. It can also involve shifts in emphasis, for instance by stressing the importance and non-obviousness of points that were previously considered unimportant and obvious.
  3. It has to be too difficult for senior people to bother to understand, and somewhat-difficult-but-not-too-difficult for junior people to master. This gives junior people a way to work around the conservatism of senior leaders in the field, a reward for doing so, and a sense of belonging to a shared intellectual project.
  4. It has to offer some low-hanging fruit, especially to empirically-oriented researchers (as opposed to theoreticians). Ideally, it will give the junior people who master it a straightforward, “crank the handle” methodology for producing publications.
  5. It has to pick out a new, measurable empirical pattern or relationship: a stylized fact that empiricists can target for further investigation. This goes hand in hand with #4.

This story seems to fit the potted history of 20th-century macroeconomics pretty well, though of course I’m no expert. So here’s my question: does it fit any revolutions in ecology? And does the lack of any of the 5 attributes on this little list explain the failure of any attempted revolutions in ecology?

Just off the top of my head, I’m not sure ecology has had any revolutions that fit this scheme perfectly. Maybe that’s because, to have a revolution, there has to be an orthodoxy to revolt against? Can a scientific field in which there is no prevailing orthodoxy, or that arguably isn’t even a single discipline at all, be revolutionized?

The MacArthurian revolution starting in the late ’50s does seem like it fits some of the items on this list (see also). I’d say it fits #3-5 fairly well, and arguably #2 as well. Not so sure about #1 though.

The counter-revolution against (what was taken to be) the MacArthurian view in the late ’70s and early ’80s (the “null model wars” et al.) definitely fit #1 and I suppose arguably fit #3-4. Not sure about the others.

The attempted revolution of Hubbell’s neutral theory definitely appeared to have #1, 2, 4, and 5. (Aside: neutral theory is a great case study for how an idea can take off in part by being widely misunderstood). Not sure about #3 though. And the revolution failed once everybody realized that it didn’t actually have #1 and #4.

What do you think? Have there been revolutions in ecology, and if so, have they fit this 5-part template? Looking forward to your comments, as always.

The “always have two papers in review” rule of thumb (UPDATED x2)

UPDATE 2: This post seems to be attracting a fair bit of attention on Twitter, so: greetings reader who has perhaps never read this blog before, but who probably saw the title on Twitter and is perhaps already kind of upset with both the post and me. Welcome! I’m adding this second update to address some misunderstandings you may have because you’ve only read the post title.

  • Attempting to follow the rule of thumb stated in the post title has helped me, personally, achieve my goal of publishing high quality papers I can be proud of, at a rate (about 3-ish papers/year on average) that satisfies my employer and peers, and that is commensurate with me not working extremely long hours and having a life outside of work. But as the post states multiple times, YOUR MILEAGE MAY VARY. I welcome others saying “this rule of thumb wouldn’t work for me”, whether because they follow some other rule of thumb or no rule at all. That’s how we all learn about the diversity of practices that work for the diversity of people in the world. Thank you to the many folks who’ve indicated that they follow this rule, and to the many other folks who’ve indicated that they follow some other rule or no rule at all and who recognize that different approaches work for different people. But if you are incredulous or otherwise upset that anyone would attempt to follow the rule of thumb in the post title, despite me having said “your mileage may vary”, well, I’m honestly unsure what else I’m supposed to say in reply besides “your mileage may vary”. So: “your mileage may vary”.
  • You may think that anybody who follows the rule of thumb in the post title must publish some massive number of papers per year. If so, I’m afraid you’re incorrect. Again, I only publish 3 papers/year on average. The reasons for that, even though I aim to have 2 in review at any one time, are (i) I count revisions and collaborative papers for purposes of the rule of thumb in the post title, (ii) papers in my field often take several months in review, (iii) papers in my field almost invariably go through at least one round of revision that often requires additional months of review, and (iv) I’m not always in conformity with the rule of thumb in the post title. (If you’d like to see that arithmetic spelled out, there’s a little back-of-the-envelope sketch after this list.) If (ii)-(iii) sound really different from how review works in your field, well, they probably are! Different fields are different (I’m an ecologist). A rule of thumb that works for some people in one field might work for few or no people in some other field where journals operate differently.
  • The fact that I only publish an average of 3-ish papers/year explains why I can afford to follow the publication practices I do despite having only one grant for $26K/year (not unusual for Canadian ecologists) and an average of only two grad students, with no postdocs or technicians. No, my lab is not a paper production factory with a dozen grad students and several postdocs all chained to the benches, funded by a million-dollar grant.
  • If, based on the previous bullets, you’ve decided that this post isn’t relevant to you and you’d rather not bother reading the rest, that’s fine (obviously!). Thanks for stopping by.
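
For anyone who wants to see how “2 papers in review at any one time” can shake out to only ~3 published papers/year, here’s a minimal back-of-the-envelope sketch. The review and revision durations in it are illustrative assumptions, not exact figures from my own papers; plug in numbers for your own field and the answer will change.

```python
# Rough sketch of the arithmetic behind "2 in review at a time -> ~3 papers/year".
# All of the month counts below are illustrative assumptions, not exact figures.

papers_in_review_at_once = 2   # the rule of thumb (papers "in progress" at any moment)
months_per_review_round = 4    # assumed length of one round of review
revision_rounds = 1            # assumed: at least one revision is typical
months_spent_revising = 2      # assumed time spent revising between rounds

# Total time a typical paper spends in the review pipeline:
# initial review + time revising + re-review of the revision.
months_in_pipeline = (months_per_review_round * (1 + revision_rounds)
                      + months_spent_revising * revision_rounds)

# Little's law: throughput = (items in progress) / (time each item spends in the system).
papers_per_year = papers_in_review_at_once * 12 / months_in_pipeline

print(f"~{months_in_pipeline} months per paper in the pipeline "
      f"-> ~{papers_per_year:.1f} papers/year")
# With these assumed numbers: ~10 months per paper -> ~2.4 papers/year,
# which lands in the same ballpark as the ~3 papers/year figure above.
```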

Continue reading

Bonus Friday link: NSF’s Bio Directorate removes proposal submission limit for 2019

For the US folks, NSF’s Bio Directorate had an important announcement yesterday, removing the limit on the number of proposals someone can submit as PI or co-PI in 2019. Here’s part of the announcement:

Having listened to community concern and tracked the current low rate of submission, and following extensive internal consultation, BIO is lifting all PI or co-PI restrictions on proposal submission for FY 2019, effective immediately.

BIO recognizes that it is important to track the effects of the no-deadline policy on proposal submission patterns, to ensure that a high-quality review process is sustained. Therefore, we are seeking approval from the Biological Sciences Advisory Committee to establish a subcommittee to assist in developing the evidence base for any future policy changes that may be needed.

I think this is great news! And I completely agree with Mike Kaspari’s reaction to it.

So you got an email inviting you to apply for a tenure-track ecology faculty position. How should you interpret it?

So you got an email inviting you to apply for a tenure-track ecology faculty position. Perhaps from the search committee chair, perhaps from someone on the search committee, perhaps from someone in the hiring department. How should you interpret it? In particular, does it mean you’re a shoo-in to get an interview?

A similar question could be asked about responses to informal inquiries to the search committee chair. Say you email the search committee chair with your cv, asking whether you fit the position or whether your application would be competitive. The search committee chair replies that yes, it looks like you fit the job ad, please do apply (or words to that effect). How should you interpret that?

Unusually for me, this isn’t an ecology faculty job market question that I can address with data. So what follows is just me speaking from my own admittedly-anecdotal-but-not-inconsiderable experience, and from what I’ve learned from speaking with more experienced colleagues (who aren’t responsible for anything I say). Hopefully commenters will chime in.

The goal here is just to share a bit of information about one narrow aspect of the ecology faculty job market. The purpose is descriptive, not prescriptive; I’m not here to judge the practice of inviting people to apply for faculty positions.

If you are on the faculty job market, I can’t promise this information will make you happy (sorry!). I would never presume to tell anyone how to feel about being on the very competitive ecology faculty job market.

Continue reading

Blind spots in scientific discovery

An interesting remark I came across: to learn how technological innovation happens, study the people who nearly produced some major innovation, but failed because of some blind spot that seems obvious in retrospect. One example from the link is the person who invented sound recording on a wax cylinder decades before Edison. The inventor had a blind spot: not considering playback, instead viewing recording as a form of stenography.

I’m now wondering if this applies to science. What scientific insights or discoveries were almost made by someone other than the person(s) who made them, except for a blind spot that prevented full, correct development of the insight or discovery?

I’d suggest Darwin’s theory of the origin of species. The Origin is tremendously successful at explaining the origin of adaptation, but its explanation of the origin of new species is infamously hard to pin down. Following James Costa (and I hope I haven’t misunderstood him!), I think that’s because Darwin had a blind spot: his “success breeds success” mental model of selection (to borrow Costa’s phrase). Darwin imagined that, when better variants arise, they eventually sweep to fixation everywhere that they can spread to. That mental model prevented him from quite recognizing modern notions like frequency-dependent selection, and caused him to underestimate the extent to which selection favors different variants in different places. So instead of hitting on modern ideas about how selection can drive speciation (Schluter 2000, Kassen 2014), Darwin ended up treating the production of diversity as itself a heritable trait that selection might favor, thereby promoting speciation (that’s Darwin’s “Principle of Divergence”). There are circumstances in which something like Darwin’s idea can work, for instance when there’s selection for bet hedging (Beaumont et al. 2009). But in general, it’s not a correct picture of the origin of species.

What other scientific discoveries or insights were prevented by some crucial blind spot? See here for a couple of possible examples.

When should scientists cite the work of sexual harassers?

Thanks to the #MeToo movement, prominent men (and a few women) in many walks of life are being held accountable for sexual harassment and bullying (good!). Academic science is no exception; think for instance of evolutionary biologist Francisco Ayala’s recent resignation from UC Irvine following a university investigation finding him guilty of serial sexual harassment.

Which raises the question of what forms accountability should take. Most obviously, there are official sanctions imposed by employers, the courts, and other institutions, such as being fired, rescinding of awards and honors, and legal sanctions. But in this post, I want to focus on one form of “unofficial” sanction that could be imposed by individuals: not citing the work of sexual harassers and others guilty of bad behavior.

There’s a lot of debate in the humanities right now over whether or when to cite the work of sexual harassers or others who’ve behaved badly (note that I link to that only for its summary of the debate; I disagree with some of the author’s opinions on the debate). In the sciences, I kind of feel like the issues are fairly straightforward, although perhaps that just shows I haven’t reflected on them sufficiently carefully. So I’m going to think out loud here. Basically, I think it comes down to the purpose of the citation:

Continue reading

What academics can learn from business III: good meeting culture

This is the third in a series of posts on things I think academia would do well to learn from business (see also how many business hats an academic wears and business advice books). When I left the business world and went back to graduate school in 1997, there were many things I liked better about academic culture. But one thing jumped out at me immediately as badly flawed in academic culture: meetings. Everything about them – when they’re held, why they’re held, how they’re held. To be sure, a good meeting is a combination of artful guidance by its leaders and participants, and a bit of luck. But there are some clear rules of thumb that help.

Continue reading