Also this week: Paul Krugman vs. scientist public intellectuals, David Attenborough vs. raves, Freeman Dyson was right about free parameters and elephants, new open ecology blog, and more.
At lots of different places! For the details, read on.
Some social science fields are quite hierarchical when it comes to faculty hiring. There’s a widely agreed-upon ranking of graduate programs, which have a strong tendency to hire faculty only from programs of similar or higher rank. As we’ve discussed, there are some defensible reasons for that, but probably also bad ones. The same is true of computer science, business, and history: in those three disciplines, the top 10 programs train >70% of all US tenure-track faculty, and only about 10% of US faculty are hired at institutions ranked higher than the one from which they obtained their PhD.
Anecdotally, my impression has always been that academic hiring in ecology (and other life science fields) is much less hierarchical: that the name of the institution from which you received your PhD isn’t considered by search committees, and correlates only loosely or not at all with the many things that search committees do consider. But why rely on anecdotal impressions when you have data?
So I went back to my pretty darn extensive list of people who were hired as tenure-track asst. professors in ecology and allied fields at N. American colleges and universities in 2016-17 (or 2015-16 in a very few cases). I was able to identify where 157 of those newly-hired ecology faculty got their PhDs (having tried to identify every single one). I was interested in the following questions:
- Do the graduates of a few “top” ecology programs comprise a really disproportionate share of newly-hired ecology faculty?
- Do “top” ecology programs exhibit a disproportionate tendency to hire faculty from other “top” programs?
So, I was procrastinating the other day and decided to compile data on where N. American tenure-track asst. professors of ecology (or allied fields like fish & wildlife) hired in 2016-17 got their PhDs. Basically, did they all get their PhDs from UC Davis, or what?* Because you can’t throw a rock over there without hitting an ecology grad student.**
Anyway, before I post the results, I’m curious what you think I found. Take 30 seconds and answer the following questions! I hope you do better than the people who responded to my silly Twitter polls on this topic. 😉
This is not a scientific poll, obviously (especially since I kinda gave away the answers on Twitter…). Just a hopefully-fun conversation starter.
*Just kidding, I didn’t really wonder this.
**Please don’t actually try this. 🙂
As you know, this year I had the privilege of chairing the ASN Jasper Loftus-Hills Young Investigator Awards committee. Every year the ASN gives not just one but four awards to outstanding young researchers doing integrative research in any area of ecology, evolution, behavior, or genetics. The award is in memory of Jasper Loftus-Hills, a promising young scientist who died tragically 3 years after receiving his Ph.D. The YIA has a proud history of going to investigators who go on to become leaders in their fields. This year, we’re pleased to add Rachel Germain, Aaron Comeault, Rachael Bay, and Gijsbert Werner to that illustrious list. Congratulations all!
Below the fold, some comments on the award and the process by which the committee (Luke Harmon, Janneke Hille Ris Lambers, Renee Duckworth, and myself) came to our decision.
Also this week: another social science journal bans p-values, figure aspect ratio vs. your data, JBS Haldane vs. Hannibal Lecter, and more.
Like many scientific fields, ecology has awards to recognize outstanding scientists and their work. Think of the ESA’s various awards: the Buell and Braun awards for best student talks and papers at the ESA Annual Meeting, the Mercer Award for the best ecology paper by a young author, the Eminent Ecologist Award, etc. I recently chaired the ASN’s Jasper Loftus-Hills Young Investigator Awards committee; the ASN gives other awards too. There’s the Crafoord Prize, the closest thing ecology has to the Nobel Prize. Various awards given by the BES. And so on.
The vast majority of these awards are for individuals. Even awards that can go to collaborative work, such as the Mercer Award, mostly go to individuals or small groups, not to large working groups or multi-lab collaborations. And although the people receiving individual awards often have participated in collaborative projects, you can’t win an individual award unless you’ve done a critical mass of work that’s primarily “yours”.
Which arguably is an increasingly large lacuna in the awards available to ecologists. Do we need a new award specifically for collaborative work? Especially collaborations that are so large and complex that it would seem like a distortion to single out an individual or small number of individuals to receive the credit that really ought to go to the group as a whole. Like how it would seem silly (at least to me) to give a Nobel Prize for the Large Hadron Collider’s discovery of the Higgs boson to a single individual, given that the LHC employs something like 4000 people. Or how it would seem silly to give any one employee of a large corporation (even the CEO) full credit for the corporation’s profits.
To be clear, I suggest this not because I think ecology lacks sufficient incentive for collaborative work. I think the incentives to engage in collaborative work are just fine, as evidenced by how much of it there is these days. Plus, I don’t think ecological awards function as incentives for ecologists to do work they wouldn’t otherwise have done. Nobody does their work any differently in an attempt to win the Mercer Award or the Crafoord Prize or whatever. I just think it’s nice to recognize outstanding ecology and the ecologists who do it. These days, that includes outstanding work being done by large, complex collaborations.
What do you think?
p.s. Many readers don’t notice who wrote which post. I predict that this post will be widely misattributed to Brian. 🙂
According to a text mining analysis of the papers ecologists publish, the number of p-values per paper increased about 10-fold from 1970 to 2010. Where 0 p-values were sufficient to get a paper published in 1930, about 1 p-value per paper was typical in 1970, and about 10 p-values per paper are typical in the 2010s (Low-Decarie et al. 2014, Figure 2). Our science must now be at least 10 times as rigorous! The only thing standing in the way of the p-value juggernaut is AIC, which has been gaining ground at the expense of p-value growth. I’ve already shared my opinion that AIC appeals to ecologists for some not-so-good reasons. Here I want to argue that we’ve gotten into some pretty sloppy thinking about p-values as well. Continue reading
Also this week: the story behind Schluter & McPhail 1992, the current state of play in science blogging, and more.
Academics generate a lot of intellectual property (IP for short). Arguably it is the main thing we do aside from teaching. And the IP landscape is changing rapidly both in and out of academia. This is yet-another-thing academics are supposed to be excellent at without any formal training. I don’t have extensive training, but I spent 10 years working in the software world and often was the lead business person working with lawyers to negotiate software contracts. So I have thought about these topics and how they are evolving. They seem to be evolving in some directions that don’t make sense to me. So I thought I would write a brief guide to the issues and raise some of the concerns I have. Continue reading
As scientists, we often judge research on two criteria: how good was the question (interesting? important? etc.), and how convincing was the answer?
But often, other criteria creep in. For instance (and this is just one example), the cleverness, elegance, or creativity of the methods.
We all have our favorite examples. Meghan highlighted Jasmine Crumsey and others before her who’ve used medical CT scanners to reconstruct earthworm burrow systems. As Meghan wrote, how cool is that? 🙂 She’s also highlighted the amusing example of using vibrators to mimic buzz pollination. I like the recent example of researchers who figured out that dung beetles navigate by the Milky Way by putting dung beetles in a planetarium. It strikes me as not just an effective test of the hypothesis, but a very creative test. It never would’ve occurred to me to do that! And when I talk to undergrads about my research to recruit them to work in my lab, they’re often really intrigued by the idea of growing protists in jars to test general ecological principles. “How did anyone ever think of doing that?! That’s really neat!” is a common reaction.
There’s usually no harm, and plenty of enjoyment, in appreciating the cleverness, elegance, and creativity of somebody’s methods–as long as it doesn’t color our judgements about their effectiveness.
For instance, in economics, instrumental variables have become a standard method for inferring causality from observational data. The basic idea is to exploit what ecologists would call “natural experiments”: naturally occurring exogenous variation that shifts the driver variable of interest independently of any confounding factors. The method is popular in part because it’s seen as rigorous–but also in part because it’s seen as clever. I only know what I read on economics blogs, so take what I’m about to say with a whole salt shaker’s worth of salt. But anecdotally, it’s common to see economists praised for particularly ingenious choices of instrumental variable. Using the Colombian colonial royal road network to estimate the causal effect of government on economic development, for instance. How did anyone ever think of doing that? That’s really clever! But an idea doesn’t necessarily work just because it’s clever. In fact, empirical economics is having a mini-crisis right now because of a major review paper showing quite convincingly that, in practice, instrumental variables in economics are usually worse than less clever, less “rigorous” approaches. It looks to me like many economists cared too much about the cleverness of the instrumental variable, and not enough about the quality of the resulting inferences.
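If you’re curious what the instrumental-variables logic actually looks like, here’s a tiny simulation sketching it. This is purely illustrative: the variable names and the simulated “natural experiment” are my own inventions, not taken from any of the studies mentioned above.

```python
import numpy as np

# Toy illustration of instrumental variables (IV) estimation.
rng = np.random.default_rng(42)
n = 100_000
beta = 2.0                        # true causal effect of x on y

u = rng.normal(size=n)            # unobserved confounder (affects both x and y)
z = rng.normal(size=n)            # instrument: shifts x, no direct effect on y
x = z + u + rng.normal(size=n)    # driver variable, contaminated by the confounder
y = beta * x + u + rng.normal(size=n)

# Naive regression of y on x is biased, because x is correlated with u:
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# The IV estimator uses only the variation in x that comes from z:
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true effect: {beta}, OLS estimate: {ols:.2f}, IV estimate: {iv:.2f}")
```

The catch, of course, is the assumption baked into the simulation: the instrument must have no direct path to the outcome. When that “exclusion restriction” fails in real data, the clever IV estimate can be worse than the naive one, which is essentially the problem the review paper documents.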
What do you think? How should we value the creativity, elegance, or cleverness of a paper’s methods, particularly in relation to other desiderata?