A (crude) statistical profile of the research productivity of recently-hired N. American ecology asst. profs

In the course of my study of the gender balance of recently hired tenure-track asst. professors of ecology (and allied fields) in N. America, I also compiled data on the Google Scholar h indices of the new hires. I did the same last year for a haphazard selection of the new hires; those data are summarized briefly here. Here’s a summary of the combined dataset of all 218 recent hires who have Google Scholar pages, along with a few comments.

I’ll emphasize right up front that the h index is an extremely crude summary measure of research productivity. Perhaps its biggest limitation is giving individuals the same credit for being sole author and being one middle author among many. For this reason and others, I highly doubt that most faculty searches actually involve looking at applicants’ h indices, though some searches might look at other things that are loosely correlated with applicants’ h indices (e.g., whether the applicant has papers in leading journals). My only goal in this post is to give a very rough sense of what level of research productivity is required to be competitive for a tenure-track faculty position in ecology at different sorts of N. American institutions.
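(For readers who haven’t encountered it: a researcher’s h index is the largest number h such that h of their papers have each been cited at least h times.) A minimal sketch of the computation — in Python, with made-up citation counts — shows both how it’s calculated and how much information it throws away:

```python
def h_index(citations):
    """Return the h index: the largest h such that h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    # Paper i (1-indexed, most-cited first) counts toward h only
    # if it has at least i citations.
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two hypothetical authors with the same total citations (30):
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 3, 1, 1, 0]))  # -> 2
```

The two example authors have identical total citations but very different h indices, which is part of why the metric is such a crude summary.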

Note as well that many recent hires don’t have Google Scholar pages. That’s especially true for recent hires at smaller and less research-intensive institutions. People without Google Scholar pages likely tend to have lower h indices than people with Google Scholar pages. And as you’ll see, recent hires at less research-intensive institutions tend to have lower h indices than recent hires at more research-intensive institutions. So my data likely are an upwardly-biased estimate of the typical h indices of recent ecology hires.

Ask Us Anything: how to get invited to review manuscripts, and who writes the best reviews

A while back we invited you to ask us anything. Here are our answers to our next two questions, from Pavel Dodonov and Dave Fryxell.

Pavel asked: I like reviewing; how can I get more invitations to review manuscripts?

Dave asked: Who writes the best or most positive reviews in ecology? Grad students? Postdocs? Senior faculty? Nonacademics?

A novel check on causal inference methods: test ridiculous causal hypotheses (UPDATED)

Just ran across an interesting paper from international relations (Chaudoin et al. 2016), with potential application to ecology.* It’s about the problem of “selection on unobservables”, also known as the problem of shared causes. For instance, you can’t tell that joining an international human rights treaty causes countries to respect human rights, because some possibly-unobserved causal factor that drives compliance with the treaty might also drive the initial decision to join. So that countries that join the treaty are those that would’ve respected human rights anyway. I’m sure you can think of analogous scenarios in ecology. Various methods have been proposed to deal with this and allow causal inferences from observational data (e.g., matched observations, statistical control using covariates, structural equation models, instrumental variables). But do those methods work in practice?

The linked paper takes an interesting approach to answering that question: it uses a standard causal inference method to estimate whether joining the World Trade Organization or the Convention on Trade in Endangered Species has a “causal” effect on variables that nobody thinks are causally affected by international trade or trade in endangered species. For instance, the paper asks whether joining CITES causes a country to have a legislature. The authors find that membership in both treaties is estimated to have statistically and substantively “significant” effects on irrelevant variables an alarmingly high fraction of the time. Which suggests that standard methods of causal inference from observational data have alarmingly high false positive rates when applied to real-world data (or else that researchers’ hypotheses about what causes what are completely useless).
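To make the shared-cause problem, and the placebo logic of the linked paper, concrete, here’s a toy simulation sketch (Python; the setup and numbers are mine, not the paper’s): an unobserved factor drives both “joining the treaty” and an outcome the treaty doesn’t cause, and a naive regression then flags a “causal” effect far more often than the nominal 5% of the time.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def placebo_false_positive_rate(n_sims=1000, n=200, alpha=0.05):
    """Fraction of simulations in which a naive regression finds a
    'significant' effect of a treatment on an outcome it does not
    cause, when an unobserved factor drives both."""
    hits = 0
    for _ in range(n_sims):
        u = rng.normal(size=n)                  # unobserved common cause
        joined = (u + rng.normal(size=n)) > 0   # 'joins treaty' if U is high
        outcome = u + rng.normal(size=n)        # driven by U, NOT by joining
        p = linregress(joined.astype(float), outcome).pvalue
        hits += p < alpha
    return hits / n_sims

# The nominal false positive rate is 0.05; the realized rate is near 1.0.
print(placebo_false_positive_rate())
```

The point of the placebo check is exactly this gap between the nominal and realized false positive rates: a method that “detects” effects of treatments on outcomes they can’t possibly cause shouldn’t be trusted on outcomes they might cause.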

I think it’d be very interesting to take a similar approach using ecological data and other methods of causal inference. For instance, if you fit structural equation models with some ridiculous causal structure to real ecological data, how often do you find “significant” and “strong” causal effects? And how well do the resulting SEMs fit observed ecological data, relative to the fit of SEMs based on “plausible” causal hypotheses? Has anyone ever done this in ecology? If not, it seems to me like low-hanging fruit well worth picking.**

Off the top of my head, I can think of a few ecology papers in the same spirit. For instance, Petchey et al. (2004) and Wright et al. (2006) tested whether conventional classifications of plant species into “functional groups” (e.g., C3 plants, C4 plants, forbs, etc.) are biologically meaningful. They did this by randomly reshuffling the functional groups into which real species were classified, and then checking whether the resulting ridiculous functional group classifications still produced a significant relationship between functional diversity and ecosystem function. The answer is yes: randomly classifying species into biologically-meaningless functional groups often results in a “significant” relationship between functional group richness and ecosystem function, even after controlling for effects of species richness. And the relationship often is just as strong as the relationship with “real” functional groups. Which suggests that “real” functional groups aren’t so real after all. Ok, the Petchey et al./Wright et al. approach is slightly different from the one discussed above, in that it uses randomized data on possibly-relevant variables rather than non-randomized data on obviously-irrelevant variables. But the spirit is the same.
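Here’s a stripped-down sketch of that reshuffling logic (Python; the simulated community data and the simple correlation test are my own simplifications — the actual papers also controlled for species richness, which this toy version doesn’t):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_species, n_groups, n_plots = 40, 8, 60
groups = rng.integers(n_groups, size=n_species)  # the 'real' classification

# Each plot holds a random subset of the species pool; ecosystem function
# here depends only on species richness (plus noise), not on the groups.
plots = [rng.choice(n_species, size=rng.integers(3, 25), replace=False)
         for _ in range(n_plots)]
ecosystem_function = (np.array([len(p) for p in plots])
                      + rng.normal(0, 3, n_plots))

def fg_richness(plots, groups):
    """Number of functional groups represented in each plot."""
    return np.array([len(set(groups[p])) for p in plots])

def significant_fraction(n_shuffles=200, alpha=0.05):
    """How often a randomly reshuffled (biologically meaningless)
    classification still yields a 'significant' functional group
    richness-function relationship."""
    hits = 0
    for _ in range(n_shuffles):
        shuffled = rng.permutation(groups)  # nonsense classification
        _, p = pearsonr(fg_richness(plots, shuffled), ecosystem_function)
        hits += p < alpha
    return hits / n_shuffles

print(significant_fraction())  # fraction of nonsense classifications that "work"
```

Because functional group richness tracks species richness no matter how the labels are assigned, nearly every nonsense classification comes out “significant” in this toy setup — which is why the real tests had to control for species richness.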

UPDATE: In the comments, Sarah Cobey reminds us that she and Ed Baskerville recently used the Chaudoin et al. approach to test a causal inference method known as convergent cross mapping. It failed badly.

I think the same approach could be much more widely used in ecology. Don’t just use causal inference on observational data to detect causes that seem like they might be real. Make sure your approach doesn’t detect causes that definitely should not be real.

*One of the best parts of being me is that I get to type weird sentences like that one.

**And if this is a really stupid idea, hopefully Jim Grace will stop by in the comments and say so. 🙂 One operational definition of “blog” is “place where you can share half-baked ideas, so that people who know better than you can tell you why they’re only half-baked.”

The two ways to keep your mathematical model simple

When you’re building a mathematical model, for any purpose (prediction, understanding, whatever), you have to keep it simple. You can’t literally model everything, any more than you can draw a map as big as the world itself. And if you want to get analytical results, as opposed to merely simulating the model, you have to keep it relatively simple.

There are two ways to keep things simple: leave stuff out, and leave stuff out while implicitly summarizing its effects.

Ask Us Anything: teaching evolutionary ecology, what statistics to learn, and what devices to read on

A while back, we invited you to ask us anything. Here are our answers to our next three questions, from sagitaninta (first two questions) and Matt Ricketts:

  1. Should evolutionary ecology be incorporated into introductory ecology courses, or left for advanced courses?
  2. What statistical tools should ecologists learn in order to avoid “defaulting” to simple, familiar tools like linear regression and t-tests?
  3. Do you read the literature on paper, electronic devices, or both? What tools do you like for this purpose?

What papers should be considered for the 2018 George Mercer Award? Nominate someone! (UPDATED)

The George Mercer Award is given annually by the ESA to an outstanding research paper published in the previous two years (so, 2016 or 2017 for this year’s award) with a lead author age 40 or younger at the time of publication. The age limit is in memory of George Mercer, a promising young ecologist who was killed in WW II.

I love awards like the Mercer Award. It’s great that the ESA recognizes outstanding work being done by up-and-coming ecologists. And thinking about potential nominees is a fun excuse to think about what makes for truly outstanding ecological research today. This would be a great topic for your next lab meeting: ask everyone to suggest a nominee for the Mercer Award and then talk about the suggestions.

I have an old post looking back on past Mercer Award winners to look for common threads (more specific than, you know, “being a great paper”). So have a look at that post, and the list of past winners, if you want help forming a “search image”. Broadly speaking, Mercer Award winning papers tend to be those that powerfully combine multiple lines of evidence (often including both theory and data) to really nail what’s going on in some particular system, but in such a way as to also have much broader implications (e.g.). But there are exceptions, plus there’s no rule that says future winners have to be the same sorts of papers as past winners. In particular, it’s notable that only one review/synthesis/meta-analysis paper has ever won as far as I know. One of these years, surely we’ll see the award go to an outstanding working group paper led by a young author, or to a paper from an outstanding large collaboration like NutNet. Maybe this is the year?

So, what papers do you think should be in the conversation for the Mercer award this year? Here are four just off the top of my head, but I’m sure I’m forgetting a bunch of great papers by young authors, so please add your favorites in the comments. And then follow through and nominate them!

  • LaManna et al. 2017 Science. A rare beast in ecology: a discovery of new and important large-scale patterns in empirical data (a latitudinal gradient in the strength of negative intraspecific density dependence, and in the degree to which density dependence and species abundances are correlated). Further, those patterns suggest an explanation for maybe the most famous pattern in all of ecology: the latitudinal gradient in species richness. There are challenging statistical issues in using observational data to estimate density dependence and so I’m not sure if everyone is yet totally convinced of the results. But this is clearly an important paper even if it’s not the final word.
  • Usinowicz et al. 2017 Nature. Speaking of latitudinal gradients in the strength of coexistence mechanisms: Usinowicz et al. use long-term monitoring data on seed recruitment in 10 forest plots spanning a long latitudinal gradient to estimate the strength of the temporal storage effect in each plot. They find that there’s a latitudinal gradient in the strength of the storage effect; it’s stronger at low latitudes because seed recruitment is more asynchronous there. This weakens interspecific relative to intraspecific competition and so promotes species coexistence. As with LaManna et al., there are challenging statistical issues here (it looks like they’re estimating a lot of parameters), and I haven’t yet dug deep into their data and supplements to satisfy myself that they’re actually estimating the storage effect and that their estimates are accurate. But if this is right it’s a very important result.
  • Hart et al. 2016 Ecology Letters. The Mercer award has never gone to a pure theory paper as far as I know. If it ever has it’s been a long time. Which doesn’t seem quite right. I mean, it often goes to papers that test or apply theory but don’t develop it. So why can’t it also go to papers that develop theory but don’t test or apply it? Hart et al. use simple models to demolish the common intuition that intraspecific variation should generally promote species coexistence by “blurring” interspecific differences in competitive ability. In fact, intraspecific variation usually inhibits species coexistence. This illustrates one of the most important tasks for theoreticians: correcting widespread pre-theoretical intuitions and replacing them with new, better intuitions. Before you read Hart et al. it’s hard to imagine how it could be right. After you read it, it’s hard to imagine how it could be wrong. The main limitation of the paper is its focus on purely ecological effects of intraspecific variation, thereby ignoring, e.g., selection depleting variation over time and eco-evolutionary feedbacks. But every paper has limitations, so nobody should hold it against Hart et al. that they don’t yet have a complete theory of intraspecific variation and coexistence. You can’t do everything in one paper.
  • Weiss-Lehman et al. 2017 Nature Communications. I love this sort of thing: taking full advantage of the power of a model system to do an experiment that teases apart the subtle (but very general) mechanisms underpinning a striking pattern. Weiss-Lehman et al. use an incisive microcosm experiment with flour beetles to explain why populations undergoing range expansion evolve both a higher mean rate of spread and a higher variance in spread rate. Randomly shuffling the spatial locations of individuals within populations without altering population density or demography revealed that spatial evolution is a key driver of the mean and variance of range expansion speed. And they didn’t stop there; they also did phenotypic trait assays to directly test for spatial evolution of movement propensity and demographic parameters. Likely to become a future textbook example of eco-evolutionary dynamics. Last year another great microcosm experiment on the same topic became the first microcosm paper to win the Mercer award (Williams et al. 2016). The Mercer’s never gone to papers on the same topic in consecutive years as far as I can recall (too lazy to check). But that doesn’t take anything away from Weiss-Lehman et al., plus there’s a first time for everything. (UPDATE: and see the comments; the post neglected to mention Ochocki & Miller 2017 Nature Communications, a third outstanding paper on this topic. All three papers were done in loose collaboration.)

Nominations for the Mercer Award and other annual ESA awards are due Oct. 19. Details here for all ESA awards, and further details here that are specific to the Mercer award.

This year our very own Meghan Duffy is chairing the Mercer Award subcommittee. She would love for you to nominate a paper! As we’ve discussed in the past, the Mercer Award subcommittee is not overwhelmed with nominations and takes every nomination very seriously no matter who it’s from. So you should definitely go ahead and nominate a paper, even if you’re a grad student or postdoc. Your nominee might well win! Plus, writing a Mercer award nomination letter is a good way to practice explaining to your colleagues why some bit of science is really great. Which is something you need to be good at if you expect to get grants and publish papers in selective journals.

If you’re not sure how to write a Mercer award nomination letter, here’s what you do: in 1-2 pages, summarize the paper for a broad audience (remember: possibly none of the committee members will be experts on the paper topic), and place it in a broader context to explain why it’s exemplary/novel/interesting/important science. Each of my little blurbs above could be expanded into a nomination letter.

Ask Us Anything: how to move into ecology from another discipline

A while back we invited you to ask us anything. Here are our answers to our next question, from Andrew Krause: what is your advice for those from other disciplines who have an interest in ecology? Particularly those interested in pursuing interdisciplinary work.

A cheap, self-published (but still excellent) ecology textbook: why not?

Note from Jeremy: this is a guest post from Mark Vellend.

************

The textbook I use for my undergraduate class in plant ecology now costs about $150 (it used to cost <$100).  I was alerted to this by the instructor who will be teaching the class for the next couple of years (while I have a fellowship to focus on research), and it immediately got me thinking again about ecology textbooks (see old DE posts here and here).  I have never much liked 500+ page books whose weight (>2kg) immediately doubles the shoulder strain of my backpack.  And $150 is an awful lot to more-or-less force students to pay. No wonder that students often don’t love them either.

To summarize my opinions on the downsides of textbooks, I find them:

  • Too long.
  • Too expensive.
  • Too heavy.
  • Overly packed with details (related to ‘too long’ and noted by several commenters on previous DE posts).
  • Temporally “inflexible” (big lag time between book written and book published; then the book is “set in stone” for at least the next 5 years).

To be clear, this is not a criticism of textbook writers, for whom I have immense respect and admiration (writing a good one is a massive accomplishment and contribution).  And of course textbooks have major upsides.  Textbooks can be:

  • Off-the-shelf course content for time-pinched teachers around the world (although there are typically many parts of any one book I want to teach differently).
  • A reference to consult when you need a refresher on a particular topic.
  • Markers of the state of a field for a given era.
  • Providers of a common template of understanding of what the field is about (i.e., they can help make ecology a coherent discipline), and so shapers of the field itself.

In short, I find big fat traditional textbooks very useful as a teacher, and I can also see great value for advanced (e.g., grad) students.  But undergrads?  I’m not so sure.  (And note that undergrads are where the money is made.)

And so my thoughts proceeded as follows:
