What are the biggest puzzles in ecology?

A scientific puzzle is anything that’s true without any obvious reason for it to be true, especially if there’s some obvious reason for it not to be true. Resolving scientific puzzles often leads to deep and important insights.

Evolutionary biology is full of puzzles, most of which have the form “Evolution by natural selection should produce X, yet we see Y. How come?” Examples include the surprisingly high frequency of sterile males, individuals that help unrelated individuals reproduce, and senescence. Resolving the puzzle usually involves figuring out why trait or behavior Y actually is adaptive despite appearances to the contrary, as with individuals that help non-relatives reproduce. Or else why natural selection can’t purge it, as with senescence. Other classic puzzles in evolutionary biology include how complex adaptations like the human eye could evolve via small, piecemeal steps, and why there are usually only two sexes.

What are the biggest puzzles in ecology? Does ecology have as many puzzles as evolutionary biology? And if not, does that indicate a failing of ecology?


Imposter syndrome and cognitive distortions: some thoughts and poorly drawn cartoons

I’ve been thinking a lot about imposter syndrome lately – both because of feeling impostery myself, and because of seeing others who are feeling impostery. I find it helpful to realize how common it is for people to feel like imposters – sometimes I think that pretty much everyone is using the “fake it ‘til you make it” strategy. But it’s also disheartening when I realize that people who I think are fantastic scientists, teachers, and/or communicators also feel like frauds.

There are three flavors of imposter syndrome in particular that I’ve been thinking about. I wanted to write a post on them, but surprisingly (to me, at least) I could only picture them in cartoon form. I suspect part of the reason for that is the influence of this really great cartoon on filtering out the positive and focusing on the negative. So, here are three poorly drawn cartoons on the topic. I feel a little silly sharing them (yes, of course I’m feeling impostery about a post on imposter syndrome!), but here goes:


A (crude) statistical profile of the research productivity of recently-hired N. American ecology asst. profs

In the course of my study of the gender balance of recently hired tenure-track asst. professors of ecology (and allied fields) in N. America, I also compiled data on the Google Scholar h indices of the new hires. I did the same last year, for a haphazard selection of the new hires; those data are summarized briefly here. Here’s a summary of the combined dataset of all 218 recent hires who have Google Scholar pages, along with a few comments.

I’ll emphasize right up front that the h index is an extremely crude summary measure of research productivity. Perhaps its biggest limitation is giving individuals the same credit for being sole author and being one middle author among many. For this reason and others, I highly doubt that most faculty searches actually involve looking at applicants’ h indices, though some searches might look at other things that are loosely correlated with applicants’ h indices (e.g., whether the applicant has papers in leading journals). My only goal in this post is to give a very rough sense of what level of research productivity is required to be competitive for a tenure-track faculty position in ecology at different sorts of N. American institutions.
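
To see just how crude the h index is, it helps to note how it’s computed: it’s the largest h such that the author has h papers each cited at least h times. A sole-authored paper and one middle-author paper among dozens contribute identically. A minimal sketch (the citation counts here are made up for illustration):

```python
def h_index(citations):
    """Return the h index: the largest h such that the author has
    h papers each cited at least h times. Authorship position is
    ignored entirely, which is the crudeness complained about above."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six papers with these citation counts give an h index of 4:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```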

Note as well that many recent hires don’t have Google Scholar pages. That’s especially true for recent hires at smaller and less research-intensive institutions. People without Google Scholar pages likely tend to have lower h indices than people with Google Scholar pages. And as you’ll see, recent hires at less research-intensive institutions tend to have lower h indices than recent hires at more research-intensive institutions. So my data likely are an upwardly-biased estimate of the typical h indices of recent ecology hires.


Ask Us Anything: how to get invited to review manuscripts, and who writes the best reviews

A while back we invited you to ask us anything. Here are our answers to our next two questions, from Pavel Dodonov and Dave Fryxell.

Pavel asked: I like reviewing; how can I get more invitations to review manuscripts?

Dave asked: Who writes the best or most positive reviews in ecology? Grad students? Postdocs? Senior faculty? Nonacademics?


A novel check on causal inference methods: test ridiculous causal hypotheses (UPDATED)

Just ran across an interesting paper from international relations (Chaudoin et al. 2016), with potential application to ecology.* It’s about the problem of “selection on unobservables”, also known as the problem of shared causes. For instance, you can’t tell whether joining an international human rights treaty causes countries to respect human rights, because some possibly-unobserved causal factor that drives compliance with the treaty might also drive the initial decision to join. Countries that join the treaty might simply be those that would’ve respected human rights anyway. I’m sure you can think of analogous scenarios in ecology. Various methods have been proposed to deal with this and allow causal inferences from observational data (e.g., matched observations, statistical control using covariates, structural equation models, instrumental variables). But do those methods work in practice?
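
To make the shared-cause problem concrete, here’s a hypothetical toy simulation (the variable names and numbers are all invented for illustration, not from Chaudoin et al.): an unobserved factor drives both treaty membership and the outcome, so a naive comparison of joiners vs. non-joiners finds a treaty “effect” even though the true effect is zero by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Unobserved confounder ("rule of law") drives BOTH treaty membership
# and human-rights respect.
rule_of_law = rng.normal(size=n)
join_prob = 1 / (1 + np.exp(-2 * rule_of_law))
joins = rng.random(n) < join_prob
rights = rule_of_law + rng.normal(size=n)  # true treaty effect is zero

# A naive comparison attributes the confounder's effect to the treaty.
naive_effect = rights[joins].mean() - rights[~joins].mean()
print(f"naive treaty 'effect': {naive_effect:.2f} (true effect is 0)")
```

Conditioning on `rule_of_law` would recover the true (zero) effect here; the trouble in real observational studies is that the confounder is unobserved.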

The linked paper takes an interesting approach to answering that question: it uses a standard causal inference method to estimate whether joining the World Trade Organization or the Convention on Trade in Endangered Species has a “causal” effect on variables that nobody thinks are causally affected by international trade or trade in endangered species. For instance, the paper asks if joining CITES causes a country to have a legislature. The authors find that membership in both treaties is estimated to have statistically and substantively “significant” effects on irrelevant variables an alarmingly high fraction of the time. Which suggests that standard methods of causal inference from observational data have alarmingly high false positive rates when applied to real world data (or else that researchers’ hypotheses about what causes what are completely useless).

I think it’d be very interesting to take a similar approach using ecological data and other methods of causal inference. For instance, if you fit structural equation models with some ridiculous causal structure to real ecological data, how often do you find “significant” and “strong” causal effects? And how well do the resulting SEMs fit observed ecological data, relative to the fit of SEMs based on “plausible” causal hypotheses? Has anyone ever done this in ecology? If not, it seems to me like low-hanging fruit well worth picking.**

Off the top of my head, I can think of a few ecology papers in the same spirit. For instance, Petchey et al. (2004) and Wright et al. (2006) tested whether conventional classifications of plant species into “functional groups” (e.g., C3 plants, C4 plants, forbs, etc.) are biologically meaningful. They did this by randomly reshuffling the functional groups into which real species were classified, and then checking whether the resulting ridiculous functional group classifications resulted in a significant relationship between functional diversity and ecosystem function. The answer is yes: randomly classifying species into biologically-meaningless functional groups often results in a “significant” relationship between functional group richness and ecosystem function, even after controlling for effects of species richness. And the relationship often is just as strong as the relationship with “real” functional groups. Which suggests that “real” functional groups aren’t so real after all. Ok, the Petchey et al./Wright et al. approach is slightly different from the one discussed above, in that it uses randomized data on possibly-relevant variables rather than non-randomized data on obviously-irrelevant variables. But the spirit is the same.
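
The reshuffling idea is easy to sketch in code. Here’s a toy version (the data, group counts, and slope-based test statistic are my own made-up stand-ins, not Petchey et al.’s actual analysis): randomly permute the species-to-group assignments many times, and see how often a scrambled classification yields a functional-group-richness vs. ecosystem-function relationship as strong as the “real” one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 30 plots, each containing a subset of a 20-species pool.
# Each species gets a (made-up) functional group; each plot has a
# measured ecosystem function (pure noise here, for the sketch).
n_species, n_plots = 20, 30
true_groups = rng.integers(0, 4, size=n_species)  # 4 functional groups
plots = [rng.choice(n_species, size=rng.integers(3, 10), replace=False)
         for _ in range(n_plots)]
ecosystem_function = rng.normal(size=n_plots)

def fg_richness(groups):
    """Number of distinct functional groups present in each plot."""
    return np.array([len({groups[sp] for sp in plot}) for plot in plots])

def slope(x, y):
    """OLS slope of y on x."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

observed = slope(fg_richness(true_groups), ecosystem_function)

# Reshuffle which species belong to which group and re-fit. If the
# scrambled classifications yield slopes as strong as the real one,
# the real grouping carries no special signal.
null_slopes = [slope(fg_richness(rng.permutation(true_groups)),
                     ecosystem_function)
               for _ in range(999)]
p = float(np.mean(np.abs(null_slopes) >= abs(observed)))
print(f"observed slope {observed:.3f}, permutation p = {p:.3f}")
```

With this noise-only response, the “real” grouping is no better than the scrambled ones by construction; the worrying finding in the actual papers was that much the same held for real data.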

UPDATE: In the comments, Sarah Cobey reminds us that she and Ed Baskerville recently used the Chaudoin et al. approach to test a causal inference method known as convergent cross mapping. It failed badly.

I think the same approach could be much more widely used in ecology. Don’t just use causal inference on observational data to detect causes that seem like they might be real. Make sure your approach doesn’t detect causes that definitely should not be real.

*One of the best parts of being me is that I get to type weird sentences like that one.

**And if this is a really stupid idea, hopefully Jim Grace will stop by in the comments and say so. 🙂 One operational definition of “blog” is “place where you can share half-baked ideas, so that people who know better than you can tell you why they’re only half-baked.”

The two ways to keep your mathematical model simple

When you’re building a mathematical model, for any purpose (prediction, understanding, whatever), you have to keep it simple. You can’t literally model everything, any more than you can draw a map as big as the world itself. And if you want to get analytical results, as opposed to merely simulating the model, you have to keep it relatively simple.

There are two ways to keep things simple: leave stuff out, and leave stuff out while implicitly summarizing its effects.
