Who should be senior author on papers resulting from collaborations between multiple research groups?

I am pretty much through with revisions to my manuscript on authorship, with one exception. One of the reviewers is (quite reasonably) pushing me to make a stronger recommendation about how authorship decisions should be made in the increasingly common case of collaborations between groups. But, of course, this is a tricky issue, and I’m waffling on what exactly to recommend. This blog post is me trying to work through that, and looking for feedback at the end. I’m quite interested in hearing how others think decisions about authorship should be made when multiple groups collaborate substantially on a project!

I’ll start by recapping some of my results, since they set up the general question. Then I’ll give some of my thoughts on a possible solution. And, as I said above, I’ll end by asking for feedback on what I propose.

Continue reading

Why functional trait ecology needs population ecology

I have an embarrassing confession: I’m just not that into you, trait-based ecology.

Which doesn’t feel like confessing a murder, but does feel like confessing, I dunno, not liking Groundhog Day.* It’s slightly embarrassing. For years now trait-based ecology has been one of the biggest and fastest-growing bandwagons in ecology. Plenty of terrific ecologists whom I really respect are really into it. Which doesn’t mean that I have to be into it too, of course–but which does mean that if I’m not into it, I’d better have a good reason.

Which is a problem, because honestly I’m not sure why I’m not into it. In a field like ecology, where there’s no universal agreement as to what questions are most important to ask or exactly how to go about answering them, I think it becomes more (not less) important that each of us be able to justify our chosen question and approach, in terms that others can appreciate if not necessarily agree with. And also justify not liking any questions or approaches we don’t like. It really bugs me when people object to my own favorite approach for weak reasons that don’t stand up to even casual scrutiny. So I’m embarrassed to admit that there’s lots of trait-based ecology that I just vaguely think of as uninteresting or not likely to go anywhere, even though honestly I don’t know enough about it to really have an informed opinion. It’s embarrassing to not have an informed opinion on what’s probably the most popular current approach to topics that I care a lot about (e.g., species diversity, composition, and coexistence along environmental gradients).

This post is my attempt to do better. I want to think out loud about what I like and don’t like about trait-based ecology. My selfish goal is to clarify my own thinking, and to get comments that will teach me something and help me think better. My less-selfish hope is that buried somewhere within my half-formed thoughts are some useful ideas that trait-based ecology could take on board.

Here’s my plan: I’m going to talk about a body of work in trait-based ecology that I actually do know well and that I do like a lot. Then I’m going to go back to Brian’s old post on where trait-based ecology is at and where it ought to go and see how this body of work stacks up. How do my reasons for liking this particular body of trait-based ecology line up with what an actual trait-based ecologist–Brian–looks for in trait-based ecology?

Continue reading

Have ecologists ever successfully explained deviations from a baseline “null” model?

In an old post I talked about how the falsehood of our models often is a feature, not a bug. One of the many potential uses of false models is as a baseline. You compare the observed data to data predicted by a baseline model that incorporates some factors, processes, or effects known or thought to be important. Any differences imply that there’s something going on in the observed data that isn’t included in your baseline model.* You can then set out to explain those differences.

Ecologists often recommend this approach to one another. For instance (and this is just the first example that occurred to me off the top of my head), one of the arguments for metabolic theory (Brown et al. 2004) is that it provides a baseline model of how metabolic rates and other key parameters scale with body size:

The residual variation can then be measured as departures from these predictions, and the magnitude and direction of these deviations may provide clues to their causes.
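
Concretely, the workflow that quote describes might look something like the following minimal sketch (base R, simulated data; the 3/4-power exponent is the theoretical prediction, but the data, noise level, and fitted intercept here are purely illustrative):

```r
set.seed(1)
log_mass <- runif(100, min = 0, max = 6)             # hypothetical log10 body masses
log_rate <- 0.75 * log_mass + rnorm(100, sd = 0.3)   # simulated log10 metabolic rates

# baseline: fix the slope at the predicted 3/4, estimate only the intercept
intercept     <- mean(log_rate - 0.75 * log_mass)
baseline_pred <- intercept + 0.75 * log_mass
deviations    <- log_rate - baseline_pred

# the deviations, not the fit itself, are the object of further study:
# the next step would be relating them to candidate explanations (temperature, taxon, etc.)
plot(log_mass, deviations,
     xlab = "log10 body mass", ylab = "departure from 3/4-power baseline")
abline(h = 0, lty = 2)
```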

Other examples abound. One of the original arguments for mid-domain effect models was as a baseline model of the distribution of species richness within bounded domains. Only patterns of species richness that differ from those predicted by a mid-domain effect “null” model require any ecological explanation in terms of environmental gradients, or so it was argued. The same argument has been made for neutral theory–we should use neutral theory predictions as a baseline, and focus on explaining any observed deviations from those predictions. Same for MaxEnt. I’m sure many other examples could be given (please share yours in the comments!).

This approach often gets proposed as a sophisticated improvement on treating baseline models like statistical null hypotheses that the data will either reject or fail to reject. Don’t just set out to reject the null hypothesis, it’s said. Instead, use the “null” model as a baseline and explain deviations of the observed data from that baseline.

Which sounds great in theory. But here’s my question: how often do ecologists actually do this in practice? Not merely document deviations of observed data from the predictions of some baseline model (many ecologists have done that), but then go on to explain them? Put another way, when have deviations of observed data from a baseline model ever served as a useful basis for further theoretical and empirical work in ecology? When have they ever given future theoreticians and empiricists a useful “target to shoot at”?

Continue reading

Does any field besides ecology use randomization-based “null” models?

Different fields and subfields of science have different methodological traditions: standard approaches that remain standard because each new generation of students learns them.

Which to some extent is inevitable. Fields of inquiry wouldn’t exist if they had to continuously reinvent themselves from scratch. You can’t literally question everything. Further, tradition is a good thing to the extent that it propagates good practices. But it’s a bad thing to the extent that it propagates bad practices.

Of course, it’s rare that any widespread practice is just flat-out bad. Practices don’t generally become widespread unless there’s some good reason for adopting them. But even widespread practices have “occupational hazards”. Which presumably are difficult to recognize precisely because the practice is widespread. Widespread practices tend to lack critics. Criticisms of widespread practices tend to be ignored or downplayed on the understandable grounds of “nothing’s perfect” and “better the devil you know”.

Here’s one way to help you recognize when a widespread practice within your own field may be ripe for rethinking: look at whether the practice is used in other fields, and if not, what practices those other fields use instead to address the same problem. Knowing how things are done in other fields helps you look at your own field with fresh eyes.
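
For concreteness, here’s a minimal sketch of the kind of randomization “null” model the title has in mind: shuffle the data in a way that preserves some of its structure, recompute a statistic of interest on each shuffled data set, and compare the observed value to the randomized distribution. Everything below (the simulated presence-absence data, the co-occurrence statistic, the shuffling scheme) is made up for illustration; it’s not any particular published algorithm:

```r
set.seed(3)
comm <- matrix(rbinom(20 * 10, 1, 0.3), nrow = 20, ncol = 10)  # 20 sites x 10 species, presence/absence

# test statistic: mean number of co-occurring species pairs per site
cooccur <- function(m) mean(apply(m, 1, function(x) choose(sum(x), 2)))

obs <- cooccur(comm)

# null distribution: shuffle presences within each species (column), preserving species frequencies
null_stats <- replicate(999, cooccur(apply(comm, 2, sample)))

p_value <- mean(c(null_stats, obs) >= obs)   # one-tailed randomization p-value
c(observed = obs, null_mean = mean(null_stats), p = p_value)
```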

Continue reading

There are too many overspecialized R packages

I use R. I like it. I especially like the versatility and convenience it gets from add-on packages. I use R packages to do some fairly nonstandard things like fit vector generalized additive models, and simulate ordinary differential equations and fit them to data.
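
For concreteness, here’s a minimal sketch of the simulation half of that last task. It assumes the deSolve package is installed; the model and parameter values are deliberately simple and illustrative, not from any particular analysis of mine:

```r
library(deSolve)  # general-purpose ODE solver

# an illustrative model: logistic growth
logistic <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dN <- r * N * (1 - N / K)
    list(c(dN))
  })
}

out <- ode(y = c(N = 1), times = seq(0, 50, by = 0.1),
           func = logistic, parms = c(r = 0.2, K = 100))
plot(out)  # deSolve supplies a plot method for the solver output
```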

You can probably tell there’s a “but” coming.

Continue reading

What correct scientific idea hasn’t yet proven fruitful or influential, but will in future?

Scientific ideas can have various virtues. Most obviously, they can be correct. But they can also be clever, surprising, elegant, etc.

One important but difficult-to-pin-down virtue is fruitfulness. A scientific idea is fruitful if it leads to a lot of further research, especially if that research retains long-term value (it wasn’t just a trendy bandwagon or whatever). Fruitfulness overlaps a lot with influence.

Fruitfulness or influence covaries positively with correctness, but not perfectly. It would be nice if the covariance were perfect. It’s unfortunate when an influential idea turns out to be wrong, because the work that grew out of that idea often loses at least some of its value, and because there’s an unavoidable opportunity cost to building on ideas that turn out to be wrong. Andrew Hendry has a compilation of ecological and evolutionary ideas that inspired a lot of research despite being (in Andrew’s view) wrong, or at least not all that important.

In this post I’m interested in the flip side of incorrect-but-influential ideas: ideas that were correct but not influential. Somebody said something true–but nobody else cared. Correct but non-influential ideas are the proverbial tree that falls in the forest with nobody around to hear it.

What are your favorite examples of correct-but-uninfluential ideas in ecology? In all of science?

Continue reading

Has any ecological model based on a loose physical analogy ever worked?

One way, among many others, for a theoretician to develop a mathematical model of one scenario is by analogy with some other scenario that we already know how to model.

The effectiveness of this approach depends in part on how loose the analogy is. At the risk of shameless self-promotion, I’ll highlight a physical analogy that my own work draws on (the analogy isn’t originally mine): dispersal synchronizes spatially-separated predator-prey cycles for the same reason that physical coupling synchronizes physical oscillators. Here’s a standard, and very cool, demonstration involving metronomes sitting on a rolling platform. The analogy between the ecological system and the physical system is actually fairly close, though for reasons that might not be immediately apparent (how come coupling via dispersal works like coupling via a rolling platform?) The closeness of the analogy is why it works so well (Vasseur and Fox 2009, Fox et al. 2011, Noble et al. 2015, and see Strogatz and Stewart 1993 for a non-technical review of coupled oscillators in physics, chemistry, and biology).
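
For readers who’d like to see the analogy in action, here’s a minimal sketch of two predator-prey patches weakly coupled by dispersal (it uses the deSolve package, and the model and parameter values are illustrative, not those from the papers cited above). Started well out of phase, the two patches’ cycles tend to pull into step:

```r
library(deSolve)

two_patch <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    f1 <- a * N1 / (1 + a * h * N1)   # type II functional response, patch 1
    f2 <- a * N2 / (1 + a * h * N2)   # type II functional response, patch 2
    dN1 <- r * N1 * (1 - N1 / K) - f1 * P1 + d * (N2 - N1)
    dN2 <- r * N2 * (1 - N2 / K) - f2 * P2 + d * (N1 - N2)
    dP1 <- e * f1 * P1 - m * P1 + d * (P2 - P1)
    dP2 <- e * f2 * P2 - m * P2 + d * (P1 - P2)
    list(c(dN1, dN2, dP1, dP2))
  })
}

parms <- c(r = 1, K = 10, a = 1, h = 0.5, e = 0.5, m = 0.2, d = 0.005)  # d = dispersal rate
state <- c(N1 = 1, N2 = 6, P1 = 2, P2 = 0.5)   # the two patches start out of phase
out <- ode(y = state, times = seq(0, 1000, by = 1), func = two_patch, parms = parms)

# with weak coupling, the two predator series tend to pull into phase over time
matplot(out[, "time"], out[, c("P1", "P2")], type = "l", lty = 1,
        xlab = "time", ylab = "predator density")
```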

But it’s more common for physical analogies in ecology to be quite loose, justified only by verbal argument. Hence my question (and it is an honest question, not a rhetorical one): can you think of any examples in ecology in which models based on loose physical analogies have worked, for any purpose? Sharpening of intuition, quantitative prediction, generation of hypotheses that are useful to test empirically, etc.? Because I can’t.

Continue reading

How far can the logic of shrinkage estimators be pushed? (Or, when should you compare apples and oranges?)

Scientists—and indeed scholars in any field—often have to choose how wide a net to cast when attempting to define a concept, estimate some quantity of interest, or evaluate some hypothesis. Is it useful to define “ecosystem engineering” broadly so as to include any and all effects of living organisms on their physical environments, or does that amount to comparing apples and oranges?* Should your meta-analysis of [ecological topic] include or exclude studies of human-impacted sites? Can microcosms and mesocosms be compared to natural systems (e.g., Smith et al. 2005), or are they too artificial? As a non-ecological example that I and probably many of you are worrying about these days, are there any good historical precedents for Donald Trump outside the US or in US history, or is he sui generis? In all these cases and others, there’s no clear-cut, obvious division between relevant and irrelevant information, between things that should be lumped together and things that shouldn’t be. Rather, there’s a fuzzy line, or a continuum. What do you do about that? Are there any general rules of thumb?

I have some scattered thoughts on this, inspired by the concept of “shrinkage” estimates in statistics:
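
For readers who haven’t run across the idea, here’s a minimal illustration in base R with simulated data. The estimator below is the generic partial-pooling formula, not anything specific to the argument I make below the fold:

```r
set.seed(2)
n_groups  <- 8
true_mean <- rnorm(n_groups, mean = 0, sd = 1)                  # true group means
n_per     <- sample(3:30, n_groups, replace = TRUE)             # unequal sample sizes
obs_mean  <- true_mean + rnorm(n_groups, sd = 2 / sqrt(n_per))  # noisy observed group means

sigma2_within <- 4 / n_per        # sampling variance of each observed mean (within-group sd = 2, assumed known)
tau2          <- var(obs_mean)    # crude estimate of among-group variance
weight        <- tau2 / (tau2 + sigma2_within)   # how much to trust each group's own data

# each group's estimate is "shrunk" toward the overall mean, most strongly for noisy (small-n) groups
shrunk_mean <- weight * obs_mean + (1 - weight) * mean(obs_mean)

cbind(obs = round(obs_mean, 2), shrunk = round(shrunk_mean, 2), n = n_per)
```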

Continue reading

Michael Rosenzweig: an appreciation

I am attending a Festschrift this week for Michael Rosenzweig. Make no mistake, he is still actively doing science, but after 50+ years as a scientist, it seems like a good time to reflect on what an impressive career he has had. Just for full disclosure upfront, he was my PhD adviser, so I’m hardly the most unbiased reporter, but of course that also gives me a close perspective.

Mike was awarded the Ecological Society of America’s Eminent Ecologist Award in 2008, and he has well over 100 papers, many massively cited, plus three books, so I imagine many readers are familiar with his published work; it would take too much space to summarize it anyway. Instead, I want to offer several more reflective and in some cases more personal thoughts. Take them, as you wish, as a reflection of my respect and appreciation for Mike, or as my musings on the ingredients of a good scientific career.

Continue reading