My collaborators and I just published “Population extinctions can increase metapopulation persistence”. New Scientist did a piece on it, which is the first time any media outlet other than my local newspaper has written up my work. I’m chuffed about this, because I think this is the coolest paper I’ve ever done by some distance.
Or, maybe it’s just a cute result–a fun curiosity. I could even imagine someone arguing that it’s oversold fluff. So why do I think it’s so cool? And what’s the difference between “cool” and “cute”?
Research has demonstrated that science benefits from diversity, but graduate programs still suffer from a lack of diversity, including in terms of race/ethnicity and the types of undergraduate institutions applicants come from. Meanwhile, minority-serving institutions are full of students who are talented and passionate about science. Faculty members at these institutions are dedicated to their students and work to connect them with opportunities. But, at the same time, those faculty members are often overextended (unfortunately, minority-serving institutions tend to be under-resourced) and simply do not have the time to mentor all of their promising students through the process of applying to graduate schools and fellowship programs, including the National Science Foundation Graduate Research Fellowship Program and the Ford Foundation Predoctoral Fellowship. Moreover, most of these institutions primarily serve undergraduates, so there is little access to graduate students and postdocs who can serve as mentors and role models.
In other words: graduate programs are looking to recruit more minority scholars, fellowship programs are looking for bright applicants, and minority-serving institutions are full of students who are ready to excel in graduate school and research. But, right now, many of those students from minority-serving institutions don’t apply to graduate programs or for graduate research fellowships.
Therefore, we* have created EEB Mentor Match, with the goal of matching undergraduate students from minority-serving institutions (MSIs) who are interested in ecology and evolutionary biology (EEB) with mentors who can provide feedback on graduate school and fellowship applications. We are looking for:
- undergraduate students who are considering applying to graduate schools in ecology and evolutionary biology (defined broadly, including programs in conservation biology, natural resources, etc.) and/or to the National Science Foundation’s Graduate Research Fellowship Program and/or to the Ford Foundation Predoctoral Fellowship;
- masters students who are planning to apply to PhD programs in ecology and evolutionary biology (defined broadly, including programs in conservation biology, natural resources, etc.) and/or to the National Science Foundation’s Graduate Research Fellowship Program and/or to the Ford Foundation Predoctoral Fellowship;
- graduate students, postdocs, faculty, and others with experience with the graduate school application process and/or NSF’s GRFP and/or Ford Foundation Predoctoral Fellowships who are interested in working with an undergraduate student from a minority-serving institution as they craft their application materials; and
- mentors of students at MSIs who can nominate students who are considering applying to graduate school in EEB and/or for fellowships. We will then contact these students to see if they are interested in being mentored and, if so, pair them with a mentor.
Note that this is focused on students who are interested in ecology & evolutionary biology (defined broadly, including programs in conservation biology and natural resources). Our hope is that, by keeping this more focused, we will be able to do a better job of matching mentors and mentees. (Also, there are only so many hours in the day, unfortunately.) We encourage people in other research areas to develop similar resources for their fields!
I recently read Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner. Here’s my review.
tl;dr: It’s good, and will get you thinking about how its conclusions apply to your own scientific work.
I am pretty much through with revisions to my manuscript on authorship, with one exception. One of the reviewers is (quite reasonably) pushing me to make a stronger recommendation about how authorship decisions should be made in the increasingly common case of collaborations between groups. But, of course, this is a tricky issue, and I’m waffling on what exactly to recommend. This blog post is me trying to work through that, and looking for feedback at the end. I’m quite interested in hearing how others think decisions about authorship should be made when multiple groups collaborate substantially on a project!
I’ll start by recapping some of my results, since they set up the general question. Then, I’ll give some of my thoughts on a possible solution. And, as I said above, I’ll end by asking for feedback on what I propose.
I have an embarrassing confession: I’m just not that into you, trait-based ecology.
Which doesn’t feel like confessing a murder, but does feel like confessing, I dunno, not liking Groundhog Day.* It’s slightly embarrassing. For years now trait-based ecology has been one of the biggest and fastest-growing bandwagons in ecology. Plenty of terrific ecologists whom I really respect are really into it. Which doesn’t mean that I have to be into it too, of course–but which does mean that if I’m not into it, I’d better have a good reason.
Which is a problem, because honestly I’m not sure why I’m not into it. In a field like ecology, where there’s no universal agreement as to what questions are most important to ask or exactly how to go about answering them, I think it becomes more (not less) important that each of us be able to justify our chosen question and approach, in terms that others can appreciate if not necessarily agree with. And also justify not liking any questions or approaches we don’t like. It really bugs me when people object to my own favorite approach for weak reasons that don’t stand up to even casual scrutiny. So I’m embarrassed to admit that there’s lots of trait-based ecology that I just vaguely think of as uninteresting or not likely to go anywhere, even though honestly I don’t know enough about it to really have an informed opinion. It’s embarrassing to not have an informed opinion on what’s probably the most popular current approach to topics that I care a lot about (e.g., species diversity, composition, and coexistence along environmental gradients).
This post is my attempt to do better. I want to think out loud about what I like and don’t like about trait-based ecology. My selfish goal is to clarify my own thinking, and to get comments that will teach me something and help me think better. My less-selfish hope is that buried somewhere within my half-formed thoughts are some useful ideas that trait-based ecology could take on board.
Here’s my plan: I’m going to talk about a body of work in trait-based ecology that I actually do know well and that I do like a lot. Then I’m going to go back to Brian’s old post on where trait-based ecology is at and where it ought to go and see how this body of work stacks up. How do my reasons for liking this particular body of trait-based ecology line up with what an actual trait-based ecologist–Brian–looks for in trait-based ecology?
In an old post I talked about how the falsehood of our models often is a feature, not a bug. One of the many potential uses of false models is as a baseline. You compare the observed data to data predicted by a baseline model that incorporates some factors, processes, or effects known or thought to be important. Any differences imply that there’s something going on in the observed data that isn’t included in your baseline model.* You can then set out to explain those differences.
Ecologists often recommend this approach to one another. For instance (and this is just the first example that occurred to me off the top of my head), one of the arguments for metabolic theory (Brown et al. 2004) is that it provides a baseline model of how metabolic rates and other key parameters scale with body size:
The residual variation can then be measured as departures from these predictions, and the magnitude and direction of these deviations may provide clues to their causes.
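To make that concrete, here’s a minimal sketch in R using made-up data (an illustration of the general approach, not an analysis from any of the papers mentioned here): impose the 3/4-power scaling of metabolic theory as the baseline, estimating only the intercept, and then treat the residuals as the “residual variation” to be explained.

```r
# Illustrative only: hypothetical body masses and metabolic rates.
set.seed(1)
mass <- 10^runif(100, -3, 3)                       # body mass (arbitrary units)
rate <- 0.5 * mass^0.75 * exp(rnorm(100, 0, 0.3))  # metabolic rate with noise

# Baseline model: fix the scaling exponent at 3/4 via an offset,
# so only the intercept (the normalization constant) is estimated.
baseline <- lm(log(rate) ~ 1 + offset(0.75 * log(mass)))

# The deviations from the baseline are what's left to explain,
# e.g. by regressing them on temperature, taxon, habitat, etc.
deviations <- residuals(baseline)
plot(log(mass), deviations)
```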
Other examples abound. One of the original arguments for mid-domain effect models was as a baseline model of the distribution of species richness within bounded domains. Only patterns of species richness that differ from those predicted by a mid-domain effect “null” model require any ecological explanation in terms of environmental gradients, or so it was argued. The same argument has been made for neutral theory–we should use neutral theory predictions as a baseline, and focus on explaining any observed deviations from those predictions. Same for MaxEnt. I’m sure many other examples could be given (please share yours in the comments!).
This approach often gets proposed as a sophisticated improvement on treating baseline models like statistical null hypotheses that the data will either reject or fail to reject. Don’t just set out to reject the null hypothesis, it’s said. Instead, use the “null” model as a baseline and explain deviations of the observed data from that baseline.
Which sounds great in theory. But here’s my question: how often do ecologists actually do this in practice? Not merely document deviations of observed data from the predictions of some baseline model (many ecologists have done that), but then go on to explain them? Put another way, when have deviations of observed data from a baseline model ever served as a useful basis for further theoretical and empirical work in ecology? When have they ever given future theoreticians and empiricists a useful “target to shoot at”?
Different fields and subfields of science have different methodological traditions: standard approaches that remain standard because students learn them.
Which to some extent is inevitable. Fields of inquiry wouldn’t exist if they had to continuously reinvent themselves from scratch. You can’t literally question everything. Further, tradition is a good thing to the extent that it propagates good practices. But it’s a bad thing to the extent that it propagates bad practices.
Of course, it’s rare that any widespread practice is just flat-out bad. Practices don’t generally become widespread unless there’s some good reason for adopting them. But even widespread practices have “occupational hazards”. Which presumably are difficult to recognize precisely because the practice is widespread. Widespread practices tend to lack critics. Criticisms of widespread practices tend to be ignored or downplayed on the understandable grounds of “nothing’s perfect” and “better the devil you know”.
Here’s one way to help you recognize when a widespread practice within your own field may be ripe for rethinking: look at whether the practice is used in other fields, and if not, what practices those other fields use instead to address the same problem. Knowing how things are done in other fields helps you look at your own field with fresh eyes.
I use R. I like it. I especially like the versatility and convenience it gets from add-on packages. I use R packages to do some fairly nonstandard things like fit vector generalized additive models, and simulate ordinary differential equations and fit them to data.
You can probably tell there’s a “but” coming.
Scientific ideas can have various virtues. Most obviously, they can be correct. But they can also be clever, surprising, elegant, etc.
One important but difficult-to-pin-down virtue is fruitfulness. A scientific idea is fruitful if it leads to a lot of further research, especially if that research retains long-term value (it wasn’t just a trendy bandwagon or whatever). Fruitfulness overlaps a lot with influence.
Fruitfulness or influence covaries positively with correctness, but not perfectly. It would be nice if the covariance were perfect. It’s unfortunate when an influential idea turns out to be wrong, because the work that grew out of that idea often loses at least some of its value, and because there’s an unavoidable opportunity cost to building on ideas that turn out to be wrong. Andrew Hendry has a compilation of ecological and evolutionary ideas that inspired a lot of research despite being (in Andrew’s view) wrong, or at least not all that important.
In this post I’m interested in the flip side of incorrect-but-influential ideas: ideas that were correct but not influential. Somebody said something true–but nobody else cared. Correct but non-influential ideas are the proverbial tree falling in a forest that doesn’t make a sound.
What are your favorite examples of correct-but-uninfluential ideas in ecology? In all of science?
One way among many others by which a theoretician might develop a mathematical model of one scenario is by analogy with some other scenario that we already know how to model.
The effectiveness of this approach depends in part on how loose the analogy is. At the risk of shameless self-promotion, I’ll highlight a physical analogy that my own work draws on (the analogy isn’t originally mine): dispersal synchronizes spatially-separated predator-prey cycles for the same reason that physical coupling synchronizes physical oscillators. Here’s a standard, and very cool, demonstration involving metronomes sitting on a rolling platform. The analogy between the ecological system and the physical system is actually fairly close, though for reasons that might not be immediately apparent (how come coupling via dispersal works like coupling via a rolling platform?) The closeness of the analogy is why it works so well (Vasseur and Fox 2009, Fox et al. 2011, Noble et al. 2015, and see Strogatz and Stewart 1993 for a non-technical review of coupled oscillators in physics, chemistry, and biology).
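If it helps to see the ecological half of the analogy in the simplest possible form, here’s a hypothetical sketch in R (not the models from Vasseur and Fox 2009 or the other papers above): two identical, textbook Rosenzweig-MacArthur predator-prey patches, started out of phase and coupled only by predator dispersal at rate d. The parameter values are arbitrary, chosen just so that each patch cycles on its own.

```r
# Two Rosenzweig-MacArthur patches coupled by predator dispersal (rate d).
library(deSolve)

rhs <- function(t, y, p) {
  with(as.list(c(y, p)), {
    f1 <- a * N1 * P1 / (1 + a * h * N1)    # functional response, patch 1
    f2 <- a * N2 * P2 / (1 + a * h * N2)    # functional response, patch 2
    dN1 <- r * N1 * (1 - N1 / K) - f1
    dN2 <- r * N2 * (1 - N2 / K) - f2
    dP1 <- e * f1 - m * P1 + d * (P2 - P1)  # dispersal couples the patches
    dP2 <- e * f2 - m * P2 + d * (P1 - P2)
    list(c(dN1, dN2, dP1, dP2))
  })
}

pars <- c(r = 1, K = 3, a = 1.5, h = 0.5, e = 0.5, m = 0.3, d = 0.05)
y0   <- c(N1 = 1, N2 = 0.3, P1 = 0.5, P2 = 1)   # start the patches out of phase
out  <- ode(y = y0, times = seq(0, 500, by = 0.5), func = rhs, parms = pars)

# With d = 0 the two cycles stay out of phase; with even modest d they
# tend to lock onto a common phase--the ecological analogue of the metronomes.
matplot(out[, "time"], out[, c("P1", "P2")], type = "l", lty = 1,
        xlab = "time", ylab = "predator density")
```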
But it’s more common for physical analogies in ecology to be quite loose, justified only by verbal argument. Hence my question (and it is an honest question, not a rhetorical one): can you think of any examples in ecology in which models based on loose physical analogies have worked, for any purpose? Sharpening of intuition, quantitative prediction, generation of hypotheses that are useful to test empirically, etc.? Because I can’t.