One way among many by which a theoretician might develop a mathematical model of one scenario is by analogy with some other scenario that we already know how to model.
The effectiveness of this approach depends in part on how loose the analogy is. At the risk of shameless self-promotion, I’ll highlight a physical analogy that my own work draws on (the analogy isn’t originally mine): dispersal synchronizes spatially separated predator-prey cycles for the same reason that physical coupling synchronizes physical oscillators. Here’s a standard, and very cool, demonstration involving metronomes sitting on a rolling platform. The analogy between the ecological system and the physical system is actually fairly close, though for reasons that might not be immediately apparent (why does coupling via dispersal work like coupling via a rolling platform?). The closeness of the analogy is why it works so well (Vasseur and Fox 2009, Fox et al. 2011, Noble et al. 2015; see Strogatz and Stewart 1993 for a non-technical review of coupled oscillators in physics, chemistry, and biology).
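To give a flavor of the oscillator side of the analogy, here's a minimal sketch of two phase-coupled oscillators (a Kuramoto-style model; the function name and parameter values are mine, purely for illustration, and aren't drawn from the papers cited above). Even when the oscillators have slightly different natural frequencies, a weak coupling term pulls their phases together:

```python
import math

def simulate(theta1=0.0, theta2=2.0, omega1=1.0, omega2=1.05,
             K=0.5, dt=0.01, steps=5000):
    """Euler-integrate two phase-coupled (Kuramoto-style) oscillators.

    Each oscillator runs at its own natural frequency (omega1, omega2),
    plus a coupling term that nudges it toward the other's phase.
    """
    for _ in range(steps):
        d1 = omega1 + K * math.sin(theta2 - theta1)
        d2 = omega2 + K * math.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
    return theta1, theta2

t1, t2 = simulate()
# Wrap the phase difference into (-pi, pi]; despite starting 2 radians
# apart, the oscillators end up nearly phase-locked.
diff = math.atan2(math.sin(t2 - t1), math.cos(t2 - t1))
```

The qualitative point is the same one the metronome video makes: the coupling doesn't need to be strong, it just needs to let each oscillator "feel" the other's state.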
But it’s more common for physical analogies in ecology to be quite loose, justified only by verbal argument. Hence my question (and this is an honest question, not a rhetorical one): can you think of any examples in ecology in which models based on loose physical analogies have worked, for any purpose? Sharpening of intuition, quantitative prediction, generation of hypotheses that are useful to test empirically, etc.? Because I can’t.
Also this week: don’t save your R workspace, tell me again why the peer review system is in crisis, what economists (and ecologists?) don’t know, thought leaders vs. public intellectuals, William Carlos Williams vs. email, Jeremy channels his inner early-90s self, and more. Including an extra-large helping of silliness!
Note from Jeremy: this is a guest post by Greg Crowther.
We academics sure love to discuss authorship, don’t we? Previous posts on this blog have addressed authorship issues such as author order and criteria for authorship. The latter post dove deeply into the issue of defining what sorts of contributions are substantial enough to merit authorship. I thought this post and the corresponding comments were great . . . but too focused on one side of authorship at the expense of the other side.
Before I explain what I mean by that, consider the following mini-case studies:
If you look at my publications list, you’ll see that it doesn’t look up to date. The most recent paper on it came out in 2015. And it’s true that it’s not up to date–but only because I’m a co-author on a couple of papers that got accepted in the past week.
Which means that in terms of publishing papers, I went 0-for-2016. I went almost two years between acceptance letters.
We read All The Things this week, and you should too. 🙂 Such as: tell me again what “biodiversity” is and why it’s “good”, grad student mental health, real life bird-rabbit illusion, the most motivating grade ever given, meta-analysis of meta-analyses, does recruiting girls into STEM solve the wrong problem, stats vs. calculus, #thanksfortyping, and more!
Scientists—and indeed scholars in any field—often have to choose how wide a net to cast when attempting to define a concept, estimate some quantity of interest, or evaluate some hypothesis. Is it useful to define “ecosystem engineering” broadly so as to include any and all effects of living organisms on their physical environments, or does that amount to comparing apples and oranges?* Should your meta-analysis of [ecological topic] include or exclude studies of human-impacted sites? Can microcosms and mesocosms be compared to natural systems (e.g., Smith et al. 2005), or are they too artificial? As a non-ecological example that I and probably many of you are worrying about these days, are there any good historical precedents for Donald Trump outside the US or in US history, or is he sui generis? In all these cases and others, there’s no clear-cut, obvious division between relevant information and irrelevant information, things that should be lumped together and things that shouldn’t be. Rather, there’s a fuzzy line, or a continuum. What do you do about that? Are there any general rules of thumb?
I have some scattered thoughts on this, inspired by the concept of “shrinkage” estimates in statistics:
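For readers unfamiliar with the idea, a shrinkage estimate pulls noisy individual estimates partway toward a pooled estimate, trading a little bias for a lot less variance. Here's a minimal sketch (the function name and the fixed weight are mine, purely for illustration; real methods such as James-Stein estimators or mixed models choose the weight from the data):

```python
def shrink_means(group_means, weight):
    """Pull each group mean toward the grand mean by `weight` in [0, 1].

    weight = 0 keeps the raw per-group means ("pure splitting");
    weight = 1 replaces every mean with the grand mean ("pure lumping").
    """
    grand = sum(group_means) / len(group_means)
    return [(1 - weight) * m + weight * grand for m in group_means]

raw = [2.0, 4.0, 9.0]       # grand mean is 5.0
shrunk = shrink_means(raw, 0.5)  # [3.5, 4.5, 7.0]
```

In the lumping-vs.-splitting terms above, the appeal of shrinkage is that it treats the choice as a dial rather than a dichotomy: you don't have to decide whether two cases are "the same" or "different," only how much to let them inform each other.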
Also this week: how to increase graduation rates of students in financial need, Plos One’s surprisingly (?) high rejection rate, and more.
In a recent post, Stephen Heard noted that he signs most of his reviews because he wants authors to be able to contact him if they have any questions or want to discuss the review. Several commenters on Stephen’s post, and on Meg’s recent post on signing reviews, said they sign their reviews for the same reason (e.g.). And some of those commenters said that they have in fact been contacted by authors wanting to discuss the reviews.
All of which surprised me, because I’d never heard of this practice! The possibility of contacting a reviewer to discuss a review before responding to it had never even occurred to me, even though I’ve been an author and reviewer for 20 years now.
I’m still mulling over what I think about this practice. On the one hand, the reviewers who do it are trying to be helpful, and I’m sure the authors who contact them appreciate the help. On the other hand, that authors appreciate it is potentially a problem–I worry that the practice creates the opportunity for unethical quid pro quos. I’m not the only one who worries about this. So I dunno.
Anyway, I’m curious how common this practice is, and what ecologists as a group think of it. So below is a quick 3-question poll.
Mostly silliness this week: The ecology of Skull Island, electrofishing for whales, Boaty McBoatface goes forth, and more! Also, a few serious links on the March For Science, the role of facts in political debates, and more. Come for the links, stay to watch Meg and me squabble over them.
I was very surprised by the results of Meg’s recent poll on what reviewers mean when they say that, yes, they’d be willing to review a revised version of an ms. 34% mean not merely that they’re willing to review a revised version, but that they want to see a revised version to make sure the authors have addressed their concerns. Like Meg, I had no idea that reviewers who feel that way are such a large minority!
Which got me thinking about the roles of reviewers and editors, and wondering whether my own view of those roles is as universal as I had (naively?) assumed. So below is a one-question poll. Do you see reviewers as advisers to the editor? Or do you think editors should ordinarily defer to reviewers, so that all reviewers should be satisfied before a paper is accepted for publication?