About Jeremy Fox

I'm an ecologist at the University of Calgary. I study population and community dynamics, using mathematical models and experiments.

A happy ending to a tenure-track job search

Note from Jeremy: This is a guest post from Greg Crowther.

***********************

Previously I have whined about the difficulties of getting a good, stable college teaching job.  This whining is perhaps justified by the extremely low supply of these jobs relative to the demand.  But since almost everyone, including me, likes happy endings, I now wish to present a happy ending.  That’s right – I have received and accepted an offer for an ongoing full-time position.  At the age of 44, I have finally climbed aboard the tenure track.

Continue reading

What should high schoolers and undergrads learn about the scientific method?

Note from Jeremy: this is a guest post from Greg Crowther. Greg has a Ph.D. in biology and has held several teaching and research positions at the University of Washington and other Seattle-area colleges. He’s currently working on a master’s in science education.

***************************

I’ve never been inordinately curious about the natural world. As a kid, I did not spend long hours using a telescope or a home chemistry set, nor did I catch frogs in marshes or learn to identify species of local flora.  I got to high school, and then college, without any clear sense that I should become a scientist or that I would enjoy this particular vocation.

In my first four semesters of college, I took the usual variety of courses and grappled with their many fascinating questions.  Why did the Vietnam War start?  What do Buddhists really believe?  How did E.M. Forster’s novel Howards End illustrate his directive of “Only connect”?

Though fascinating, these questions also seemed horribly intractable.  One could cite evidence from a primary or secondary source to support one interpretation or another, but there didn’t seem to be any standard way of resolving disagreements besides deferring to the authority of the professor.

Science was different, though.  Professors presented the so-called “scientific method” as a fair, objective way of evaluating the strength of different possible explanations.  Accrue some background knowledge via reading and observation; pose a hypothesis; design an experiment to test the hypothesis; determine whether the data collected are consistent with the predictions of the hypothesis; and discard, modify, or retain the hypothesis as appropriate.

It all sounded so orderly, so sensible, so feasible.  Even if I did not have a great big hypothesis of my own, I could imagine taking someone else’s hypothesis out for a spin, say, using a species that hadn’t been studied yet.  This “scientific method” seemed simple enough for novices like me to follow, yet powerful enough to reveal fundamental insights about the world.  I was hooked – not on any particular molecule or technique or theory, but on the logical flow of the process itself.  I’ve considered myself a scientist ever since, and I now present the scientific method (often called the process of science) to my own students more or less as it was presented to me, because it’s relevant to their futures whether or not they become scientists, yet it remains under-taught and poorly understood.

“But wait!” cry various smart, articulate people such as Terry McGlynn and Brian McGill.  “That’s not how scientific research really works!”  Indeed, UC-Berkeley has an entire website, How Science Works, devoted to debunking and revising what it calls the “simplified linear scientific method.”

Continue reading

Friday links: math for human flourishing, smiley Charles Darwin, and more

Also this week: the scholarly literature as a mud moat, people named Neil vs. imposter syndrome, Joe Felsenstein was way ahead of you, compression algorithms vs. pop lyrics, induction vs. deduction vs. abduction, game theory vs. grade inflation, how to interview for a British PhD position, and MOAR.

Continue reading

Responding to “a post-fact world”: In defense of the honest broker

Note from Jeremy: this is a guest post from Peter Adler.

***************************

Last week Brian wrote a series of idea-rich posts about doing science in a post-fact world. In his final post, he concluded that scientists need to “Act more like other interest groups at the decision making table… Now that we’re no longer being accorded a special seat, we should sharpen our elbows and advocate strongly.” Although I agree with much, even most, of Brian’s three posts, I come to the opposite conclusion. Here are two arguments in defense of the honest broker position.

Continue reading

Have ecologists ever successfully explained deviations from a baseline “null” model?

In an old post I talked about how the falsehood of our models often is a feature, not a bug. One of the many potential uses of false models is as a baseline. You compare the observed data to data predicted by a baseline model that incorporates some factors, processes, or effects known or thought to be important. Any differences imply that there’s something going on in the observed data that isn’t included in your baseline model.* You can then set out to explain those differences.

Ecologists often recommend this approach to one another. For instance (and this is just the first example that occurred to me off the top of my head), one of the arguments for metabolic theory (Brown et al. 2004) is that it provides a baseline model of how metabolic rates and other key parameters scale with body size:

The residual variation can then be measured as departures from these predictions, and the magnitude and direction of these deviations may provide clues to their causes.
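To make the idea concrete, here’s a minimal R sketch of that workflow. Everything here is simulated for illustration – the 3/4-power exponent is the metabolic theory baseline, but the data and the normalization constant are made up, not taken from Brown et al.’s paper:

```r
# Minimal sketch with simulated data. The 3/4-power exponent is the
# metabolic theory baseline; 0.7 is an assumed normalization constant.
set.seed(1)
mass <- 10^runif(50, min = 0, max = 4)              # body masses
rate <- 0.7 * mass^0.75 * exp(rnorm(50, sd = 0.3))  # "observed" metabolic rates

baseline  <- 0.7 * mass^0.75       # baseline model predictions
deviation <- log(rate / baseline)  # departures from the baseline, on a log scale

# The magnitude and direction of these deviations are what you'd then
# set out to explain with further theory or data.
plot(log10(mass), deviation,
     xlab = "log10(body mass)", ylab = "deviation from 3/4-power baseline")
abline(h = 0, lty = 2)
```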

Other examples abound. One of the original arguments for mid-domain effect models was that they provide a baseline model of the distribution of species richness within bounded domains. Only patterns of species richness that differ from those predicted by a mid-domain effect “null” model require any ecological explanation in terms of environmental gradients, or so it was argued. The same argument has been made for neutral theory: we should use neutral theory predictions as a baseline, and focus on explaining any observed deviations from those predictions. Same for MaxEnt. I’m sure many other examples could be given (please share yours in the comments!).

This approach often gets proposed as a sophisticated improvement on treating baseline models like statistical null hypotheses that the data will either reject or fail to reject. Don’t just set out to reject the null hypothesis, it’s said. Instead, use the “null” model as a baseline and explain deviations of the observed data from that baseline.

Which sounds great in theory. But here’s my question: how often do ecologists actually do this in practice? Not merely document deviations of observed data from the predictions of some baseline model (many ecologists have done that), but then go on to explain them? Put another way, when have deviations of observed data from a baseline model ever served as a useful basis for further theoretical and empirical work in ecology? When have they ever given future theoreticians and empiricists a useful “target to shoot at”?

Continue reading

Does any field besides ecology use randomization-based “null” models?

Different fields and subfields of science have different methodological traditions: standard approaches that remain standard because students learn them.

Which to some extent is inevitable. Fields of inquiry wouldn’t exist if they had to continuously reinvent themselves from scratch. You can’t literally question everything. Further, tradition is a good thing to the extent that it propagates good practices. But it’s a bad thing to the extent that it propagates bad practices.

Of course, it’s rare that any widespread practice is just flat-out bad. Practices don’t generally become widespread unless there’s some good reason for adopting them. But even widespread practices have “occupational hazards”. Which presumably are difficult to recognize precisely because the practice is widespread. Widespread practices tend to lack critics. Criticisms of widespread practices tend to be ignored or downplayed on the understandable grounds of “nothing’s perfect” and “better the devil you know”.

Here’s one way to help you recognize when a widespread practice within your own field may be ripe for rethinking: look at whether the practice is used in other fields, and if not, what practices those other fields use instead to address the same problem. Knowing how things are done in other fields helps you look at your own field with fresh eyes.

Continue reading

Have you ever included the reviews of your rejected ms when resubmitting to another journal? (UPDATED)

It’s been widely suggested that one solution to the increasing difficulty of obtaining peer reviews is sharing of reviews among journals. If a ms is rejected by one journal, the ms (appropriately revised if necessary) and the reviews can be forwarded to another journal, which can make a decision without the need for further reviews. That’s the idea behind peer review cascades, such as the way many Wiley EEB journals offer to forward rejected mss and the associated reviews to Ecology & Evolution. It was also the idea behind the (late, lamented) independent editorial board Axios Review.

And it’s the idea behind a practice some folks were talking about on Twitter a little while back: authors themselves forwarding the reviews their rejected ms received to a new journal along with the revised ms.

Below the fold: a poll asking if you’ve ever done this, and then some comments from Meghan, Brian, and me. Answer the poll before you read the comments.

Continue reading

There are too many overspecialized R packages

I use R. I like it. I especially like the versatility and convenience it gets from add-on packages. I use R packages to do some fairly nonstandard things like fit vector generalized additive models, and simulate ordinary differential equations and fit them to data.
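As a flavor of the ODE side of that, here’s a bare-bones sketch using the deSolve package (one common choice for this; the logistic model and parameter values are just placeholders, not any particular analysis of mine):

```r
# Minimal sketch of simulating an ordinary differential equation in R
# with the deSolve package. Model and parameter values are placeholders.
library(deSolve)

# Logistic growth: dN/dt = r * N * (1 - N / K)
logistic <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dN <- r * N * (1 - N / K)
    list(dN)
  })
}

out <- ode(y = c(N = 10), times = seq(0, 20, by = 0.5),
           func = logistic, parms = c(r = 0.5, K = 100))
head(out)

# Fitting the model to data would then mean wrapping a call like this in an
# objective function (e.g., sum of squared errors) and minimizing it with optim().
```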

You can probably tell there’s a “but” coming.

Continue reading