About Jeremy Fox

I'm an ecologist at the University of Calgary. I study population and community dynamics, using mathematical models and experiments.

Poll on co-authorship of papers using publicly available data

We talked recently about how Am Nat and other leading EEB journals are giving their data sharing policies teeth. Going forward, they’re going to require data sharing. And of course, for years now increasing numbers of ecologists have been posting the data underpinning their papers in public repositories such as Dryad, even if not strictly required to do so by journal policies.

This is an area in which scientific practices and norms are changing fast. I’m old enough to remember when there was no expectation that you’d share data you’d collected, much less that you’d be obliged to share it!

I’ve been wondering lately how data sharing rules are affecting people’s views on authorship. My own view is that data on public repositories are fair game. Anyone can download them and use them in any way, including in their own papers (for instance, meta-analyses), without offering co-authorship to those who originally collected and deposited the data. I think this view is consistent with the spirit of Dryad’s policies; data deposited on Dryad are made available under a Creative Commons Zero license. But it occurred to me that I have no idea if my own view is widely shared. And we know from past polling data that ecologists’ views on authorship issues don’t have much to do with official journal authorship policies. So perhaps ecologists’ views on authorship in relation to data on public repositories don’t have much to do with the repositories’ official policies.

So to get some anecdata on this, here’s a very short anonymous poll. I’ll publish a summary of the responses in a future post.

The opposite of the decline effect?

The “decline effect” refers to scientific effects that appear to decrease in magnitude as more studies are conducted. For instance, early studies of some phenomenon might report large differences between treatment and control means, but later studies report small differences or no difference. Decline effects might arise because of publication bias, regression to the mean, and changes over time in true effect sizes. Reviews in ecology and other fields suggest that decline effects are common (for ecology see Jennions & Møller 2002, Barto & Rillig 2012).

Question: what should you call the opposite of a decline effect? And do you know of any examples?

No, I’m not being contrarian for the sake of being contrarian; I have a reason for asking. As I said in my talk at #ESA2020, I have reason to think that, in ecology, decline effects are no more common than, um, whatever the opposite of a decline effect is. So I want to know about any other reports of the opposite of a decline effect, whether anecdotal or from systematic reviews.

Just off the top of my head, it seems like health effects of air pollution might be an example? I’m no expert; all I know is what I read in the news, but recent estimates seem to suggest that air pollution is worse for us than we used to think it was.

Scientific fraud vs. financial fraud: is there a scientific equivalent of “control fraud”?

Continuing my little series of posts on the analogies between scientific fraud and financial fraud, inspired by Dan Davies’ book Lying For Money. As with past posts in the series, the hope is that looking at scientific fraud through the lens of financial fraud provides some novel and useful insights. Just thinking out loud here, trying ideas on for size.

Today: is there a scientific equivalent of a “control fraud”? For British readers, a more clickbait-y title for this post would be “Is there a scientific equivalent of the PPI mis-selling scandal?”


The story and lessons of the NutNet experiment: an interview with Elizabeth Borer

The Nutrient Network (“NutNet”) is a long-running, pioneering globally distributed field experiment. It now involves hundreds of researchers at hundreds of sites around the world, it’s published a bunch of important and influential papers, and it’s served as the inspiration and model for many other distributed experiments. I’m a huge fan of NutNet, not just because of all the great science that’s come out of it, but because it’s a new and very interesting model for how to do science (at least, new to me…) I’m also a fan of NutNet because it’s a very different sort of science than anything I do or would have even thought of doing. When I look at someone else’s protist microcosm experiment, I look at it with the eyes of a connoisseur, because I do protist microcosm experiments too. But when I look at NutNet, I’m just amazed; it’s like I’m looking at a magic trick or the Sagrada Família. How does someone do that? How does someone even think of doing it?

To answer those questions, I asked one of the someones who did it. 🙂 Elizabeth Borer is one of the co-founders of NutNet. In the same spirit as my interview a few years ago with Rich Lenski about his Long-Term Evolution Experiment, I emailed Elizabeth a bunch of questions about NutNet and she was kind enough to answer them. I hope you find her answers as interesting as I do (seriously, they’re super-interesting)!
