On finding errors in one’s published analyses

Dan Bolnick just had a really important – and, yes, brave – post on finding an error in one of his published studies that has led him to retract that study. (The retraction isn’t official yet.) In his post, he does a great job of explaining how the mistake happened (a coding error in R), how he found it (someone tried to recreate his analysis and was unsuccessful), what it means for the analysis (what he thought was a weak trend is actually a nonexistent trend), and what he learned from it (among other things, that it’s important to own up to one’s failures, and that there are risks in using custom code to analyze data).
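For readers who haven’t been bitten by this sort of thing, here is a minimal sketch (a generic, hypothetical example of my own, not Dan’s actual error) of how a silent mistake in custom R code can change a result: joining two data frames by row position instead of by a shared ID column.

```r
## Hypothetical illustration (NOT Dan's actual error): a silent row-alignment
## mistake that changes a result without any error or warning.
set.seed(1)
traits  <- data.frame(id = 1:20, size = rnorm(20))
fitness <- data.frame(id = sample(1:20))        # same animals, shuffled rows
fitness$w <- 0.5 * traits$size[fitness$id] + rnorm(20, sd = 0.1)

## Wrong: silently assumes the two data frames share a row order.
cor(traits$size, fitness$w)     # misaligned rows: essentially noise

## Right: align rows explicitly by id before comparing.
merged <- merge(traits, fitness, by = "id")
cor(merged$size, merged$w)      # close to 1: the real relationship
```

Nothing errors and nothing warns, which is exactly why having someone else try to recreate your analysis is such a valuable check.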

This is a topic I’ve thought about a lot, largely because I had to correct a paper. It was the most stressful episode of my academic career. During that period, my anxiety was as high as it has ever been. In the past, a few people have suggested I write a blog post about it, but it still felt too raw – just thinking about it was enough to cause an anxiety surge. So, I was a little surprised when my first reaction to reading Dan’s post was that maybe now is the time to write about my similar experience. When Brian wrote a post last year on corrections and retractions in ecology (noting that mistakes will inevitably happen because science is done by humans and humans make mistakes), I still felt like I couldn’t write about it. But now I think I can. Dan and Brian are correct that it’s important to own up to our failures, even though it’s hard. Even though correcting the record is exactly how science is supposed to work (and I did correct the paper as soon as I discovered the error), it is still something that is very hard for me to talk about.


Why am I a scientist again? – The concept of a data present

(This is a guest post from Isla Myers-Smith, early-ish career academic at the University of Edinburgh, with a conversation at the end with Gergana Daskalova, an undergraduate in her lab)


Sometimes I like to worry about why I have chosen a scientific career path and the meaning of life and big esoteric questions that really have no particular answer. I have wondered many times why I push myself so hard to succeed in science. I know the pipeline is leaky for early career scientists, and many choose to leave the Ivory Tower to make different contributions with their careers, but, at least for now, I have stuck with the halls of academia, and here is why.


Where do ideas come from, and what counts as “novel”?

Note from Jeremy: this is a guest post from Mark Vellend

**********

During my very first research experience in ecology (mid-1990s), Graham Bell, a famous evolutionary biologist, talked about the forest plants we were studying as if they were essentially just large and slow versions of the algae multiplying rapidly in the highly simplified test tubes of his lab. The other undergraduate field assistants and I (the “Carex crew”) – all of us thrilled to have paid jobs to tromp about in Wild Nature – felt that this perspective sucked all the beauty and wonder out of the forest that we so loved. But it stuck with me.

This is a second guest post expanding upon thoughts and some personal reflections that arose while I wrote a book on community ecology during sabbatical last year. The first post is here, and I couldn’t help noticing that it was given the tag of “New Ideas” by Jeremy. Hmmm…I wonder how we decide whether an idea is “new”? I think the answer has rather important implications for how we judge papers and the scientists who write them. All the top journals want “novelty”, but what is that, exactly? And where do ideas come from in the first place?


Another attempt to stop or steer the phylogenetic community ecology bandwagon

I’m a bit late to this, which is embarrassing because I was involved in it. Back in May, Functional Ecology published a special feature (well, they call it an “extended spotlight”) on community phylogenetics. I helped edit the special feature, along with Anita Narwani, Patrick Venail, and Blake Matthews. Here’s our introductory editorial, which basically argues that phylogenetic community ecology has gone too far down the well-trodden dead end of trying to infer process from pattern and that it’s high time for a course correction.

If it sounds rather like some old blog posts of mine (e.g., this and this), well, that’s no accident. It’s because of those old posts that Anita and Patrick invited me to join the team (they were the driving force behind this, having organized the symposium this special feature grew out of). So there’s a tangible benefit of blogging to add to the rather short list–you might get mistaken for an expert and invited to edit a special feature. 🙂 That my involvement in this project grew out of my blogging is my tissue-thin justification for posting about it.

The four papers in the special feature are quite different in terms of the specific topics addressed and the approaches used to address them. But they’re all nice examples of contrarian ecology, pushing back against the current conventional wisdom.

Kraft et al. use modern coexistence theory to rethink and make precise the disturbingly-popular-for-such-a-vague-idea notion of “environmental filtering”. They then review the literature and find that most studies of “environmental filtering” don’t actually present evidence of environmental filtering, properly defined. They argue that current vague usage of the term overstates the importance of abiotic tolerance in determining community composition. A nice example of something I’ve been thinking about a lot lately–how attempts to quantify vague concepts often just paper over the vagueness, leading to confusion rather than insight. One consequence of their argument (which I agree with 100%, btw) is to undermine a recently-proposed method for generating simulated datasets structured by a specified strength of environmental filtering. Which is kind of a funny coincidence, because the lead author of that method also wrote one of the papers in this special feature.

Gerhold et al. challenge the idea that the phylogenetic relatedness of co-occurring species can be used to infer the mechanisms driving community assembly. They point out that this idea depends on numerous strong assumptions that are weakly supported at best. They suggest more useful things that ecologists can do with phylogenies besides trying (futilely) to use them as a convenient shortcut to discovering community assembly mechanisms.

Venail et al. show, contrary to some recent claims, that species richness, not phylogenetic diversity, predicts total biomass and the temporal stability of total biomass in biodiversity–ecosystem function (BDEF) experiments with grassland plants.

Finally, Münkemüller et al. use evolutionary simulations to show that commonly-used measures of “phylogenetic niche conservatism”, such as phylogenetic signal, actually are very hard to interpret, and often are highly misleading guides to the underlying evolutionary processes governing niche evolution.
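To make that point concrete, here is a toy sketch of my own (not the authors’ simulations, and assuming the phytools R package is installed): a trait drifting by plain Brownian motion (no stabilizing selection, nothing an ecologist would call niche conservatism in any mechanistic sense) still shows strong, statistically significant phylogenetic signal.

```r
## Toy sketch (assumes the phytools package): strong phylogenetic signal
## arises even under neutral Brownian-motion evolution, so the statistic
## by itself says little about the underlying evolutionary process.
library(phytools)
set.seed(1)

tree <- pbtree(n = 100)   # simulate a 100-species pure-birth phylogeny
x    <- fastBM(tree)      # neutral Brownian-motion trait on that tree

## Blomberg's K with a randomization test; expect K near 1, P << 0.05
phylosig(tree, x, method = "K", test = TRUE)
```

A "significant" K here tells you nothing about selection, filtering, or conservatism; it is what neutral drift along the tree looks like.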

It will be interesting to see if these papers have much impact. I predict that Venail et al. will. It’s a comprehensive review of a purely empirical topic, and so I think it will quickly become the standard reference on that topic. The impact of Münkemüller et al. is harder to predict. My guess is it’ll get cited in passing a lot, but that people will mostly keep doing what they’ve been doing, on the (dubious) grounds that there’s no easy alternative. I think Gerhold et al. and Kraft et al. will have little impact, unfortunately. They’re telling community ecologists to abandon an easy-to-follow recipe that purports to allow inference of process from pattern, and community ecologists only reluctantly abandon such recipes. But a minority of ambitious community ecologists will recognize that there’s an opportunity to do really high-impact work by following the lead of Kraft et al. rather than by following the crowd.

The editorial and the papers are open access, so check them out.

Old school literature searches and the fun of reading classic, non-English literature

In my post last week, I pointed out that I haven’t read nearly as much in the past semester as I’d hoped to read. But I did read some things! In fact, as far as I can tell, I think that, during the course of the semester, I read every paper that has been published (and one that hasn’t been) on parasites that attack developing embryos of Daphnia. This has been a lot of fun. First of all: how often can you say that you think you’ve read everything that’s been written on a topic you are studying?* Second, it’s felt like a classic, old school literature hunt, and that’s been a lot of fun.

Since I was a grad student, I’ve seen Daphnia infected with a parasite that attacks the developing embryos. As a grad student, I initially would record it as “scrambled eggs” in my lab notebook, since I tried to use names that were evocative. (This also led to parasites named “scarlet” and “Spiderman”.) Over the years, I started simply referring to it as “the brood parasite”. It was something I was interested in doing more on, but I didn’t have the time and knew I would need to collaborate with a mycologist to do the work well.

Fast forward approximately 10 years to when I arrived at Michigan. Here, I’m fortunate to have a fantastic mycologist colleague, Tim James, who was game for helping me figure out what the parasite is. We recruited a first-year undergraduate, Alan Longworth, to help us work on the project. In the end, the parasite has proved to be really interesting. We have our first manuscript on it in review right now.

One of the key things we wanted to do with the initial brood parasite project was figure out what the parasite was. Microscopy and molecular analyses indicated it was an oomycete, but not particularly closely related to anything that had been sequenced previously. We started thinking about what we might name it if we decided it was a novel species (twitter had some great suggestions focusing on mythological characters that killed babies!), but I also wanted to really dig into the literature.

The first two, most obvious sources to consult were Dieter Ebert’s excellent book on parasites of Daphnia, and a classic monograph by Green on the same topic. Dieter’s book has relatively little coverage of brood parasites, though it does point out that they are common and highly virulent. The Green monograph mentioned a “fungal”** parasite, Blastulidium paedophthorum. To cut to the chase: all the evidence points to our brood parasite being Blastulidium paedophthorum. That’s a lot to keep typing (or saying!), and the opportunity to use “Bp” as the abbreviation was too good to pass up, since it works for both the scientific name (Blastulidium paedophthorum) and the common name we’d been using (brood parasite). So, we’ve declared the parasite Bp.

Backing up again, the description of Bp in Green seemed like a good fit to what we were seeing, so I wanted to read everything I could about the parasite.*** This started me down a path of reading some really old papers, nearly all of which were in foreign languages. Bp was first described by Pérez in 1903, with a follow up paper in 1905. I was kind of blown away that I could easily download these from my dining room! Chatton had a paper on Bp in 1908 (also available from my dining room table!) After that, it was featured by Jírovec in his wonderfully titled 1955 paper. (The title translates to “Parasites of Our Cladocera”. I love the possessive “our”! 🙂 ). And then, crucially, it was the focus of ultrastructure work by Manier, reported in a paper in 1976.

All of the papers in the preceding paragraph were important to figuring out whether we were working with the same parasite. None of them are in English. That added to the fun “I’m going on an old school literature hunt” feel, but also made them more challenging to read.**** Reading them involved a combination of trying to remember my high school French, lots of time with Google Translate, and, ultimately, seeking out translators. It was relatively easy to find translators for the French papers, thanks to a few people being really generous with their time. The Czech one, by Jírovec, took substantially longer to find a translator for, but a Czech Daphnia colleague, Adam Petrusek, was kind enough to put me in touch with someone who did a great job on the translation.

All semester, I’ve been thinking about how much fun this has been. Indeed, it’s part of why I really want to figure out how to set aside time to read more! But it especially came to mind after reading this recent ESA Bulletin piece by David Inouye on the value of older non-English literature. In it, Inouye talks about his own journeys through the older non-English literature, and concludes with this paragraph:

So my paper trail extends back to some of these early natural historians in Austria and Germany. Their work helped give me a much longer historical perspective than I would have had if I’d relied just on the English literature on ant–plant mutualisms, primarily from the 1960s on. Although as a graduate student I was able to track down the original publications from the 1880s in libraries, I see that some of this literature is now freely available on Web resources such as ReadAnyBook.com, the Biodiversity Heritage Library, or old scientific literature scanned by Google Books. And the translation from Google Translate I just tried with some of von Wettstein’s 1888 papers is certainly sufficient to follow most of the content. So perhaps the only barrier to familiarity with older non-English literature for ecologists now is the time required to find it. Time that might be well spent to broaden your perspective and make sure you’re not re-discovering insights from early natural historians.

I completely agree that the longer historical perspective – especially that provided by the non-English literature – has been essential. If not for those papers, we would think that this parasite hadn’t been described before and was in need of a name. And I clearly agree with the second-to-last sentence, which is very much in line with my post from last week (which I wrote before reading Inouye’s piece). So, here’s hoping we all find the time to really dig into the literature, and that, while doing so, we remember that there’s lots of value in digging into the classic, non-English literature.

 

* Okay, fine, it’s not like there are tons of papers on the topic. But it’s still fun to think I’ve read all of them.

** The parasite is an oomycete, and oomycetes are not fungi. But that wasn’t recognized in the early 1970s when Green published his monograph.

*** The references for this paragraph are: Pérez 1903, 1905, Chatton 1908, Jírovec 1955, Manier 1976; full references are given below.

**** I would absolutely love to be multilingual. Sadly, I am not.

 

References

Chatton, E. 1908. Sur la reproduction et les affinités du Blastulidium paedophtorum Ch. Pérez. Comptes Rendus Des Seances De La Societe De Biologie Et De Ses Filiales 64:34-36.

Jírovec, O. 1955. Cizopasníci našich perlooček II. Československá Parasitologie II 2:95-98.

Manier, J.-F. 1976. Cycle et ultrastructure de Blastulidium poedophthorum Pérez 1903 (Phycomycète Lagénidiale) parasite des oeufs de Simocephalus vetulus (Mull.) Schoedler (Crustacé, Cladocère). Protistologica 12:225-238.

Pérez, C. 1903. Sur un organisme nouveau, Blastulidium paedophthorum, parasite des embryons de Daphnies. Comptes Rendus Des Seances De La Societe De Biologie Et De Ses Filiales 55:715-716.

Pérez, C. 1905. Nouvelles observations sur le Blastulidium paedophthorum. Comptes Rendus Des Seances De La Societe De Biologie Et De Ses Filiales 58:1027-1029.

Is it worse to admit a paper was rejected than to not acknowledge helpful anonymous reviews?

Thanks to being on research leave this semester, I am currently working on several manuscripts. Most of these are manuscripts that we are preparing to submit for the first time, but one is a manuscript that was previously reviewed and rejected.

It’s always a bit painful to receive a rejection, but my first thought when reading through the four(!) reviews this manuscript received was that they were really thoughtful and would really help the paper. As I worked last week on editing the manuscript, I was struck by that same thought again: these reviews are really helpful. Which made me think: should we acknowledge these anonymous reviewers?

I’ve benefitted in the past from manuscripts that were originally rejected by one journal and greatly improved by the review process, as I wrote about in my post on a paper that resulted from my dissertation, which was rejected by Ecology and then published in American Naturalist. But, looking back at the acknowledgments section of that paper, it doesn’t acknowledge the contributions of the reviewers and editor from Ecology (nor, I realize now to my great embarrassment, those of Yannis Michalakis, the AmNat AE who was really helpful during the review process).

Are there reasons why I might not want to acknowledge those earlier reviewers? The main reason would seem to be concern about biasing the editor or reviewers at the next journal, if having them know that a paper was rejected from another journal will make it seem subpar. Does that happen? I have no idea. The optimist in me (who may be a Pollyanna) says that we all recognize that papers get rejected for lots of reasons. The realist in me says that everyone has biases (even if not everyone is aware of them), and that we don’t want to make our publishing lives any harder than they need to be.

Thinking about this from the perspective of a reviewer, I can’t recall ever seeing anonymous reviewers acknowledged in a first submission. I also have never been annoyed when, in reviewing a manuscript again for a second journal, the authors don’t acknowledge that it was submitted elsewhere first. Then again, I don’t get annoyed even if they don’t acknowledge anonymous reviewers in the published version.*

Rejection is a part of science. The main thing we can hope for is that the rejections are fair and provide helpful feedback. It’s unfortunate that the culture seems to be set up in a way that makes people unlikely to acknowledge those reviews when they do help. Right now, I’m not sure if I want to buck that trend.

 

*I’m especially unlikely to get annoyed because I’ve forgotten to add this line in myself, even when I’ve been truly grateful for the suggestions of reviewers. Others feel differently, though.

In praise of slow science

It’s a rush-rush world out there. We expect to be able to talk to (or text) anybody anytime, anywhere. When we order something from half a continent away, we expect it on our doorstep in a day or two. We’re even walking faster than we used to.

Science is no exception. The number of papers being published is still growing exponentially, at a rate of over 5% per year (i.e., doubling every 10 years or so). Statistics on growth in the number of scientists are harder to come by – the last good analysis I can find is a book by Derek de Solla Price in 1963 (summarized here) – but it appears the doubling time for the number of scientists, while also short, is a bit longer than the doubling time for the number of papers. This means the individual rate of publication (papers/year) is going up. Students these days are being pressured to have papers out as early as their second year*. Before anxiety sets in, it should be noted that very few students meet this expectation, and it is probably more of a tactic to ensure publications are coming out in year 4 or so. But even that is a speed-up from publishing a thesis in year 6 or so and then whipping the chapters into shape for publication, which seemed to be the norm when I was in grad school. I’ve already talked about the growing number of grant submissions.
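(As an aside, the doubling-time arithmetic is easy to check: growth of exactly 5% per year implies doubling in about 14 years, so “every 10 years or so” corresponds to growth nearer 7%.)

```r
## Doubling time for exponential growth at annual rate r: log(2) / log(1 + r)
doubling_time <- function(r) log(2) / log(1 + r)
doubling_time(0.05)   # ~14.2 years
doubling_time(0.07)   # ~10.2 years
```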

Some of this is modern life. Some of this is a fact of life of being in a competitive field (and there are almost no well-paying, intellectually stimulating jobs that aren’t highly competitive).

But I fear we’re losing something. My best science has often been tortuous, with seemingly as many steps back as forward. My first take on what my results mean is often wrong, and much less profound than my 3rd or 4th iteration. The first listed hypothesis of my NSF postdoc proposal turned out to be false (tested in 2003-2004); I think I’ve finally figured out what is going on 10 years later. My first two papers did not come out until the last year of my PhD (thankfully I did not have an adviser who believed in hurry-up science). But both of them had been churning around for several years, and in both cases I felt like my understanding and my message greatly improved with the extra time. The first of these evolved from a quick-and-dirty test of neutral theory into some very heavy thinking about what it means to build models and test theory in ecology. This caused the second paper (co-authored with Cathy Collins) to evolve from a single-prediction paper into a many-prediction paper. It also led to a paper in its own right, and influenced my thinking to this day. And in a slightly different vein, since it was an opinion paper: my most highly cited paper was the result of more than 6 months of intense back-and-forth debate among the four authors (polite, but literally hundreds of emails) that I have no doubt resulted in a much better paper.

I don’t think I’m alone in appreciating slow science. There is even a “slow science” manifesto, although it doesn’t seem to have taken off. I won’t share the stories of colleagues without permission, but I have heard plenty of stories of a result that took 2-3 years to make sense of. And I’ve always admired the people who took that time; in my opinion they’ve almost always gotten much more important papers out of it. I don’t think it’s a coincidence that Ecological Monographs is cited more frequently than Ecology – papers in Ecological Monographs are often magnum-opus-type studies that come together over years. Darwin spent 20 years polishing and refining On the Origin of Species. Likewise, Newton developed and refined the ideas and presentation behind Principia for over a decade after the core insight came.

Hubbell’s highly influential neutral theory was first broached in 1986 but he then worked on the details in private for a decade and a half before publishing his 2001 book. Would his book have had such high impact if he hadn’t ruminated, explored, followed dead ends, followed unexpected avenues that panned out, combined math with data and literature and ecological intuition and generally done a thorough job? I highly doubt it.

I want to be clear that this argument for “slow science” is not a cover for procrastination, nor for the fear of writing or the fear of releasing one’s ideas into print (although I confess the latter influenced some of the delay in one of my first papers, and probably had a role with Darwin too). Publication IS the sine qua non of scientific communication – it’s just a question of when something is ready to write up. There are plenty (a majority) of times I collect data and run an analysis and I’m done. It’s obvious what it means. Time to write it up! So not all science is or should be slow science. Nor is this really the same as the fact that sometimes challenges and delays happen along the way in executing the data collection (as Meg talked about yesterday).

But there are those other times, after the data is already collected, when there is a nagging sense that I’m on to something big but haven’t figured it out yet. Usually this is because I’ve gotten an unexpected result and there is an intuition that it’s not just noise or a bad experiment or a bad idea, but a deeper signal of something important. Often there is a pattern in the data – just not the one I expected. In the case of the aforementioned paper I’ve been working on for a decade, I got a negative correlation when I (and everybody else) expected a positive correlation (and the negative correlation was very consistent and indubitably statistically and biologically different from zero). Those are the times to slow down. And the goal is not procrastination nor fear. It is a recognition that truly big ideas are creative, and creative processes don’t run on schedules. They’re the classic examples of solutions that pop into your head while you’re taking a walk, not even thinking about the problem. They’re also the answers that come when you try your 34th different analysis of the data. These can’t be scheduled. And these require slow science.

Of course, one has to be career-conscious even when practicing slow science. My main recipe for that is to have lots of projects in the pipeline. When something needs slowing down, you can put it on the back burner and spend time on something else. That way you’re still productive. You’re actually more productive, because while you’re working on that simpler paper, your subconscious mind is churning away on the complicated slow one too.

What is your experience? Do you have a slow science story? Do you feel it took your work from average to great? Is there still room for slow science in this rush-rush world? Or is this just a cop-out from publishing?


*I’m talking about the PhD schedule here. Obviously a Master’s runs on a different schedule, but the same general principle applies.

Science is hard: culturing problems edition

Science is hard. That’s not exactly a newsflash to any of the readers of this blog, but it’s a point that Science has been driving home for me recently. It has reminded me of an earlier Science Is Hard episode that my lab went through. I keep telling myself that we got through that one (and that one was definitely worse), and we’ll get through this one, too.

Both of these Science Is Hard episodes have involved culturing problems, and, in both cases, I feel like we did really good science to figure out the cause of the problem. But it’s the sort of science that goes completely unreported. In many ways, it fits as the story behind the paper – really, for a whole number of papers, because none of them would have existed if we hadn’t figured out the problem.

The first major culture problem occurred when I was at Georgia Tech, after I’d been there for about a year. For the first semester or so, I was mainly ordering things and setting up the lab. And then, in my second semester, we started doing research. There were definitely stumbling blocks (it took us a long time to get our algae chemostats really going, for example), but things were moving along.

Until they weren’t. At some point, we started having lots of problems with animals dying. It was so frustrating, because we had felt like we were about to go full steam ahead, only to find ourselves unable to do any experiments. So, of course, we started trying to figure out why.

My grad student Rachel was in her first year in the lab, and had recently started doing some experiments. She was really worried that maybe she was doing something wrong in the lab that was causing the problems. And, frankly, the timing was a little suspicious, as the problems had started right around when she really started to do work in the lab. So, she and I set up an experiment side-by-side, doing everything at the same time. We both had tons of animals die. That ruled out the Rachel hypothesis (which was a relief to Rachel and to me!), but didn’t get us much closer to figuring out what was going on.

Rachel ended up having the key insight that got us moving on the right track: she was the one who first noticed that the deaths were a beaker-level phenomenon. Either all of the animals died in a beaker or none died. Based on that observation, we put some beakers in the acid bath, rinsed them well with DI water, and then set up animals in them. None died. Breakthrough!

So, it was something on the beakers. But what? And when was it getting on there? To get at that, we first did an experiment where we acid washed a bunch of beakers, and then rinsed half with DI water and put the other half through our normal dishwashing process (which involves scrubbing with soapy water, then rinsing with tap water, then putting them in the dishwasher for a tap rinse followed by DI rinses. We are serious about getting our beakers clean.) The animals in the beakers that had only been rinsed with DI water all lived. The ones that had gone through the regular dishwashing process died. More progress!

So, then we needed to figure out which part of the dishwashing process was the problem. We had the people who ran our water system install a port between the DI tanks and the dishwasher, so that we could draw water directly from the tanks that fed the dishwasher. We then used the DI water from that new port to rinse the dishes by hand (after washing them with soapy water and rinsing with tap water), and compared those to beakers that were rinsed on a DI cycle in the dishwasher. Again, the animals in beakers we’d rinsed by hand did great; the ones in beakers that had gone in the dishwasher died.

By this point we’d been troubleshooting for months, but we had at least made lots of progress. Around then, Al Dove from the Georgia Aquarium heard about our problems and very kindly offered to run some water samples for us. We put a beaker in the dishwasher upright to collect water and sent it over. The copper in the water was 74 µg/L. As my colleague Terry Snell pointed out, the LC50 of copper for the rotifer Brachionus is 30 µg/L.

At that point, I was so ready to buy a new dishwasher and put the problem behind us! I had been using a dishwasher purchased at Lowe’s, because that’s what the lab I’d been in as a grad student had done, and there hadn’t been any problems. But I decided that, given that the problem was with the dishwasher, I needed to get a fancy lab-grade dishwasher. So, I did. When the new dishwasher came and they took the old one out to replace it, they found that the DI line had been attached with a copper fitting. This is a huge no-no, since DI water is very pure and leaches copper from the fitting into the water. But it explained why we had so much copper in the water!

At that point, our problem was identified, but not solved. We have a LOT of glassware in my lab (thousands of beakers), and we had no way of knowing which had gone through the dishwasher while the copper contamination was occurring. So, we concluded that we had to acid wash every piece of glassware in the lab. That took weeks of work, mostly done by my excellent technician, Jessie.

In the end, we lost about a semester of work due to the copper problem. We haven’t figured out the source of our current problem yet. One thing that I find interesting is that, when I run into a problem like this, the first person I contact is my PhD advisor, Alan Tessier. I’d like to think I’m a grown up scientist now, but I still really value advice from Alan!

For now, we’re going through all the troubleshooting. Acid washing beakers didn’t help, nor did using brand new beakers. So, it doesn’t seem to be a glassware problem. Now we’re on to testing whether it’s an issue with the water. One possibility is that something about the water has changed as we stored it over the winter. (We culture our Daphnia in filtered lake water.) Perhaps some compound the Daphnia really like has broken down over the winter. So, we’ll go out and get new water and see if that solves the problem. I was dragging my heels on going out and breaking through the ice, since it seems like a major pain, but my lab is really excited about our upcoming winter limnology expedition. And we’re all really excited about the prospect of getting this problem solved soon!

We’ll get through it. But, boy, science is hard.

 


Unusual uses of technology for ecological studies

Last year, I attended the defense talk of Jasmine Crumsey, who is now a postdoctoral researcher at Cornell University. Her PhD dissertation focused on the impacts of exotic earthworms on soil carbon dynamics. Her work is notable because of what she found (exotic earthworms alter carbon storage, but the exact effect differs among species, depending on their burrowing patterns), and also because of how she found it: she worked with radiologists from the University of Michigan medical school to reconstruct and quantify earthworm burrow systems using X-ray computed tomography (CT). How cool is that?*

Legend: The X-ray CT scanner with one of Jasmine’s mesocosms in it, rather than a human. (Photo credit: Jasmine Crumsey)

This made me think of my post on unusual suppliers of research equipment (the winner being the use of vibrators by pollination biologists), but it’s on a whole different scale in terms of cost and technology!

Do you know of other examples of people using very fancy, very expensive equipment for an “off-label” use in ecological research? Have you done this for your own research?

 

*Jasmine was not the first person to use this approach – see her Ecology paper for references to others who did this before her. This was just the first time I’d heard of this.

**You can also learn more about Jasmine’s research and see video of her electroshocking to get worms out of their burrows by watching this video, starting around 9:20.

Do individual ecologists review in proportion to how much they submit? Here’s the data! (UPDATEDx4)

One oft-voiced concern about the peer review system is that it’s getting harder to find reviewers these days (e.g., this editorial from the EiCs of various leading ecology journals). Which isn’t surprising, given that academics have strong incentives to submit papers, but much weaker incentives to review them.

A few years ago, Owen Petchey and I proposed a reform known as PubCreds, the purpose of which was to oblige authors to review in appropriate proportion to how much they submit. For instance, if each paper you submit receives two reviews, then arguably you ought to perform two reviews for every paper you submit.

Owen and I pitched PubCreds to various individuals and groups who might’ve had some power to make PubCreds happen, and we didn’t really get much traction. One reason among many is that people questioned the need for PubCreds. Heck, even some of the authors of the editorial linked to above questioned the need for it! This was somewhat frustrating to Owen and me, but in retrospect I can understand it. The fact was, we didn’t have much hard data demonstrating a breakdown of the existing peer review system, at least not a breakdown so serious as to make major reform a matter of urgency.

So Owen and I decided to go get some data. The online ms handling systems that most journals have used for years compile data on how often individuals submit, how often they’re invited to review, and how often they agree to review. So we approached the EiCs and managing editors of something like 30 ecology journals, asking if they’d be willing to share (anonymized) data. The only journals from which we received a positive response that was then followed up were Population Ecology (which didn’t have enough data to be useful), and the journals of British Ecological Society (BES), of which Lindsay Haddon was managing editor at the time. Thanks to Lindsay’s hard work extracting the relevant data from Manuscript Central, we were able to compile an anonymized dataset on how often individuals submitted to, reviewed for, and were invited to review for, the four BES journals from 2003-2010. Our paper analyzing the data has just been published (Petchey et al. 2014; open access).

Here are the headline results (read the paper for details; it’s short).

  • Our main question of interest was whether individuals’ reviewing and submission activities were in balance. In our dataset, “in balance” meant “doing 2.1 reviews for every paper you submitted”, since the average paper in the dataset received 2.1 reviews. (UPDATE: defining “in balance” relative to the mean number of reviews per paper corrects for rejection without review. If you were to do a similar analysis for, say, Science and Nature, you’d presumably find that the average paper receives far fewer than 2.1 reviews, because many papers are rejected without review. UPDATE #2: In the comments, Owen jogs my memory, reminding me that we included in the analyses only submissions sent out for review.) For 64% of individuals in our dataset, the answer is “no” – they either did at least twice as many reviews as needed to balance their submissions, or less than half as many. So the majority of individuals are either “overcontributors” or “undercontributors” to the peer review system. (A toy sketch of this classification appears just after this list.)
  • The relative abundance of over- vs. undercontributors depends on the assumptions you make about how to distribute “responsibility” for multi-authored papers (e.g., if you’re corresponding author on a multi-authored paper, does that mean you personally should do 2.1 reviews to balance that submission?) Depending on assumptions, 12-44% of individuals did at least twice as many reviews as needed to balance their submissions, while 20-52% did less than half as many.
  • Undercontributors mostly didn’t agree to do all the reviews they were invited to do. So undercontributors mostly didn’t undercontribute due to lack of opportunity to review, at least not completely.
  • Researchers who submitted more were more likely to accept invitations to review.
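To make the classification concrete, here is a minimal sketch (my own toy code, not the analysis code from the paper), using the 2.1 reviews-per-submission figure and the twice/half cutoffs from the first bullet:

```r
## Minimal sketch of the balance classification (toy code, not the paper's).
## "Balance" = reviews done equal to submissions x mean reviews per paper;
## overcontributors do at least twice that, undercontributors less than half.
reviews_per_paper <- 2.1   # mean reviews received per submission in the dataset

classify_contributor <- function(reviews_done, papers_submitted) {
  expected <- papers_submitted * reviews_per_paper
  ratio    <- reviews_done / expected
  if (ratio >= 2) {
    "overcontributor"
  } else if (ratio < 0.5) {
    "undercontributor"
  } else {
    "roughly in balance"
  }
}

classify_contributor(reviews_done = 10, papers_submitted = 2)  # overcontributor
classify_contributor(reviews_done = 2,  papers_submitted = 5)  # undercontributor
classify_contributor(reviews_done = 6,  papers_submitted = 3)  # roughly in balance
```

(The real analysis also had to decide how to split responsibility for multi-authored papers, which is what drives the 12-44% and 20-52% ranges above.)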

Obviously, few ecologists submit to, and review for, only the BES journals. But there’s no reason to think that this biases our results, as far as we can see. So we’re reasonably confident that our results wouldn’t change if someone were somehow able to compile a larger dataset from more journals. (UPDATE: I’m sure some people who are undercontributors to BES journals would be in balance, or even overcontributors, if you accounted for their reviewing and submitting for other journals. But I’m sure some people who are overcontributors to BES journals would be in balance or even undercontributors if you accounted for their reviewing and submitting to other journals. And I’m sure some people who are in balance in our dataset would not be in balance if you had data from many more journals. So when I say our results are unbiased as far as we can tell, what I mean is that Owen and I can’t see any reason why ecologists would tend to overcontribute to BES journals compared to ecology journals as a whole. Or tend to undercontribute to BES journals compared to ecology journals as a whole. Or tend to make more balanced contributions to BES journals than to ecology journals as a whole. But if you can see a reason why BES journals might represent a biased sample, please say so – I really do want to hear people’s thoughts on this!)

(UPDATE #4: In the comments, Douglas Sheil suggests a potential source of bias in our estimate of the proportion of people who are in balance. Briefly, the fact that we’re working with a sample of journals rather than a census of all journals might cause our data to underestimate the proportion of ecologists who are in balance, even if our data are a random sample (i.e. people react towards BES review requests the same way they react towards non-BES review requests). I just did a quick and dirty simulation to check out this suggestion and it looks like there might be something to it. Hard to say much more than that without a much more thorough simulation study. And even then it might not be possible to say more, since I’m sure that the existence and strength of any bias will be sensitive to the assumptions one makes, and many of those assumptions can’t be checked with the data we have. I doubt I’ll be able to make time to really look into this thoroughly, but if somebody wanted to pick up this idea and run with it, I could try to pitch in…)
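For the curious, here is roughly what such a quick-and-dirty simulation might look like (the setup and all the numbers are my assumptions, not the simulation described above): every simulated ecologist is perfectly in balance across all journals combined, but we only observe the slice of their activity that happens to land at one focal publisher.

```r
## Quick-and-dirty sampling-bias check (all assumptions mine): everyone is
## perfectly in balance ACROSS ALL JOURNALS, but we only observe the focal
## publisher's slice of their activity.
set.seed(42)
n_people   <- 10000
p_focal    <- 0.1   # assumed share of each person's activity at the focal journals
subs_total <- 10    # assumed lifetime submissions per person
revs_total <- 21    # 2.1 reviews per submission: perfectly balanced overall

obs_subs <- rbinom(n_people, subs_total, p_focal)   # submissions observed here
obs_revs <- rbinom(n_people, revs_total, p_focal)   # reviews observed here

keep  <- obs_subs > 0   # restrict to people observed submitting here (assumption)
ratio <- obs_revs[keep] / (obs_subs[keep] * 2.1)

## Proportion who LOOK out of balance (>= 2x or < 0.5x) at the focal
## journals alone, despite being perfectly balanced overall.
mean(ratio >= 2 | ratio < 0.5)
```

In runs like this, a substantial fraction of perfectly balanced contributors land outside the in-balance band purely from sampling noise, which is the direction of bias Douglas Sheil suggested. Whether the effect is big enough to matter for our 64% figure is exactly what a more thorough simulation would have to pin down.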

Overall, I was pleasantly surprised by the results. I was too cynical–I thought we’d find a very large proportion of people reviewing very little relative to how often they submit, balanced by a small proportion reviewing a lot relative to how often they submit. In fact, overcontributors aren’t that rare relative to undercontributors, and might even be more common than undercontributors. I was also pleasantly surprised that, on average, people who submit more are more likely to accept invitations to review. So score another victory for data over anecdotal impressions.

But having said that, undercontributors aren’t rare in an absolute sense, and so the only thing that keeps the system from breaking down is that the undercontributors are balanced by a sufficient number of overcontributors. Obviously, our data don’t give us any basis for predicting whether or how the relative abundance of over- and undercontributors might change in future.

UPDATE #3: Let me emphasize a point made in the paper, which I should’ve emphasized more in the post. We have no information on why individuals over- or under-contribute. There could be many reasons, some better than others. For instance, some undercontributors may serve as editors, and so decline requests to review because they contribute to the peer review system via their work as editors. Whatever the reasons for individuals over- or undercontributing, it’s of practical interest to know how common such individuals are.

Owen and I haven’t done much with PubCreds for a while now, but I still find these data interesting in their own right, and hope others will as well. The dataset is on Dryad for anyone who wants to explore it.

Apologies for the self-promotional post, I don’t ordinarily post on my own papers. I’m only doing it because it’s related to a topic I’ve blogged about in the past, and because I’d be very interested to hear what folks think of our results. Looking forward to your comments.