A few months ago, Stephen Heard wrote a blog post that prompted us to have a brief twitter discussion on whether we sign our reviews. Steve tends to sign his reviews, and I tend not to, but neither of us felt completely sure that our approach was the right one. So, we decided that it would be fun for us to both write posts about our views on signing (or not signing) reviews. In the interim, I accepted a review request where I decided, before opening the paper, that I would sign the review to see whether that changed how I did the review. So, in this post I will discuss why I have generally not signed my name to reviews, how it felt to do a review where I signed my name, and what I plan on doing in the future.
I was very surprised by the results of Meg’s recent poll on what reviewers mean when they say that, yes, they’d be willing to review a revised version of an ms. 34% mean not merely that they’re willing to review a revised version, but that they want to see a revised version to make sure the authors have addressed their concerns. Like Meg, I had no idea that reviewers who feel that way are such a large minority!
Which got me thinking about the roles of reviewers and editors, and whether my own view of those roles is as universal as I had (naively?) assumed. So below is a one-question poll. Do you see reviewers as advisers to the editor? Or do you think editors should ordinarily defer to reviewers, so that all reviewers should be satisfied before a paper is accepted for publication?
I am attending a Festschrift this week for Michael Rosenzweig. Make no mistake, he is still actively doing science, but with 50+ years of scientific career, it seems like a good time to reflect on what an impressive career he has had. Just for full disclosure upfront, he was my PhD adviser, so I'm hardly an unbiased reporter, but of course that gives me a close perspective.
Mike was awarded the Ecological Society of America’s Eminent Ecologist award in 2008 and he has well over 100 papers, many massively cited, and three books, so I imagine many are familiar with his published work, and it would take too much space to summarize it anyway. I want to offer several more reflective and in some cases more personal thoughts. Take them as a reflection of my respect and appreciation for Mike or my musings on the ingredients of a good scientific career as you wish.
Based on my interest in authorship practices in ecology, I decided to look at papers published in Ecology in each of the past seven decades to see how corresponding authorship changed over that time.* I looked at the first (or second**) issue of Ecology in 1956 and every ten years thereafter.
tl;dr version of the results: Not surprisingly, the number of authors increased over time. For corresponding authorship, I found that, in 1996 and earlier, the corresponding author was almost never indicated. Looking every 5 years from 2001-2016, the first author*** was usually the corresponding author, though expanding the analysis to include AmNat and Evolution**** suggests that some of the changes might be due to some of the more mundane aspects of publication.
Recently, my department has been discussing whether to (re)create a course for first year grad students that would be a “professors on parade” sort of course – that is, a course where a different faculty member leads the course each week. This proposal is in response to new grad students saying they’d like more opportunities to get to know faculty early in their grad careers. Depending on the format of the course, it could also help with another request from students: more training in basic academic skills (e.g., how to give a talk, how to make a poster, etc.).
One thing this discussion has left me wondering is how other departments do this, and how well it works in those departments.* So, today, I’m doing a survey related to how this works in other places. I will follow up tomorrow with a post on my idea for a different twist on this sort of course – which I think is exciting but also perhaps doomed to fail. (edit: here’s the link to the follow up post)
Over the years, I’ve heard people talk about mentoring plans and individual development plans (IDPs), and always thought they sounded like they could be worth trying some time. But I never made it a high priority, and so never actually got around to doing them with my lab. I got as far as starting an IDP for myself to test it out, but never got further than that. Then, last year, I had to do a mentoring plan with one of my students, as a requirement of her graduate program. As soon as I did that one with her, I realized I needed to be doing these with everyone in my lab, including grad students, postdocs, technicians, and undergrads. Here, I’ll describe what we include in our mentoring plans, talk about some of the ways they’ve been helpful, and ask for ideas on some things I’d like to add or change.
Dan Bolnick just had a really important – and, yes, brave – post on finding an error in a published study of his that has led him to retract that study. (The retraction isn’t official yet.) In his post, he does a great job of explaining how the mistake happened (a coding error in R), how he found it (someone tried to recreate his analysis and was unsuccessful), what it means for the analysis (what he thought was a weak trend is actually a nonexistent trend), and what he learned from it (among others, that it’s important to own up to one’s failures, and there are risks in using custom code to analyze data).
This is a topic I’ve thought about a lot, largely because I had to correct a paper. It was the most stressful episode of my academic career. During that period, my anxiety was as high as it has ever been. In the past, a few people have suggested I should write a blog post about it, but it still felt too raw – just thinking about it was enough to cause an anxiety surge. So, I was a little surprised when my first reaction to reading Dan’s post was that maybe now is the time to write about my similar experience. When Brian wrote a post last year on corrections and retractions in ecology (noting that mistakes will inevitably happen because science is done by humans and humans make mistakes), I still felt like I couldn’t write about it. But now I think I can. Dan and Brian are correct that it’s important to own up to our failures, even though it’s hard. Even though correcting the record is exactly how science is supposed to work (and I did correct the paper as soon as I discovered the error), it still is something that is very hard for me to talk about.
If you didn’t know, in economics and political science, people are hired for faculty positions based in large part on their “job market paper”. As in, one paper, ordinarily from their Ph.D. work and often not even published yet. Number of publications matters relatively little (though apparently it matters more in political science than in economics). Economics even has a centralized repository of job market papers; that’s how much they matter.
I am curious to hear what you think of this, and whether you think this approach or something like it could be an improvement on current practices in ecology. Personally, I think current faculty hiring practices in ecology are mostly pretty reasonable (see also), and so don’t think this would be a net improvement on current practices in ecology. But I think it’s not so obviously a bad idea as to be uninteresting to think about. I find it useful to think about the practices of other fields and whether they’d transfer to ecology. It helps me look at standard practice in ecology with fresh eyes. A few thoughts to get the ball rolling:
Last year, when I wrote a post with advice on strategies (and reasons) for working more efficiently, the first strategy on my list was:
- Recognize what is “good enough”. As the saying goes, perfect is the enemy of good. And recognize that “good enough” will vary between different tasks. It’s okay if the email you are sending to your lab about lab meeting isn’t perfectly composed.
In this post, I want to go into that idea more, since I think it’s really important (and since it’s one I need to continually remind myself of!)
Preface: This post is a bit different than a typical post for me (or any of us here at DE!) It covers an interesting bit of Daphnia biology that I find myself bringing up a lot when I talk to people more generally about my research. People seem to find it surprising and interesting, so I decided to write a post on it in the hopes that others find it interesting, too.
If I put a bunch of different Daphnia under a microscope in front of you, you’d probably think they all look pretty much the same.* As an example, when keying out the species I’ve done the most work on, Daphnia dentifera**, using the excellent online Haney et al. key, these are two of the first traits you need to focus on:
Those aren’t exactly traits that are overwhelmingly obvious, are they?
I think it is because of their morphological similarity that most people are so surprised when they learn just how old the genus Daphnia is. It’s really old.