It’s been widely suggested that one solution to the increasing difficulty of obtaining peer reviews is sharing of reviews among journals. If a ms is rejected by one journal, the ms (appropriately revised if necessary) and the reviews can be forwarded to another journal, which can make a decision without the need for further reviews. That’s the idea behind peer review cascades, such as the arrangement by which many Wiley EEB journals offer to forward rejected mss and the associated reviews to Ecology & Evolution. It was also the idea behind the (late, lamented) independent editorial board Axios Review.
And it’s the idea behind a practice some folks were talking about on Twitter a little while back: authors themselves forwarding the reviews their rejected ms received to a new journal along with the revised ms.
Below the fold: a poll asking if you’ve ever done this, and then some comments from Meghan, Brian, and me. Answer the poll before you read the comments.
Who pays the publication fee for your papers, when there is one?
When the authors are all members of the same lab, I assume the PI ordinarily pays the fee if there is one. That’s certainly what I do.
Just recently I published an author-pays open access paper with a grad student whom I co-supervised with a colleague, and there’s a second such paper in the works. I had been hoping to split the publication fees with my colleague. But it may come down to whoever has the most grant money.
What about papers by working groups or other big collaborations? Who pays the publication fee then? Does whatever funding source paid for the working group also pay the publication fee? Or does some working group member pay the fee from one of their grants, or from some other source available to them such as an institutional open access fund? What if more than one person in the working group has the ability to pay? In that case I guess the first author, or the first author’s PI, would pay?
Same questions for the data hosting fees charged by some depositories, when depositing data associated with a publication.
ht to a correspondent for suggesting this post idea.
Regression through the origin is when you force the intercept of a regression model to equal zero. It’s also known as fitting a model without an intercept (e.g., the intercept-free linear model y=bx is equivalent to the model y=a+bx with a=0).
Every time I’ve seen a regression through the origin, the authors have justified it by saying that they know the true intercept has to be zero, or that allowing a non-zero intercept leads to a nonsensical estimated intercept. For instance, Vellend et al. (2017) say that when regressing change in local species richness vs. the time over which the change occurred, the regression should be forced through the origin because it’s impossible for species richness to change if no time passes. As another example, Caley & Schluter (1997) did linear and nonlinear regressions of local species richness on the richness of the regions in which the localities were embedded. They forced the regressions through the origin because by definition regions have at least as many species as any locality within them, so a species-free region can only contain species-free localities.
Which is wrong, in my view. Ok, choosing to fit a no-intercept model isn’t always a big deal (and in particular I don’t think it’s a big deal in either of the papers mentioned in the previous paragraph). But sometimes it is, and it’s wrong. Merely knowing that the true regression has to pass through the origin is not a good reason to force your estimated regression to do so.
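Here’s a minimal sketch of the problem, using simulated data (the scenario and all numbers are hypothetical, chosen purely for illustration). The true relationship passes through the origin but is slightly nonlinear, and the data are observed far from the origin. Forcing the fitted line through the origin then distorts the slope and worsens the fit within the range of the actual data, even though the “true intercept is zero” justification holds.

```python
# Hypothetical illustration: the true curve y = sqrt(x) passes through the
# origin, but the data are only observed on x in [5, 10]. Forcing a *linear*
# fit through the origin degrades the fit within the observed range.
import numpy as np

rng = np.random.default_rng(42)

x = rng.uniform(5, 10, 200)                 # data observed far from the origin
y = np.sqrt(x) + rng.normal(0, 0.05, 200)   # true process goes through (0, 0)

# Fit with a free intercept: y ~ a + b*x
X = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# Fit forced through the origin: y ~ b0*x
(b0,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

# Compare prediction error within the observed data range
rmse_int = np.sqrt(np.mean((y - (a + b * x)) ** 2))
rmse_orig = np.sqrt(np.mean((y - b0 * x) ** 2))
print(f"with intercept:  slope={b:.3f}, RMSE={rmse_int:.4f}")
print(f"through origin:  slope={b0:.3f}, RMSE={rmse_orig:.4f}")
```

The point is not that the no-intercept model is always wrong; it's that "the true line passes through the origin" is a claim about the process at x = 0, while your fitted line is an approximation over the range of your data. If the relationship is even slightly nonlinear, the best linear approximation over the observed range need not pass through the origin.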
A few months ago, Stephen Heard wrote a blog post that prompted us to have a brief twitter discussion on whether we sign our reviews. Steve tends to sign his reviews, and I tend not to, but neither of us felt completely sure that our approach was the right one. So, we decided that it would be fun for us to both write posts about our views on signing (or not signing) reviews. In the interim, I accepted a review request where I decided, before opening the paper, that I would sign the review to see whether that changed how I did the review. So, in this post I will discuss why I have generally not signed my name to reviews, how it felt to do a review where I signed my name, and what I plan on doing in the future.
So you’ve just been offered your first* tenure-track faculty position–congratulations! Perhaps you even have multiple offers–multiple congratulations! As a brand new faculty member, you now have to do the first of many things you’ve probably never been trained to do: negotiate salary, startup, and possibly other things such as start date or teaching duties. Here’s some advice from Meg, Brian, and me.
It’s aimed at ecologists, but some of it may generalize to other fields. And it’s based primarily on our experiences and knowledge about R1 and R2 universities or their approximate equivalents in the US and Canada, but some of it may generalize to other sorts of institutions and countries. In offering this advice, we’re just sticking with what we know. We encourage commenters to chime in with their own advice, including advice applicable to other contexts.
Note from Jeremy: this post is by Meg and originally ran in 2015 under the title “I have data, ESA, I promise!” I’m re-upping it because it’s timely.
A few years ago, I asked a senior colleague for feedback on something I’d written. He agreed, and a couple of days later, sent an email saying “Is there a good time to discuss this?” I immediately thought it must mean he’d really hated what I’d written. I replied, suggesting a few times in the next couple of days. In his reply, he chose the latest of those times, saying he needed more time to mull it over. That confirmed my worst fears – it was so bad he needed extra time to figure out how to tell me how bad it was! After spending some time getting no other work done because I was so distracted, I decided to write to say that, based on his emails, I was worried that there was a major problem with what I’d written. He replied immediately saying not to worry, that it read very well, and that he just had a few ideas that he thought would be easier to discuss in person.
I was thinking of this situation again recently when I was emailing a student in my lab. She’d emailed about a fellowship proposal she’s working on, laying out two different options for it. My thinking, when reading the ideas, was that both of them could work, but that there might also be other options, and that it would probably be best to discuss all the options in person. Looking at my schedule and comparing with hers, I could see that we wouldn’t be able to meet until the end of the week. So, I initially wrote a reply that said, “Can we meet Friday at 11 to chat about this?” In the brief pause before hitting send, I realized that, if I were in her shoes, I would spend the rest of the week trying to interpret what that email had meant, most likely assuming it meant something bad. I then realized that could be easily addressed by instead saying something like, “Both of these ideas look good to me, but there might be other options worth considering, too. Are you free to meet Friday at 11 to discuss the options more?”
After writing about being a scientist who deals with anxiety, one question I’ve been asked repeatedly is what faculty can do to make their labs friendlier to students with mental health issues. I’m generally unsure of how to respond to this – so much depends on each particular situation. But avoiding unnecessary vagueness in emails is one pretty straightforward, simple thing that people can do to make academia friendlier to everyone, but perhaps especially to those with underlying anxiety issues.
A couple of nights ago, I checked the weather forecast for the next day, in part to see how cold it would be for my morning run. I was surprised to see that the forecast was for 3-6 inches of snow overnight. (I hadn’t realized a storm was coming!) I had no interest in trying to slog through a run in 3-6 inches of wet, unshoveled snow in the dark, so I decided I would work when I first got up in the morning (in that wonderfully quiet time when I’m the only one in the house who is awake) and go to the gym at the end of my work day. And that’s what I did. I got up, made myself some tea, sat down to check twitter, and then started working, which included replying to some emails that had been hanging around in my inbox.
That was when I remembered a conversation I’d recently had about whether it’s okay to send work emails outside of “typical” work hours. This is a topic that comes up on twitter and facebook sometimes, too. The concern is that, if you’re sending emails early in the day or in the evening or on weekends: 1) you have an unhealthy work/life balance and/or 2) you are sending a message to others that they should be working at those times, too. I fully, completely support having interests outside of work, and think that working long hours is unhealthy and unproductive. But I don’t think the way to achieve healthy work habits is to be proscriptive about when people work, or to shame others for working outside the hours that we deem acceptable.
Over the years, I’ve heard people talk about mentoring plans and individual development plans (IDPs), and always thought they sounded like they could be worth trying some time. But I never made it a high priority, and so never actually got around to doing them with my lab. I got as far as starting to do an IDP for myself to test it out, but never got further than that. Then, last year, I had to do a mentoring plan with one of my students, as a requirement of her graduate program. As soon as I did that one with her, I realized I needed to be doing these with everyone in my lab, including grad students, postdocs, technicians, and undergrads. Here, I’ll describe what we include in our mentoring plans, talk about some of the ways they’ve been helpful, and ask for ideas on some things I’d like to add or change.
A while back I argued that we’ll never get rid of salesmanship in science, and wouldn’t want to. But there are more and less effective ways of “selling” your work (i.e. conveying to others why it’s interesting or important).
Here’s perhaps the worst way to “sell” your work: just asserting how great it is. This is totally ineffective. If your work is great, telling the reader that it’s great is superfluous. If your work isn’t great, telling the reader it is won’t convince the reader otherwise. As the old adage goes: show, don’t tell. And don’t just take my word for it, take NSF’s (see tip #3).
Indeed, merely asserting that your work is great is actually worse than ineffective. It turns readers against you, and rightly so. It’s the reader’s place to decide if your work is great, not yours. So if you assert your work is great, it comes off as you trying to usurp the reader’s role.
Fortunately, it’s easy to avoid simply asserting that your work is great. Never use any of the following words to describe your own work or proposed work: