We’ll have a more serious post on the March For Science next week, but in the meantime here’s a compilation of some of the best signs, where “best” is operationally defined as “signs I really liked” — whether because they pithily summarized what I think are good messages for the March to send, or just because they were funny. Share your favorites in the comments!
Note from Jeremy: this is a guest post by Greg Crowther.
We academics sure love to discuss authorship, don’t we? Previous posts on this blog have addressed authorship issues such as author order and criteria for authorship. The latter post dove deeply into the issue of defining what sorts of contributions are substantial enough to merit authorship. I thought this post and the corresponding comments were great . . . but too focused on one side of authorship at the expense of the other side.
Before I explain what I mean by that, consider the following mini-case studies:
I returned this weekend from the IBS 2017 meeting in Tucson. It was a great meeting. The organizers moved it on fairly short notice from Brazil to Tucson due to concerns about Zika. This resulted in a lot of extra work for the organizers, but it didn’t show; it was a well-run meeting. And it was my favorite type of meeting: a few hundred people organized around a fairly specific topic.
I’m not going to recap individual talks – check out the Twitter feed for many great talks (#ibstucson). As is usual with me, such meetings inspire big-picture musings. This one probably more than most, since the last time I was able to attend IBS was the inaugural meeting in Mesquite, Nevada in 2003. I noticed a lot of differences over the 14-year gap.
Public service announcement: I’m on leave until July 1. I’m working on a book. I’m not doing any reviews during this time. I’m announcing this here in the hopes that it’s an efficient way to alert lots of editors. I don’t like having to reply individually to every review request I get if I’m just going to decline them all. And it’s a pain to have to log into every journal’s editorial manager system and change my availability status.
Posts will continue as usual-ish, because I’ll be trying to use the blog to help me write the book. But I might be posting a bit less.
UPDATE: Since this came up in the comments, I should note that no, I’m not shirking my obligations to the peer review system by taking a 6-month break from reviewing. As I’ve written in the past, I believe each of us has an obligation to do at least as many reviews as we receive (unless you can’t do so due to lack of sufficient invitations to review). Since starting my postdoc, I’ve always done more than 2 reviews for every ms I submit or co-author (counting rejected and resubmitted mss as new ones, obviously), and in any given calendar year the ratio is usually more like 3:1 or 4:1. I’m going to return to that practice after my sabbatical. So I think it’s fine for me to submit a few papers in the next 6 months without doing any reviews, because on a longer-term basis I’m fulfilling my professional obligations to the “peer review commons”.
The deadline for nominations for the Jasper Loftus-Hills Young Investigator Awards is Jan. 1. Details here.
A rare retraction in ecology, from Biology Letters. There’s no suggestion of misconduct, merely honest errors the last author worked hard to fix. A second paper, in GEB, also is affected by the errors, but GEB will allow the authors to publish a corrected version. Our own Brian McGill, EiC of GEB, is quoted in the linked article. Mistakes happen in science, and discovering you’ve made one is really stressful, so kudos to the author for doing the right thing and correcting the record.
Phil Davis of Scholarly Kitchen with an overview of different approaches to “portable peer review”. Axios Review, for which I am an editor, gets a lengthy shout-out. Here’s my most recent post on Axios and why you should consider trying it.
Thanks for reading everyone! Even the many of you who found us via searches and didn’t find what you were looking for. 🙂
Jeremy had a post on Monday musing on a propensity for researchers who start out doing basic research to mix in applied research later in their careers. I think the core observation (on average, of course, not for every individual) is correct. And there were a lot of spirited explanations of why this is in the comments. His framing of a single trade-off dimension between basic and applied is extremely common, and embedded in the funding structures of many nations’ scientific agencies (e.g. in the US, NSF only funds basic research while the US Department of Agriculture funds applied research).
But I’ve always found that trade-off limiting. Among other things, it implies that something cannot be both basic and applied, which I reject (and which Don S rebutted pretty spiritedly in the comments as well). I have found the notion of two trade-off axes put forth by Donald Stokes, in his book Pasteur’s Quadrant: Basic Science and Technological Innovation, to be a more useful framing (also see a decent summary of the book in Wikipedia).
Also this week: why “crunch mode” doesn’t work, the difficult question of “fair” pay for postdocs, rethinking economics science, a high profile ecology paper comes into question, are scientists becoming less productive, confirmation bias > you, is torture ok if you do it to ggplot, WHEN WILL I HEAR FROM NSF?!?! and more. Lots of good stuff this week!
In a rare foray onto Twitter the other day, I made a silly joke about “the field”, as if there were literally just one field where ecologists collect data:
Which prompted Meg to note that the place she does her field work isn’t a field at all (or if it is, it’s always flooded):
Which got me wondering: what is the origin of the term “field work”? How did a term that originally meant (roughly) “farm labor” come to mean “practical research conducted in any natural environment, as opposed to a lab or office”: