Friday links: revisiting old papers, life after lab closure, and more (UPDATED)

Also this week: how being a cyclist is like being a woman, scads of advice for navigating the tenure track, against rejection without review, and more. Oh, and the National Science Foundation has been reading our old posts. At least, I like to think so. 🙂

From Meg:

Read this great post by Andrew David Thaler at Southern Fried Science. It covers several stories from the past week that relate to women in science, and has the blunt title “These things are related”. As he summarizes,

So, to reiterate, in the last week, we’ve been asked to ignore the profoundly misogynistic behavior of one long-departed scientist because his contributions to the field are too important; a graduate student is suing her former university for what appears to be systematic sexual harassment by her superiors; 1 in 5 researchers in the field report being victims of sexual assault; and one of the leading scientific journals thinks it’s perfectly appropriate to feature a dehumanizing image of sex workers on their cover.

(Note: I think the Clancy et al. study on rape and harassment at field sites is so important that I wrote a post about it, rather than just linking to it here.)

Your amazing natural history video of the week: a great blue heron catching and eating a gopher. (ht Jessica Light)

proflikesubstance hosted a pre-tenure blog carnival, and has aggregated the posts here. It includes my post on navigating the tenure track.

I just ordered this book, Girls Who Looked Under Rocks: The Lives of Six Pioneering Naturalists, even though my daughter is younger than the intended audience. It looks great! (ht: @bug_gwen)

How being a cyclist is a lot like being a woman. (ht: Tracy Teal)

Joan Herbers had a melancholy post on life after lab closure. She gave up her lab researching ants to focus on work related to gender issues in STEM. This part was particularly painful:

As I transitioned to my new career, piles of old notebooks were relegated to the recycle bin. Some contained data collected to tackle questions that have been answered or are no longer interesting. Many more data were still useful and could fortify other lines of inquiry. Even so, nobody wanted my notebooks and I needed the space. So out went all those numbers, gels, charts, computer printouts, and methodology notes.

I’ve written before about having data go unpublished for lack of time, but still, the thought of throwing away raw data is so sad. (I’m not saying I don’t understand her decision, just that it’s sad to have to do it.)

Here’s a post on how to mitigate bias in a job search. This department drew up its long list from anonymized CVs, but then based the short list on full, non-anonymized applications. The post includes suggestions for what they would do differently in the future, and made me think of UConn EEB’s efforts to do a gender-blind search.

From Jeremy:

The latest issue of the Ecological Society of America Bulletin has a bunch of short pieces by prominent ecologists talking about old papers that influenced them. All the pieces are open access, I believe. And Caroline Tucker of The EEB and Flow writes about papers that influenced her here.

Also from the latest Bulletin: how to plan for safe field work, including discussion of lines of authority and power relationships.

And one more from the Bulletin: ecological papers that are rejected without review commonly end up getting published in similarly selective journals. Which doesn’t necessarily prove the original editors wrong. Maybe they were right and subsequent editors and reviewers were wrong, or (more likely) maybe people just differ in their opinions. But it does show that papers receiving editorial rejections often are not obviously worse (on any dimension) than papers that get sent out for review. Which is a problem, since many journals say that they only reject papers without review if those papers are obviously uncompetitive for publication. The claim is that rejection without review just saves everyone time, because those papers would get rejected anyway. Not so. In an ideal world, I think rejection without review would be unnecessary or very rare. But failing that, I personally would like to see selective journals that reject lots of papers without review state a different, and I think more accurate, rationale for doing so. Something like this: “The whole point of a selective journal is to provide filtering, on various grounds including but not limited to technical soundness. A lot of our filtering is done by our editors, without the aid of reviewers, because that’s easier and faster than lining up reviewers. No doubt other editors, or reviewers, would make different filtering decisions, but so what? People’s professional judgments differ, and judgment calls are inherent to any process for filtering scientific results. So if you don’t like the professional judgments of our editors, stop reading and submitting to our journal.” (Note: the linked data don’t necessarily represent a random sample from a well-defined population, but I think they’re good enough to prompt and inform a blog discussion.) (UPDATE: see the comments for some typically thoughtful and measured pushback from Ben Bolker. Ben correctly notes that editorial rejections for “lack of fit” that eventually get published in an equally selective journal with a different profile arguably represent editorial successes rather than failures.)

Meg posted on this one yesterday, but I wanted to comment as well. Clancy et al. report results of a web-based survey of field scientists (mostly archaeologists and anthropologists) about their experiences with sexual harassment and sexual assault in the field. It’s a follow-up to an earlier, smaller survey we’ve discussed before. Substantial proportions of respondents reported experiencing sexual harassment and even sexual assault, most commonly trainee women victimized by male supervisors. I don’t think it’s worth getting too caught up in the exact numbers, which could be off for various reasons the authors discuss. I think it’s clear there’s a serious problem that needs addressing, even if we aren’t sure exactly how accurate the numbers are. The biggest take-away for me was that respondents mostly had little awareness of codes of conduct or reporting mechanisms. This seems like something that ought to be at least partially addressable (e.g., see the link to the ESA Bulletin piece above). I used to be fairly casual about training my grad students and their undergrad assistants for field work, and about talking through issues that might arise. I guess I felt like I knew them well enough that I could trust them to conduct themselves appropriately. And apparently that attitude is common. But I’m trying to get my act together and do better. See the comment thread in that old post of ours for some good discussion of practical steps you can take as a PI (including from Katie Hinde, a co-author of Clancy et al.).

Say you currently have a long-term academic position. How do you decide whether to apply for another one? A guest poster at Crooked Timber discusses his/her own decisions on whether to apply for several positions, each with its own pluses and minuses. Also discusses whether to apply to jobs you think you wouldn’t take (or even jobs you’re sure you wouldn’t take), just to get the interview practice or some leverage with your current institution. (Personally, I wouldn’t take an interview someplace if I were sure I wouldn’t take the job, as I wouldn’t want to waste other people’s time, money, and interview slots.) Written from the British perspective, but most of the issues raised apply more broadly. Though in contrast to the post author, in my experience candidates who don’t precisely fit the job description are often quite competitive.

Economics blogger Noah Smith with a bird’s eye view of changing modeling approaches in macroeconomics. I always find it interesting to compare and contrast what seems to be going on in ecology with what’s going on in other disciplines that have some things in common with ecology. Touches on different uses of mathematical models–making quantitative predictions vs. sharpening your intuitions and checking your logic. Questions the value of verbal arguments, especially “classic” ones. That bit has some good lines, about how physicists do not write papers about the Newton-Aristotle debate, or worry about whether their equations capture what some Important Person “really” meant. That point is definitely relevant to ecology (e.g., this). Concludes with a discussion of the impact of blogs on the direction of the field, suggesting that it’s been modest but positive on balance, but with the downside of injecting too much acrimony and aggression into professional debates. (That last one is a hard one; the optimal level of civility is a tricky issue. And at least in macroeconomics, acrimony also arises from the high political stakes.)

Lots of discussion on the internet this week over how we should think about Richard Feynman, who was a brilliant scientist but also behaved very badly towards numerous women. See here, here, here, and here (and also this old post, which is belatedly relevant). Got me thinking about a lot of things, but my thoughts are still kind of inarticulate, plus they’re not specific to Feynman or to how men behave towards women, so I’ll save them for another time. (ht to the Southern Fried Science post Meg linked to above)

This is old, but it’s still interesting to contemplate: what widespread behaviors, attitudes, or policies will be regarded as immoral in the future? I was wondering about this in the context of science: whether there are current widespread scientific practices that will in the future be regarded as immoral, or at least professionally unethical.

Jeremy Yoder on a proposal in the popular press for peer review reform. He’s kind enough to give Owen Petchey and me a shout-out. He also gives a shout-out to Axios Review, with which I’m involved.

Speaking of peer review reforms proposed by Owen Petchey and me: the NSF is experimenting with exactly the same idea for its grant reviews, obliging those who submit grants to do reviews in return. NSF didn’t get the idea from us (others, before and after us, have thought of the same idea independently). But it’s gratifying to see that people whose job it is to make sure that peer review works are thinking along these lines.

Hoisted from the comments:

This is old, but I think I forgot to hoist it at the time: an interesting exchange between me and a commenter on what constitutes “self promotion” online, and whether or not it’s ever a good thing. This is a topic on which people have widely varying opinions. The exchange of comments starts here. Semi-related to the Feynman stuff, since Feynman seems to have been a quite deliberate self-promoter.

10 thoughts on “Friday links: revisiting old papers, life after lab closure, and more (UPDATED)”

  1. My main reason for rejecting without review is not because a paper is bad, but because I judge it either too technical or too narrow. In my experience reviewers are great at assessing whether a paper is technically correct and interesting *to them*, but often worse at assessing whether it’s going to be interesting to a broader audience. If I send a paper out for review and two technical reviewers think it’s fantastic, that puts me in an awkward position as an editor. If I think I’m likely to decide in the end that a paper is too narrow (even if the reviewers love it), then it seems fairer to reject without review. (The counterargument is that the reviewers might surprise me by explaining why the paper is really of broader interest — in which case I will need to work with the authors to make the paper explain its general interest better …)

    • “My main reason for rejecting without review is not because a paper is bad, but because I judge it either too technical or too narrow.”

      Sure. And I’m sure many editors at selective journals feel that way. But the data reported in the link, and my own anecdotal experience, suggest that we all disagree with one another a fair bit about which papers are too narrow. Which is a big reason why, as an editor, I found it valuable to get external reviews–they broadened my own perspective.

      • There’s also the question of how we assess, post hoc, that “it’s … interesting to a broader audience.” Numbers of downloads and citations do not immediately give that information. Nor does mass media coverage, as far as I can tell anecdotally: one of my papers that had a long article devoted to it in a national newspaper, including the online version, has been relatively poorly cited even within my field.

        I’m not at all convinced, speaking as an editor, that editors are any better qualified to judge “interest” than the average reviewer, a doubt the study above reinforces.

      • I’m having a hard time with this (possibly because I don’t want to relinquish my position, or self-conception, as an oracle). I’ve now read the Farji-Brener and Kitzberger paper (admittedly, I didn’t do this before posting my first comment!) and the Arnqvist 2003 TREE paper, and don’t find them thoroughly convincing/have some comments (= nitpicks = self-justifications …)

        F-B&K define eventual publication in an equivalent-quartile journal as an editorial mistake. Fair enough, but … with the exception of _Theor Pop Biol_ (Q2), all of my usual “this would be better in journal xxx” suggestions (MEE, Theoretical Ecology, Oikos, Ecography) are *also* first-quartile, so authors successfully following my advice would be counted as editorial failures — even though I would count them as appropriately diverted.

        Perhaps it helps that my editorial declines are also reviewed by an editor above me, so there are two people rather than one making the decision. Of the 81 mss. I’ve handled since 2008, 10 were declined without review, 42 were declined after review, and 29 were accepted.

        I haven’t gone through them (1) to see where those declined papers ended up, or (2) to categorize my reasons for editorial rejection (too much work …).

        I still think there’s an argument to be made for efficiency, on behalf of the authors as well: *if* I am actually as good as I think I am at identifying papers that I will reject on the grounds of ‘technicality’ even in the presence of positive reviews, then the authors are better off going quickly to another journal rather than wasting time with the American Naturalist editorial process. (Cue the conversation about cascading submissions/transferring reviews so that less time and effort is wasted …) I could test my “oracularity” by agreeing to send everything out for review and pre-registering the ones I think I will eventually reject anyway, but (1) I could easily cheat and self-fulfil my own prophecy, and (2) based on the sample above, it would take about 10 years to get a decent sample size …
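
        A minimal back-of-envelope sketch of that last arithmetic, in Python, assuming my handling rate stays roughly constant; the target of 15 pre-registered cases as a “decent” sample size is just an assumption for illustration:

            # Rough arithmetic behind the "about 10 years" estimate,
            # using the counts given above. Target sample is an assumption.
            mss_handled = 81        # manuscripts handled since 2008
            years = 6.5             # roughly 2008 to mid-2014
            editorial_rejects = 10  # declined without review

            reject_fraction = editorial_rejects / mss_handled  # ~12% declined without review
            rate_per_year = editorial_rejects / years          # ~1.5 candidate papers per year
            target_sample = 15                                 # assumed "decent" sample size
            years_needed = target_sample / rate_per_year       # ~10 years

            print(f"editorial reject rate: {reject_fraction:.0%}")
            print(f"~{rate_per_year:.1f} editorial-reject candidates per year")
            print(f"~{years_needed:.0f} years to accumulate {target_sample} pre-registered cases")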

      • Good point re: editorial “redirects” vs. editorial rejections, Ben. Although my own experience as an author has been that editorial rejects are always on the grounds that the paper just isn’t sufficiently interesting, rather than on grounds of lack of fit. As far as I can recall, no editor has ever accompanied one of my editorial rejects with a suggestion of an alternative, equally selective but better-fitting journal, though I could be forgetting some. I also note that your personal editorial reject rate is rather low (Ecology apparently rejects half of all submissions without review these days, and I bet EcoLetts is higher). But my experiences, like yours, constitute a pretty small sample size…

        I’ve updated the post to encourage readers to check in on the conversation.

      • Along the lines of Ben’s comment, I think that the criterion used by Farji-Brener & Kitzberger to identify editorial mistakes (no change in journal quartile) is far too coarse to measure what they are aiming for. Granted, you can debate how well measures such as impact factor or SJR (used in this case) capture journal quality or prestige. Even so, I would suspect that many authors who aimed for, e.g., Ecology, Ecology Letters, American Naturalist, or Journal of Ecology and ended up in Restoration Ecology, Mycologia, Weed Research, or Journal of Invertebrate Pathology felt a bit disappointed, even though all of those journals belong to Q1. Most ecologists would probably also agree that going from the first to the second group represents a step down in the journal ecosystem, at least in terms of prestige and generality (but maybe not research quality).

        I also suspect that editorial rejections are mostly used by the journals at the absolute “top” (are there data on this?), so to determine whether editors are successful at pointing authors to narrower second-tier journals, a much more restrictive criterion should be used. In my mind, checking whether papers submitted to the top-10 or even top-5 percentile of journals eventually end up below those groups seems more reasonable. Maybe such a study would show the same thing as Farji-Brener & Kitzberger, but I’m not convinced by the current analysis.

        The main problem, however, is that Farji-Brener & Kitzberger don’t distinguish between generality, broad interest, fit, and “technical research quality” (among other things). The paper talks about identifying “…the best works…” and being able to “…accurately assess the overall quality of a manuscript…”, without discussing what those labels represent. I also suspect (without having been part of the process) that the main purpose of editorial rejection is not to identify poor quality but poor fit (determined by subject matter and generality).

  2. The “self promotion” issue really is an interesting one, in part because it means different things to different people. To some, having any kind of public profile as an academic amounts to self promotion: a colleague tells a great story about his PhD supervisor taking up a junior post at an Oxbridge department in the early 1970s. On his first day he was taken to one side in the Senior Common Room by a couple of older colleagues who said “We see that you publish your research. That’s not the sort of thing we encourage in this department…”

    Clearly those days are long gone! But what actually constitutes “self promotion”? At the recent BES Macroecology meeting this question came up during an open discussion about good practice when applying for postdoc jobs. I made the point that there’s a balance to be struck: over-the-top self promotion (OTT-SP) is never a good thing, but where is the boundary? Ultimately, if an individual scientist is not going to promote their own work, no one else is going to do it for them. So, for example, I see nothing wrong with sending a PDF of a paper to another scientist who I think may be interested in the work, even if I’ve not previously had any contact with them, though I know others who would see that as OTT-SP.

    I’d be interested to know what others think about this: when does self promotion go too far?

  3. Pingback: Poll: What constitutes “self promotion” in science? | Dynamic Ecology
