Friday links: the research conveyor belt, in (modest) praise of impact factor, and more

From Meg:

Another excellent post from sciwo over at Tenure, She Wrote about the conveyor belt of research. This relates to issues I’ve discussed before: how to prioritize manuscripts, and data going unpublished for lack of time. I haven’t written about making sure I have enough projects at the start of the conveyor belt, or about not forgetting projects that are midway along (uh, everyone forgets about some projects, right?), but those are things I worry about, too. My solution so far is to keep a list of projects going (a shorter list on my whiteboard, a longer one in a Word file), so that I can make sure I rotate through giving them attention. And, of course, some end up getting tossed off the conveyor belt into the compost (to stick with sciwo’s analogy) when they just don’t seem promising enough.

The 9 kinds of physics seminar, which, as Amy Parachnowitsch pointed out on Twitter, applies broadly to all kinds of seminars (ht: Amy Parachnowitsch, via Morgan Ernest).

This person was so serious about figuring out how to brew coffee that he ran blinded experiments on his houseguests, complete with full-on analyses. (ht: Jarrett Byrnes)

PhD Comics on TV Science vs. Real Science, which includes pointing out that MythBusters uses no replication or controls. I’ve posted in the past about how to use MythBusters to teach experimental design, so I certainly agree!

From Jeremy:

The science and engineering academic job market since 1982, summarized in one simple plot. (HT Jeremy Yoder, via Twitter)

The BBC (!) on the Price equation (!). Not sure what the occasion is, or even if there is one. And I think you’d be hard-pressed from the article to figure out exactly why George Price is considered such a genius or why his equation is so deep and important. But I admit those are very difficult things to convey in a short popular article. And I still got a kick out of seeing one of my intellectual heroes getting a shout-out on the BBC homepage, in an article with quotes from people whose work I know and admire. Can’t imagine how they neglected to mention the horror movie based on the Price equation, though (just kidding, probably best not to mention this…)
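For the curious, here is the equation itself in one standard textbook form (my gloss, not something from the BBC piece; notation varies across treatments):

\[
\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}\!\left(w_i\,\Delta z_i\right)
\]

where \(z_i\) is the trait value of individual (or group) \(i\), \(w_i\) is its fitness, \(\bar{w}\) is mean fitness, and \(\Delta\bar{z}\) is the change in the mean trait value from one generation to the next. The covariance term captures selection; the expectation term captures transmission bias. Part of why the equation is considered so deep is that it holds by definition, for any trait and any entities, with no biological assumptions at all.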

A new working paper (=non-reviewed preprint) finds that, when a paper is retracted, those co-authors who aren’t eminent suffer a massive decline in the rate at which their previous work is cited, while the eminent co-authors suffer little if any decline. This may be an example of the “Matthew Effect”, in which eminent members of a team tend to be given most of the credit for the team’s successes, and little of the blame for the team’s failures. (HT Retraction Watch, which has a brief summary and discussion)

Joan Strassmann is skeptical of the value of grading rubrics.

The EEB and Flow on the effects of the US government shutdown on scientific research. Makes me glad I’m an expat. Although in the comments over there Caroline Tucker asks what’s worse: indiscriminate but temporary disruption of massive amounts of research, or longer-term, targeted degradation of research capacity (the latter is what’s occurring in Canada)?

Ben Haller highlights the importance of getting to know your prospective graduate supervisor before agreeing to join their lab, ideally by visiting the lab in person. The visit should include some time talking to current graduate students without the supervisor present. I’ll note in passing that I insist on this: I don’t ordinarily accept any prospective student who hasn’t visited my lab, and I make sure prospective students get plenty of time to talk to my grad students without me around. I also emphasize to my current grad students that I want them to be completely open and honest with prospective students. Just as students don’t want to end up in a lab that’s not a good fit for them, supervisors don’t want to supervise students who aren’t a good fit; believe me, if you and I wouldn’t get along or are incompatible in some other way, I do not want you joining my lab! And I pay for the visit, of course. It’s a small investment that’s well worth it for the sake of ensuring a good match between me and my students. Of course, as Ben’s post illustrates, not everyone operates the way I do (though many people do), and so some students end up being supervised by people with whom they’re incompatible. Ben asks whether universities ought to respond by formally evaluating faculty performance as supervisors (e.g., via exit interviews of graduate students).

Adam Eyre-Walker (whom I know a bit from my grad student days) and Nina Stoletzki have a new paper in PLoS Biology comparing agreement among three ways of assessing papers: post-publication peer review, number of citations, and impact factor of the journal in which the paper was published. They find that these metrics are moderately positively correlated, but that this is apparently due in part to post-publication peer reviewers taking account of the journal in which the paper was published. Publication venue also affects the number of citations a paper receives (as does the number of citations it’s received in the past; papers often get cited because they’ve previously been cited a lot). They argue that all three metrics are very noisy and error-prone measures of “merit”, but that the impact factor of the journal publishing the paper is the least bad. That’s because the decision on where a paper will be published is made before publication, and so at least has the advantage of not being biased by publication venue! Their results have implications for the conduct of the REF, the periodic assessment exercise that the British government uses to allocate funds to universities.

Time management tips for academics, here and here. And while it might seem ironic for a blogger to offer time management tips (since many people procrastinate by reading blogs), as Tyler Cowen notes, “Blogging builds up good work habits; the deadline is always ‘now’.”

Ecologist and entomologist Terry McGlynn of Small Pond Science is Scientist of the Month at Elementary School Science. The link goes to an interview with him, aimed at 6-year-olds.

From the archives:

Is macroecology like astronomy? (on what it takes to be a successful scientific field when you can’t do experiments, or more accurately can’t do certain kinds of experiments. Probably one of my best posts, which is funny because when I wrote it I thought it was a boring rehash of previous posts. And it has a great comment thread involving Florian Hartig, Fred Barraquand, Brian, other folks, and yours truly)

Advice: the ‘snake fight’ portion of your thesis defense (“Q: Why do I have to do this? A: Snake fighting is one of the great traditions of higher education. It may seem somewhat antiquated and silly, like the robes we wear at graduation, but fighting a snake is an important part of the history and culture of every reputable university…”) 🙂

9 thoughts on “Friday links: the research conveyor belt, in (modest) praise of impact factor, and more”

    • By all means, folks should read Eisen’s take. I have, and I think his take is cogent. But I think the original paper is cogent, too, and deserves to be taken seriously rather than dismissed out of hand. And I do worry that many people will dismiss it out of hand, because it pushes back against what’s now established conventional wisdom, and because authority figures like Jonathan Eisen disagree with it. I know you’re not suggesting that people *should* just read Eisen’s take and assume it to be authoritative because of who it comes from, and I know that you personally did not do so. But that’s what some people will do, probably without even consciously realizing it. Venue of publication isn’t the only thing that shapes people’s evaluation of a piece of writing when it probably shouldn’t, or that does so without them realizing it. Others include the prominence of the author, and whether the writing agrees or disagrees with the conventional wisdom or the reader’s pre-existing opinion. As I’ve found from personal experience with my zombie ideas stuff, and as I’ve discussed in old posts in relation to the work of others, papers that push back against the conventional wisdom, and that are publicly opposed by prominent people, face a real uphill battle to be taken seriously, no matter what evidence and arguments they present.

      On a tangentially related note: for more on why we should take pre-publication assessments of papers especially seriously, see this old post:

      In praise of pre-publication peer review (because post-publication review is hopeless) (UPDATED)

      Which (to go off on even more of a tangent) of course isn’t to say that pre-publication review is perfect or can’t be improved; see these old posts:

      How random are referee decisions? (UPDATEDx2)

      Bonus Friday link: the reproducibility of peer review

  1. Several people have pointed out that I sank their productivity this week thanks to my post of videos for teaching ecology. Maybe I’ll point them to your time management tips to offset that. 😉

  2. On the retraction paper: I wonder if there are similar effects for controversial papers. For instance, in the case of the Nowak, Tarnita, & Wilson evolution-of-eusociality paper, was Tarnita negatively affected by the controversy surrounding that paper (even though, given her other work, it seems like her contribution was mostly in the SI, not in the actually controversial body of the paper)? In general, does controversy hurt or help (or neither, or something more complicated) junior scientists?

    • That’s a really good question, to which I don’t know the answer. Very bloggable; will have to try to do a post on this. I suspect the answer depends on all sorts of factors: the nature of the controversy, the eventual outcome of the controversy (e.g., which side “won”), the prominence and experience of the other people involved, and much else besides.

      Am trying to think of other examples of scientists who were involved in serious controversies at various career stages, and what happened to them…

      From what I’ve read, the researcher at the center of the recent “arsenic life” controversy has been harmed by it.

      I think Don Strong was pretty early in his career when he was involved in the “null model wars”. He certainly went on to become very prominent, successful, and respected in ecology–which of course doesn’t exactly answer the question of whether that early controversy hurt or helped his career. Don also has been involved in other controversies, such as the mid-90s debate about trophic cascades. And there are other prominent ecologists and evolutionary biologists who’ve been involved in multiple controversies over the years–Dan Simberloff, E. O. Wilson, Dick Lewontin, probably others I’m forgetting or don’t know about. Their examples raise the question of whether there’s a distinctive “career niche” for “controversial scientist”. Not that these people set out to court controversy as a way of furthering their careers! Just that they’re both known and respected (and probably also disliked by some people!) in part because of their involvement in high-profile debates.

      I’m mid-career and I’ve certainly said some controversial things on this blog, though I don’t know that any of them have really led to a major controversy in the field as a whole. I do think that blogging in general, and my willingness to speak my mind even if what I say happens to be controversial, has on balance helped my career in small ways.

  3. Pingback: Does scientific controversy help or hurt scientific careers? | Dynamic Ecology
