Also this week: the best kind of research proposal, peak Plos One, econometricians vs. R^2, Jeremy vs. merpeople, and more.
A claim that US tenured and tenure-track research university profs teach far less than most people think, and that they won’t be able to get away with having adjuncts and TAs do so much teaching for much longer. Notes that “research universities” in the contemporary sense are a pretty recent and quite possibly temporary invention. Provocative speculation, though light on data for my taste. I don’t really buy the argument, but I’m not sure what to think; I’d welcome comments.
Speaking of research universities possibly being doomed: what’s really happening with tenure at the University of Wisconsin. Short FAQ from a Wisconsin prof.
Imagine that a variable Y truly is a function of predictor variables A, B, and C. Regressing Y on A alone ordinarily will give you a biased estimate of the Y-A relationship. You might think that adding B as a second predictor would reduce the bias in the estimated effect of A. You’d be wrong. In fact, adding B into your model can increase both the bias and the variance of the estimated effect of A, unless the addition of B is “balanced”. One more reason why adding terms to your statistical model might well constitute statistical machismo.
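Here’s a minimal numpy simulation (my own toy numbers, not from the linked paper) of one data-generating process where this bites: B drives most of the variation in A, and an unmeasured confounder C affects both A and Y. Conditioning on B strips away the “good” variation in A, so the confounding by C looms larger, and both the bias and the sampling variance of A’s estimated effect go up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 2000
tau = 1.0  # true effect of A on Y

short, long_ = [], []
for _ in range(reps):
    c = rng.normal(size=n)                    # unmeasured confounder of A and Y
    b = rng.normal(size=n)                    # measured predictor; drives A strongly
    a = 2.0 * b + 1.0 * c + rng.normal(scale=0.5, size=n)
    y = tau * a + 0.2 * b + 1.0 * c + rng.normal(size=n)
    X1 = np.column_stack([np.ones(n), a])     # short model: Y ~ A
    X2 = np.column_stack([np.ones(n), a, b])  # long model: Y ~ A + B
    short.append(np.linalg.lstsq(X1, y, rcond=None)[0][1])
    long_.append(np.linalg.lstsq(X2, y, rcond=None)[0][1])

for name, est in (("Y ~ A    ", short), ("Y ~ A + B", long_)):
    e = np.asarray(est)
    print(f"{name}: mean bias {e.mean() - tau:+.3f}, sd of estimates {e.std():.3f}")
```

With these made-up parameter values, adding B roughly triples the bias in A’s coefficient (from about +0.27 to about +0.8) and increases its sampling variance too, even though the long model is “closer” to the true data-generating process.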
Wait, econometricians don’t care at all about R^2 in their regression models? All they care about is identification of putative causal effects? Always interesting–and in this case a bit puzzling–to see how the other half does regression.
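To make the identification-vs-fit distinction concrete, here’s a tiny sketch (my own toy example, not from the linked discussion): if a predictor is as good as randomly assigned, its coefficient is recovered essentially without bias even when it explains almost none of the variance in the outcome.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(size=n)                        # "randomly assigned", so no confounding
y = 0.2 * x + rng.normal(scale=3.0, size=n)   # tiny signal, huge noise

slope, intercept = np.polyfit(x, y, 1)        # OLS fit of y on x
r2 = np.corrcoef(x, y)[0, 1] ** 2             # R^2 of a simple regression
print(f"slope = {slope:.3f} (true 0.2), R^2 = {r2:.4f}")  # R^2 comes out ~0.004
```

The slope is recovered almost exactly even though the model “explains” well under 1% of the variance in y, which is (I take it) why an econometrician would shrug at the R^2.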
Statistician Jeff Leek argues that checklists of good data analytical practice are unhelpful if the people following those checklists are poorly trained. I have mixed feelings. On the one hand, I agree 100% that you can’t routinize something that requires a lot of thoughtful decision-making, and that trying to do so might be dangerous because it encourages people who don’t know what they’re doing to think they do. On the other hand, I think giving trainees checklists can be a useful training technique. And it’s laughably unrealistic to argue that every paper should have a PhD-level statistician involved (and if that’s not what he’s arguing for, then I’m not sure of his point, since most scientists do have some statistical training). Indeed, it’s so unrealistic that I don’t think it’s useful as an aspiration or even as a hypothetical “ideal”. And his analogy to heart surgery is pretty poor. Contrary to Leek’s implication, the vast majority of bad data analyses and bad data analysts cause waaaaay less harm than a bad heart surgery or a bad heart surgeon (because a paper can’t cause harm if no one reads it). Related: this old post on checklists as a tool for avoiding non-statistical mistakes in ecological and evolutionary research.
How to network at big conferences. From a sociologist, but much of it applies to big ecology conferences too. Sample line:
Attending an academic conference is like being a teenager again. This is why they can be so awful. You hang around trying to attach yourself to a group—preferably the cool kids, but in the end any group will do—and then these groups hang around waiting for something to happen.
Should you get a blood transfusion? Good fodder for a stats class.
Have we reached peak megajournal? Or at least peak Plos One?
Self-promotion alert: I did a radio interview with my university’s student radio station on whether we could use artificial selection to evolve merpeople. The show is basically the radio version of xkcd’s “What if?”. They wisely included only a couple of snippets from me and mostly went with less rambling input from my Calgary colleague Jana Vamosi and Thomas Cullen from Toronto. It’s only 12 minutes; you can listen here.
Of course somebody invented a knife that toasts bread as you slice it. 🙂
And finally, I know I’m late with this but here it is anyway because it’s lovely: the best kind of scientific proposal. 🙂