Friday links: p-hacking=posterior hacking, #montypythonscience, forgotten co-authors (oops!), and more

Also featured this week: the internet vs. English, English vs. English, and tea vs. lakes…

From Meg:

An interesting post on stammering (a.k.a. stuttering) and the academy. According to the post, 1% of adults have a stammer, which means this is something that is quite likely to come up during teaching, mentoring, and the other day-to-day interactions we have. There’s lots of interesting and useful information in the post, including on how to be considerate of the needs of people who stammer.

Two of my favorite topics (lakes and tea) in an xkcd “What if”! (Specifically: “What if we were to dump all the tea in the world into the Great Lakes? How strong, compared to a regular cup of tea, would the lake tea be?”)
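For the curious, the lake-tea question is easy to sketch yourself. The numbers below are rough assumptions of mine (annual world tea production, Great Lakes volume, tea per cup), not figures taken from the xkcd post:

```python
# Back-of-envelope check of the "tea in the Great Lakes" question.
# All inputs are rough assumptions, not figures from the xkcd post:
tea_kg = 5e9                    # assumed world tea production per year, ~5 billion kg
lakes_m3 = 22_700 * 1e9         # assumed Great Lakes volume, ~22,700 km^3 in m^3
cup_kg_per_l = 0.002 / 0.25     # assumed regular cup: ~2 g of tea per 250 mL

# Concentration if all that tea were dumped into the lakes (kg per liter).
lake_tea_kg_per_l = tea_kg / (lakes_m3 * 1000)

# How many times weaker lake tea is than a regular cup.
dilution = cup_kg_per_l / lake_tea_kg_per_l
print(f"Lake tea is roughly {dilution:,.0f}x weaker than a regular cup")
```

With these assumed inputs, lake tea comes out tens of thousands of times weaker than a regular cup.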

Another fun one: the #montypythonscience twitter hashtag emerged this week, and produced a couple of particularly good entries.

From Brian:

I’ve had co-authors I didn’t know (usually when they provided data negotiated through another co-author), but I’ve never been on a paper that left people off the author list. That has to be embarrassing.

From Jeremy:

Think that “p-hacking” is only a problem for people using frequentist statistics, not for Bayesians? Think again. If you’re p-hacking, you’re also posterior hacking. (ht Andrew Gelman)
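If you haven’t seen p-hacking demonstrated numerically, here’s a toy simulation (my own sketch, not from the linked paper) of one common form of it, “optional stopping”: peeking at the data repeatedly and stopping as soon as the test comes up significant, even though there’s no real effect.

```python
import math
import random

random.seed(42)

def z_significant(xs):
    """Two-sided z-test of mean == 0 with known sigma = 1 (reject if |z| > 1.96)."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return abs(z) > 1.96

n_sims = 2000
peeks = [10, 20, 30, 40, 50]   # sample sizes at which we "peek" at the data

fixed_hits = 0     # test once, at the planned n = 50
peeking_hits = 0   # test at every peek, stop at the first "significant" result
for _ in range(n_sims):
    # Data simulated under the null: there is no effect to find.
    xs = [random.gauss(0, 1) for _ in range(max(peeks))]
    if z_significant(xs):
        fixed_hits += 1
    if any(z_significant(xs[:n]) for n in peeks):
        peeking_hits += 1

print(f"fixed-n false positive rate:  {fixed_hits / n_sims:.3f}")
print(f"peeking false positive rate:  {peeking_hits / n_sims:.3f}")
```

The fixed-sample test holds its nominal ~5% false-positive rate; peeking at the same data inflates it well above that, which is the basic mechanism behind both p-hacking and (per the linked paper) posterior hacking.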

Speaking of p-hacking…here’s an online guide to common statistical mistakes committed by scientists. Written in a non-technical way, and appropriate for undergraduates in introductory biostats courses. I’ve only read bits of it, but what I’ve read so far is quite good. Definitely thinking of referring my own biostats course students to it.

Zombie ideas alert! The Edge asked 174 famous scientists and intellectuals (including a disproportionate number of psychologists) “What scientific idea is ready for retirement?” Check out their answers here. On a quick skim, answers from ecologists and evolutionary biologists include “privileging science over scientists” (that from Kate Clancy, in one of the best entries I saw) and “inclusive fitness” (from I bet you can guess).

Worth a look, but you’ll need to skim to find the good nuggets. There’s lots of rather obvious stuff being pumped up as a bigger deal than it is; declarations that straw-man versions of the Modern Synthesis and frequentist statistics are dead never go out of style, apparently. And there are lots of intentionally provocative or overbroad headlines followed by rather mundane answers: “evolution is true”, “statistical independence”, “certainty”, and “science” are all listed among the ideas we should retire! Many of the “scientific” ideas suggested for retirement are actually philosophical positions. If you’re looking for focused critiques in the spirit of my own attempts at zombie-slaying, you’re going to be disappointed.

And a few entries are seriously confused or just plain wrong. At least two psychologists working on altruism apparently can’t distinguish proximate from ultimate explanations. And in the very last entry (saving the worst for last?), science author Kevin Kelly makes a really embarrassing, undergraduate-level mistake in suggesting we retire the notion of “random” mutations. Unfortunately, he doesn’t know what “random” means in this context. Kevin, in the unlikely event you read this: “random mutations” doesn’t mean that mutations occur uniformly, or without statistical patterns; it means that mutations don’t occur because of their fitness effects. (ht Chris Klausmeier)

Here’s the newish (a few months old) blog Data Colada. It’s by three psychologists who’ve been leading the debate within psychology on topics like reproducibility, researcher degrees of freedom, study preregistration, etc. I’ve only had time for a skim, but Data Colada looks really good. Like Dynamic Ecology, they’re out to use blogging as a way to start and contribute to serious scientific discussions, with an emphasis on analysis as opposed to mere opinion. And much of what they have to say isn’t just of interest to psychologists. For instance, here’s their interesting suggestion for how reviewers can oblige authors to disclose “researcher degrees of freedom”. Here’s a nice post on why authors should want to preregister their studies and hypotheses (because reviewers and readers will be really impressed when the preregistered predictions are upheld). Lots of other good stuff too, and all the posts are short and clearly written.

The internet vs. “Standard Written English”. Argues that the internet is winning, and that that’s a good thing. I suspect that the divide the author identifies is part of what underpins different scientists’ reactions to blogging, or to what they imagine blogging to be. Here’s a quick litmus test to find out which side of the divide you fall on: Which is better, my blog posts on zombie ideas, or my TREE paper based on those posts? (ht counterparties.com)

You’ve probably heard the old joke that the US and the UK are two nations separated by a common language. Speaking as an American who did a postdoc in the UK, there’s a lot of truth to that. My PhD supervisor actually warned me about this before I started my postdoc. So for any American readers planning to talk to British academics, here’s a handy dictionary for translating phrases commonly used in academic conversations into their American equivalents. It’s very funny, but also very true. I myself have an amusing anecdote about the word “interesting” that I may share at some point. 🙂 (ht Retraction Watch)

Just for fun: a bestiary of academic “trolls”. Written for economics, but most species are also found in other fields. 🙂

And finally, the population distribution of the continental US, in units of Canadas. Fun little map. (ht counterparties.com)

13 thoughts on “Friday links: p-hacking=posterior hacking, #montypythonscience, forgotten co-authors (oops!), and more”

    • Are you trolling me? 🙂

      In seriousness, I agree that the statistical practices the author describes should be retired. But I dunno, that seems to me to be too obvious and broadly shared a claim to be interesting. Which maybe just says something about my own expectations for this sort of exercise. I was hoping to see people argue for the retirement of ideas that are widely believed and that lots of people *don’t* want to see retired. Like Nowak vs. inclusive fitness, for instance. I don’t agree with him, but I give him credit for aiming at the sort of target I was hoping people would aim at.

      • As a zombie slayer, I thought you’d be in favor of an attack on ideas that “everybody” agrees are wrong and yet somehow almost everybody still follows?

      • Well, I’m not sure I agree that “everybody” uses null hypothesis tests and p-values in the mindless, ritualized way the author describes. The stuff I read mostly doesn’t seem to fall into that trap, but of course I read a small and highly non-random sample of all the science in the world!

        I mean, yes, it is true that most everybody in ecology uses null hypothesis tests (including in the papers I read). But there is a time and place for those, and I wouldn’t venture to guess what fraction of all published null hypothesis tests in ecology, or science in general, are mindless or pointless.

        Having said that, you do raise an interesting and broad issue–about how practices (scientific or otherwise) can sometimes persist widely even when everybody who participates in them thinks they’re a bad idea. Various people have argued that various aspects of scientific publishing and evaluation have this character–for instance the use of impact factors. But I confess I’m suspicious that, in many of those cases, “agreement” that the practice in question is bad isn’t actually as widespread as its detractors believe or wish. So it’s possible that the practice persists not despite the fact that “everybody” hates it, but because only *some* people hate it.

  1. Simply Statistics is not impressed with the Edge folks who argue for giving up on various statistical and experimental design concepts:

    http://simplystatistics.org/2014/01/16/edge-org-asks-famous-scientists-what-scientific-concept-to-throw-out-they-say-statistics/

    And while I agree with this generally negative take, I did have a sobering thought: what fraction of ecologists see my own zombie ideas pieces as the equivalent of Edge authors trying too hard to be provocative/contrarian/deep and suggesting that we discard “statistical independence” or “p values” or “science” or whatever?
