Friday links: petri dish art, no one reads your papers, the value of pseudonyms, and more

From Meg:

Last week, Nature published a commentary that they have now apologized for. There were many good posts in response, but I want to focus instead on a series of posts that emerged after an editor at Nature tweeted the identity of a very prominent science blogger, Dr. Isis. This post by Michael Eisen explains what happened a bit more and, more importantly, explains the evolution of his views. It provides a good explanation of the importance of pseudonyms in online interactions. It’s powerful in part because he has previously been frustrated by Dr. Isis, but has come to understand why she and others write under pseudonyms. In a similar vein (and in response to the same incident), here is a post from DrugMonkey that explains that the protection afforded by pseudonyms depends on the community. And, finally, thanks to Terry McGlynn, I saw this excellent post by tressiemc that points out that “The penalty for raising hell is not the same for everyone.”

In the past week, I read two powerful blog posts on poverty, privilege, and academia, one from Sarcozona and one from katiesci. It’s common for academics to joke about not having money, and, especially, to joke about living in poverty in grad school. But, of course, for people who are truly impoverished, it’s not a joking matter. Sarcozona’s post includes tips for PIs.

From Jeremy:

I’m late to this, but that’s ok. A little while back, ecologist Simon Goring wrote a post reflecting on how no one reads his blog, and why he keeps blogging anyway (ironically, the post went viral). In response, The Serial Mentor noted that no one reads your papers either. So why do you keep publishing papers, besides “to get/keep my job, and to get grants”? It’s an interesting post that makes a number of good points I haven’t seen made elsewhere. And there’s lots of other good stuff at The Serial Mentor; I suggest giving it a browse. Like me, the author got into science just as the web was taking off, and seems to have a mix of old-school and new-school views on issues like scientific publishing and peer review (a different mix than mine, though).

Speaking of stuff people don’t do…Athene Donald complains that lots of people read her blog, but no one comments. Instead, the conversation about her posts mostly takes place on Twitter. And for her, this is a change–her readership has grown over time while the comments have dried up. I have to say I’m surprised by her experience, since ours has been different. Tweets, readership, and comments on our posts have all grown in rough parallel. They also fluctuate in rough parallel: widely-retweeted posts tend to get more comments, though the correlation is very loose. So my sense is that the Twitter conversation about our posts isn’t competitive with our comments. At least, I hope that’s true, and that it remains true! The comment threads are one of the best parts of this blog. I’d be really unhappy if they went away, even if in return we got scads more retweets and pageviews. (ht Tim Poisot, ironically via a comment)

The Molecular Ecologist has a nice interview with evolutionary geneticist Pleuni Pennings. She talks about how you don’t have to be an outdoorsy type to fall in love with biology. About the evolution videos she produces. About how to keep from becoming too narrow (her advice there echoes that from philosopher Dan Dennett in this wonderful little essay). And more!

Acclimatrix of Tenure, She Wrote with a fine and very personal post on being an academic from a non-academic, working class family. Likely to resonate with anyone from a similar background (and I found it well worth reading even though my own background is quite different). A sample:

Class mobility is not just a process of struggling to fit in amongst your new peers, but also feeling like you’re betraying your roots. It’s really, really difficult to successfully walk on both sides of an invisible line.

Here’s one radical antidote to researcher degrees of freedom: a “deterministic statistical machine”. That is, easy-to-use statistical software that gives the user few options. You just input your data and specify your question, and depending on your question the software performs an appropriate pre-specified analysis, along with assumption checks. The software would then automatically upload the results to figshare. And if you try to fiddle with the data yourself, or try multiple different analyses, or otherwise intentionally or unintentionally p-hack, the software would automatically generate warnings and a “paper trail” that get uploaded as well. The suggestion is aimed at users who don’t know much about statistics, and I bet it’s feasible. It’s interesting to think about the broader principles here. As the author notes in another post, any statistical method that is used “at scale” (i.e. by hundreds of thousands or millions of users, as simple classical frequentist tests are) is going to be widely abused. That suggests a need for “scalable” ways to prevent abuse, such as software that enforces the use of what’s sometimes called “cookbook statistics”. Cookbook statistics is widely bemoaned, and not without reason. But when it comes to actual cooking, there’s a range of products available for every level of expertise, from “I barely know how to work a microwave” to “I’m Gordon F***ing Ramsay”. Presumably, statistical software also needs the equivalent of prepackaged salads, instant noodles, and even pizza delivery. 🙂 (Which raises the question: what’s the statistical equivalent of eating out at Per Se? Hiring Bradley Efron as your statistical consultant?) 🙂
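To make the idea concrete, here is a minimal, purely hypothetical sketch of what such a machine might look like. Everything here (the `analyze` function, the question type name, the in-memory paper trail) is invented for illustration, not taken from the actual proposal: one question type maps to one fixed, pre-specified analysis with no user-tunable options, and every run is logged so that repeated “re-analysis” of the same data triggers a warning.

```python
# Hypothetical sketch of a "deterministic statistical machine":
# the user supplies data and a question type; the machine picks ONE
# pre-specified analysis, runs it, and records a paper trail of every
# call so repeated analysis of the same data (a p-hacking smell) is visible.
import hashlib
import json
from math import sqrt
from statistics import mean, stdev

PAPER_TRAIL = []  # in the real proposal this would be uploaded to figshare


def _log(question, data_hash):
    """Record the call and return how many times this data has been analyzed."""
    PAPER_TRAIL.append({"question": question, "data": data_hash})
    return sum(1 for entry in PAPER_TRAIL if entry["data"] == data_hash)


def analyze(question, group_a, group_b):
    """One question type -> one fixed analysis; no options to fiddle with."""
    # Fingerprint the data so re-runs on the same data are detectable.
    data_hash = hashlib.sha256(
        json.dumps([sorted(group_a), sorted(group_b)]).encode()
    ).hexdigest()
    runs = _log(question, data_hash)

    if question != "two_group_difference":
        raise ValueError("unsupported question type")

    # Pre-specified analysis: a Welch-style t statistic, with no alternatives.
    n_a, n_b = len(group_a), len(group_b)
    se = sqrt(stdev(group_a) ** 2 / n_a + stdev(group_b) ** 2 / n_b)
    return {
        "t": (mean(group_a) - mean(group_b)) / se,
        "n": (n_a, n_b),
        "warning": "repeated analysis of same data" if runs > 1 else None,
    }
```

The point of the sketch is the shape, not the statistics: the user never chooses a test, and the paper trail accumulates regardless of whether they like the answer, which is exactly the property that makes the machine “deterministic”.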

And finally, using bacteria with different pigments to make petri dish art! And by “art” I mean “a quite nice copy of a Van Gogh”! (ht @RELenski)

One thought on “Friday links: petri dish art, no one reads your papers, the value of pseudonyms, and more”

  1. First of all, great post and summary of the happenings of this week. Nice to look back and revisit my own opinion about what happened.

    However, I’m not too fond of the whole “prepackaged salad statistics” thing. If someone doesn’t know enough statistics to know what test he/she should run, how can he/she possibly interpret the results with a critical eye? I think that automation should not reach a point where the user doesn’t need to know what is happening (that’s when things stop being mathematics and become magic). Instead, wouldn’t it be better to offer scientists better training in statistics? Maybe something similar to what the folks at Software Carpentry are doing for programming.

    Checking for p-hacking is easy enough if journals and reviewers demand that the data and exact software/steps/parameters used be published as well. After all, isn’t that what the Methods section is for?
