Also this week: yes of course Jonathan Pruitt is still collecting retractions like they’re going out of style, women vs. natural philosophy, Coursera IPO, in praise of arbitrary journal formatting requirements, people-first labs vs. results-first labs, oceanography vs. math, Excel will never die, and more. Lots of good stuff this week!
I continue to wonder why more public health officials and news media (not just politicians) got Covid-19 public health advice so wrong a year ago. Because some people did get it right at the time. Ok, hindsight is 20/20 and all that. But still, I would be interested to read a deep dive on this.
Interesting news story on the data access and sharing policies of GISAID, the biggest and best database of SARS-CoV-2 sequences. If you think that databases need to have policies that respect the rights of data contributors, well, GISAID seems to be one actually-existing example. One that seems to have some significant downsides as well as upsides for data users. Such as, um, users getting phone calls from GISAID staff who refuse to identify themselves and instead lecture those users on the virtues of GISAID and the vices of public-access databases (?!). I would be very interested to hear comments from those who know more about this than me (because I know basically zip) as to how one might unbundle what everyone seems to agree are the good things about GISAID from the bad things. Or is that just impossible?
Athene Donald on women as natural philosophers in 17th century Britain.
Coursera is filing for an IPO. Their filings give some insight into the current state of the online learning business. Coursera seems to be betting that the future is short courses and stackable certificates, not MOOCs.
If you liked someone’s paper or talk, tell them about it. You’ll make their day.
The latest #pruittdata retraction just dropped. It’s Lichtenstein et al. 2016 Behav Ecol Sociobiol. It’s for anomalous strings of repeated sequences of observations, and formulaically generated observations, that Jonathan Pruitt couldn’t explain. Pruitt didn’t agree to the retraction, because of course he didn’t. Lead author James Lichtenstein and co-author Nick DiRienzo each spent something like 150-200 hours investigating the data, writing reports on it, and evaluating Pruitt’s purported explanations for the anomalies. Here’s Nick DiRienzo’s detailed PubPeer writeup of the anomalies. Kudos to all the co-authors for doing the right thing; it sucks that you had to do it.
“And the hits just keep on coming.” You said it, Tom. Grinsted et al. 2013 Proc B also was retracted this week. You should click the link and read the retraction notice, it’s exceptionally detailed. Kudos to co-authors Lena Grinsted, Virginia Settepani and Trine Bilde for bringing the issues with the data to the journal’s attention and requesting the retraction. Pruitt is now up to a dozen retractions. That doesn’t count the various corrections and Expressions of Concern, or any of the many papers still under investigation. Protip: try not to do things that result in your Wikipedia page looking like that last link.
Whoops, make that 13 retractions. It’s hard to keep up… Pruitt et al. 2011 Behav Ecol Sociobiol just bit the dust. Kudos to co-authors Nick DiRienzo, Simona Kralj-Fišer, J. Chadwick Johnson, and Andy Sih for fighting to get this paper retracted as soon as they discovered the data were full of anomalies. Want to know exactly what the anomalies were, how Jonathan Pruitt tried to explain them (totally unconvincingly), and what it’s like to go through a back-and-forth with him? Here’s co-author Nick DiRienzo with the blow-by-blow. Amazingly, Pruitt still has a ways to go before his name shows up here. (Note that this paragraph was updated a few hours after posting to include Andy Sih, whom I inadvertently left out originally. My bad.)
Political philosopher Olúfẹ́mi Táíwò on carbon removal technology. Interesting to read a political philosopher’s perspective on this. A sample quote, to encourage you to click through:
I feel like, as a philosopher, I can say with some authority that folks are really just overthinking this.
This recent tweet expresses Táíwò’s views more colorfully:
Heh. Writing in PLOS ONE, Tiokhin et al. use toy models to argue that there’s a point to arbitrary journal formatting requirements and long review times: they make submission costly, and so disincentivize authors from submitting low-quality work to high-quality journals. So if you, like Brian, are in favor of slow science, you should be in favor of arbitrary journal formatting requirements and long review times. 😉
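Tiokhin et al.’s actual models are more elaborate than this, but the core logic can be sketched in a few lines of Python (a hypothetical toy version of my own, not their model): each author has a paper of some quality, acceptance at a selective journal is more likely for better papers, and authors only submit if the expected payoff beats the submission cost. Raising the cost deters the authors with the weakest papers first.

```python
import random

random.seed(1)

def mean_submitted_quality(cost, reward=1.0, n_authors=10_000):
    """Mean quality of papers actually submitted to a selective journal.

    Toy assumptions (mine, not Tiokhin et al.'s): paper quality q ~ U(0,1);
    noisy review accepts a paper with probability q; an author submits only
    if the expected payoff (p_accept * reward) exceeds the submission cost
    (formatting hassle, long wait, etc.). So authors with q <= cost/reward
    don't bother submitting.
    """
    qualities = []
    for _ in range(n_authors):
        q = random.random()
        p_accept = q  # better papers are more likely to be accepted
        if p_accept * reward > cost:
            qualities.append(q)
    return sum(qualities) / len(qualities)

# With zero cost everyone submits (mean quality ~0.5); with a substantial
# cost, only authors with strong papers find submission worthwhile.
print(mean_submitted_quality(cost=0.0))  # roughly 0.5
print(mean_submitted_quality(cost=0.5))  # roughly 0.75
```

The design choice doing the work here is that expected payoff scales with quality, so a flat submission cost acts as a quality filter — which is the paper’s point about formatting requirements and slow turnaround.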
Oceanography is becoming much less mathematical than it used to be. Ken Hughes on why that is, and why it’s (mostly) a good thing. Or at least an inevitable thing.
Kareem Carr on people-first vs. results-first labs. Good thread. Argues that we need both kinds of labs, not that one kind is always better than the other. Now I’m thinking about whether/how one could run a lab with a “mixed strategy”.
A compilation of the earliest known uses of various mathematical symbols. The = sign only goes back to 1557?! I did not know that!
Is the financial crisis at Laurentian University a “canary in a coal mine” for other Ontario universities, or a symptom of provincial underfunding? Alex Usher crunches the numbers, finding that the answers are “no” and “no”.
Tomorrow Excel never dies.
And finally, a bar graph of weekly mobility data from New York City resembles New York City. And a scatterplot of inflation rate vs. unemployment rate in Japan resembles Japan:
In the comments, please share other examples of data that resembles the study units from which they were collected. Surely somebody out there has a multi-modal histogram that resembles the hilly study site from which the data were collected, or a scatterplot of snake-related data in the shape of a snake, or etc. 🙂 No, the datasaurus scatterplot doesn’t count.