Friday links: (re)claiming scientific identity, RIP Joanna Cole, and more

Also this week: questioning whether tweets lead to citations, COVID-19 vs. scientific societies, and more.

From Jeremy:

The Magic School Bus author Joanna Cole passed away earlier this week. She was 75. Countless kids, including my own, learned a ton of science from those books and the associated cartoon series. Schoolteacher Ms. Frizzle is one of the great creations of children’s literature, and I’m sure will long outlive her creator. RIP Joanna Cole, you will be much missed.

Ashlea Morgan on (re)claiming scientific identity. Eloquent piece that will resonate with many.

Ed Yong on what happens if there’s a second pandemic in the US before the COVID-19 pandemic is over.

How scientific societies are coping financially with conferences getting canceled or moved online. The SSE is featured.

The posthumous reckoning with the legacy of hugely influential psychologist Hans Eysenck, who’s been credibly accused of serial fraud.

96 bargaining unit faculty positions at the University of Akron have been permanently eliminated. This is on top of 77 recent voluntary retirements or resignations among full-time faculty, if I’ve done my math right. Dozens of staff and contractor positions have also been permanently eliminated. My uni’s undergoing major cuts too, though so far faculty positions haven’t been eliminated except via voluntary retirements or resignations. The same very sad story seems likely to be repeated in many places in the months and years to come.

Recently, I linked to a randomized experiment purporting to show that new papers that are widely tweeted subsequently get cited more often than papers that aren’t. But Phil Davis took a close look at the paper (which I didn’t do), and has a bunch of questions. He couldn’t find tweets for many of the papers that purportedly were tweeted, the authors didn’t provide the data despite repeated requests (the journal has no data sharing policy), and some of the statistical analyses are at best poorly explained and at worst wrong. Phil subsequently put in additional work to reconstruct and reanalyze the dataset himself. Turns out that the papers used in the study don’t match what’s claimed in the methods (the methods claimed that reviews and editorials were excluded, but actually they comprised 16% of the papers!). And there’s actually no significant difference between tweeted and non-tweeted papers in terms of how often they’ve been cited in the 2+ years since publication, not even close. Oh, and one of the papers included in the study was an editorial that the lead author published in…three different journals. In light of all this, I regret linking to this paper when it first came out. It sure looks to me like I fell for clickbait. I should’ve either looked at it more closely before linking to it, or used better heuristics to decide whether it was worth linking to. It’s too sloppy a paper to take seriously. But it’s too late now. The paper said something that a lot of people on social media wanted to hear, or at least found interesting, so it got shared widely. It’s too late for the serious doubts that have now been raised to get much traction. Someone should redo this randomized experiment properly, and report the results properly.

9 thoughts on “Friday links: (re)claiming scientific identity, RIP Joanna Cole, and more”

  1. Unfortunate about the study on Tweets. We’ve all accidentally fallen into that trap before; at least someone took the time to dig through the details.

  2. Thanks for following up on the Tweet study. My priors about these things leaned weakly in that direction already, but your linking to it shifted me a bit further. Of course, I don’t think I even clicked through, much less looked at it in any detail, yet I think it did nudge my thinking. I worry that I make this kind of error all the time in how I conceptualize science and take in information more generally.

    A good example of how we can all be a bit more vigilant in what we let influence us. Or at least be honest when we are, as we will undoubtedly be, less vigilant.

    Hopefully there is some follow-up here in some form. Surely this is an interesting research question, though possibly a difficult one.

  3. The opening sentence of the Results part of the Abstract should have rung alarm bells:

    “When compared to control articles, tweeted articles achieved significantly greater increase in Altmetric scores (Tweeted 9.4±5.8 vs. Non-Tweeted 1.0±1.8, p<0.001)"

    Given that Altmetric scores include in their calculation the number of times a paper is tweeted, it's the equivalent of saying "When compared to a control group of rats that we didn't feed, fed rats achieved a significantly greater increase in weight"….

    • Yes, as Phil Davis points out, Altmetric scores are based in part on how often a paper was tweeted. So of course papers that are randomly assigned to be tweeted are going to have higher Altmetric scores than papers that are randomly assigned to be untweeted!

  4. Not directly on the topic of the impact of tweeted vs. untweeted papers, but I did have an amusing observation. At GEB we published a corrigendum. It was a nothing-burger corrigendum – it added a middle initial for one author, added an ORCID for one author, and corrected one reference that had the same first author and year but pointed to the wrong paper. That’s it. Because of the RSS feed it got tweeted out with just the title, without the word “corrigendum” or anything. That link to the corrigendum got retweeted almost as many times as the original paper. This nothing-burger corrigendum had excellent altmetrics!

    Point being, it made me realize just how shallow the engagement with papers on Twitter is. None of the 20+ retweeters bothered to follow the link, or it would have been obvious it was a content-free corrigendum they were forwarding to the world. None of the 20+ retweeters were tracking, and on top of, the body of literature in their field enough to realize that this same paper title had been published 6 months earlier. That experience left me with the impression that at most 5-10% of the people tweeting a paper ever even read the abstract. That kind of shallow engagement is not the kind that turns into citations or other measures of impact (other than the retweet component of Altmetrics).

    • When it comes to deciding what to pay attention to, shortcuts and heuristics (both conscious and unconscious) are unavoidable. Your amusing observation indicates that some people, when deciding what to retweet, use the following heuristic: “retweet anything published in GEB with a title that sounds interesting to me”. In most cases, that’s a reasonable heuristic. It will mostly result in you retweeting good papers, because GEB is a good journal that receives lots of good submissions from good scientists, and evaluates those submissions carefully on various dimensions. But all heuristics have their characteristic “failure modes”–the characteristic ways in which they go wrong, in the cases when they do go wrong. One of the failure modes of “retweet any GEB paper with an interesting-sounding title” is “once in a while you’ll retweet a corrigendum”.

      The interesting question to ask is “which heuristics are better than others, for which purposes?” Old fart that I am, I’m tempted to say that too many scientists on social media use bad heuristics when deciding what to share. That too often, they’re either sharing shoddy clickbait, or else they’re failing to add much value because they’re sharing solid stuff that everybody on social media already knows about. But I dunno. One constant of history is that the filtering problem–deciding what to pay attention to–is always growing, because there’s always more and more stuff one could pay attention to. And another constant of history is that people who solve the filtering problem using one set of heuristics think that people who use other heuristics are Doing It Wrong. So maybe my complaint about what scientists retweet is the equivalent of complaining about “kids these days”. Which is something people have been complaining about for as long as there have been kids, regardless of what “kids these days” are actually like, or what kids were like back in the “old days”.

      Relevant old post: https://dynamicecology.wordpress.com/2013/04/08/selective-journals-vs-social-networks-alternative-ways-of-filtering-the-literature-or-po-tay-to-po-tah-to/

  5. Pingback: Recommended reads #177 | Small Pond Science

  6. The author of this Tweet notes that, though the piece is perhaps too shoddy to be useful pedagogically, it could be useful for ‘seeing how many statistical fallacies you can find.’ I’m really too inept to be able to judge the details much for myself, but some of the claims seem pretty obviously extreme…
