Friday links: insulting fish, how to pronounce “niche”, and more

Also this week: RIP keeping up with the literature, many analysts vs. one dataset, going to grad school as a veteran, priming studies continue to not replicate, field guide to social scientists, and more.

From Meghan:

So many possibilities in this thread! Mutton snapper might be my favorite:

From Jeremy:

Biggest annual measles outbreak in Europe in at least a decade already this year. 37 deaths.

Simon Leather argues that reading just to “keep up with the literature” is dead. (ht Meghan) Related: my old post asking “How do you read?” (Meghan’s addition: I sent myself a link to that blog post in July as a reminder to read it, and I still haven’t read it. I think that probably demonstrates Simon’s point and extends it to blogs.)

I’ve linked to more informal versions of this before, but the “many analysts, one dataset” project paper is out now (open access preprint version). Twenty-nine teams of analysts were given the same dataset with which to answer the same question, using any statistical analysis of their choice. Good fodder for an intermediate biostats course: how much do debatable analytical choices affect the outcome of your analysis?
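If you want a classroom-sized version of that exercise, here’s a minimal sketch in Python. To be clear, this is my own toy illustration, not the project’s data or models: the simulated dataset, the candidate specifications, and the exclusion rule are all invented. The point is just that equally defensible analytical choices give noticeably different answers from the same data.

```python
# Toy "many analysts, one dataset" illustration (invented data, not the
# project's): estimate the "effect of x on y" under several defensible
# model specifications and compare the answers.
import numpy as np

rng = np.random.default_rng(42)
n = 200
z = rng.normal(size=n)                  # a confounder some "teams" adjust for
x = 0.5 * z + rng.normal(size=n)        # focal predictor
y = 0.3 * x + 0.6 * z + rng.normal(size=n)   # true effect of x is 0.3

def ols_slope(design, response):
    """Coefficient on x (first design column) from an OLS fit."""
    coef, *_ = np.linalg.lstsq(design, response, rcond=None)
    return coef[0]

ones = np.ones(n)
specs = {
    "x only":              np.column_stack([x, ones]),
    "x + z":               np.column_stack([x, z, ones]),
    "x + z + x*z":         np.column_stack([x, z, x * z, ones]),
    "x only, y trimmed":   None,  # handled below with an exclusion rule
}

for name, design in specs.items():
    if design is None:
        keep = np.abs(y - y.mean()) < 2 * y.std()   # a debatable exclusion rule
        est = ols_slope(np.column_stack([x[keep], np.ones(keep.sum())]), y[keep])
    else:
        est = ols_slope(design, y)
    print(f"{name:20s} estimated effect of x: {est:+.3f}")
```

Here the unadjusted specification inflates the estimate well above the true 0.3 because it ignores the confounder, and the exclusion rule shifts things again. Multiply choices like these across 29 independent teams and the spread of results in the actual project looks a lot less surprising.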

Strap in, because this next linkfest entry turned into a mini-post… Camerer et al. (2018) repeated (key bits of) all 21 of the social science experiments published in Nature or Science from 2010-2015. About 13 of them replicated (12-14, depending on exactly how you define “replicated”). Effect sizes were about 1/2 of the original effect sizes on average (presumably an illustration of Andrew Gelman’s “exaggeration ratio”; a toy simulation of the idea follows this entry).

More interesting to me was that prediction markets and surveys of social scientists did an excellent job of predicting which studies would replicate. This is the third study to show that prediction markets do a great job of predicting replicability in the social sciences. Which raises the question: what’s the purpose of further replications? Who’s the audience for them? If social scientists as a group already make good judgments about which published studies to believe and which to be skeptical of (perhaps because they’ve been taught how to do so by recent replication studies!), why go to all the trouble of doing the replications? Is it to try to get a minority of ‘true believers’ to quit doing research that everybody else knows won’t replicate? Or to get those ‘true believers’ to quit recommending such research for acceptance at leading journals? A bit of evidence consistent with that suggestion comes from the Vox piece on Camerer et al., in the form of a remarkable and admirable quote from the first author of one of the studies that failed to replicate: “In hindsight, our study was outright silly…I’d like to think it wouldn’t get published today.” (That’s a quote about a freakin’ Science paper! From just six years ago! Talk about changing your mind!)

Or maybe the prediction market participants were an unrepresentative sample of social scientists, and we still need replication studies because the majority of social scientists are still learning what makes for repeatable science? Now I want to see a prediction market for first-time studies, rather than replications of published studies. Are social scientists also good at predicting which planned studies will find an effect? If so, does that mean prediction markets should be incorporated into grant proposal reviews?
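In case the “exaggeration ratio” (a.k.a. Type M error) is unfamiliar: when the true effect is small relative to the noise, the estimates that happen to clear the significance bar systematically overestimate it, so replications should be expected to find smaller effects. Here’s a minimal simulation sketch; the true effect, standard error, and alpha level are numbers I made up for illustration.

```python
# Minimal sketch of the "exaggeration ratio" (Type M error).
# All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.2      # small true effect
se = 0.15              # standard error of each study's estimate
n_studies = 100_000

estimates = rng.normal(true_effect, se, size=n_studies)
significant = np.abs(estimates) > 1.96 * se   # two-sided test at alpha = 0.05

mean_sig = estimates[significant].mean()
print(f"power                  ~ {significant.mean():.2f}")
print(f"mean significant est.  ~ {mean_sig:.2f}")
print(f"exaggeration ratio     ~ {mean_sig / true_effect:.1f}x")
```

With these made-up numbers, only about a quarter of studies come out significant, and the ones that do overestimate the true effect by nearly a factor of two. That is at least consistent with the replications finding effects about half the size of the originals.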

Following on from the previous link: how did bettors predict replicability so well? Camerer et al. speculate that bettors were betting that low p-value studies would replicate, and there’s some evidence for that. And Ed Yong’s piece on Camerer et al. has quotes from psychologists who won money by betting against any study that sounded like headline-grabbing clickbait. But it looks to me like you could’ve done well by betting that priming studies never replicate, whereas everything else does (which might amount to the same thing as betting against clickbait…). The studies that failed the replication attempts were almost all priming studies: washing your hands eliminates post-decision cognitive dissonance, tactile sensations influence social behavior (e.g., touching hard objects increases inflexibility in negotiations), showing people a picture of Rodin’s The Thinker encourages atheism, reading brief passages of literary fiction improves ability to infer emotions, writing about exam worries boosts exam performance, and priming people to trust their intuitions increases generosity. In contrast, by my count only one of the studies that replicated involved priming. That’s in line with previous replication studies finding that priming studies mostly don’t replicate (e.g., see this list).

Continuing to follow on from the previous link… That priming studies mostly don’t replicate has me thinking back to our old posts on stereotype threat (definition: reminding people who are members of a negatively-stereotyped group about their membership in the group, or about the negative stereotype, causes those group members to perform worse on tasks; e.g., girls perform worse on math tests if you remind them that they’re girls, or about the stereotype that girls are bad at math). Stereotype threat studies definitely aren’t clickbait, and in contrast with many priming studies on other topics there are plausible reasons to expect stereotype threat to replicate. But on the other hand, many published studies of stereotype threat are non-preregistered priming studies with small sample sizes, and one recent meta-analysis finds evidence for serious publication bias in the literature on stereotype threat. So I dunno: should we expect published priming studies of stereotype threat to replicate, or not? I’m only aware of a few preregistered replications of stereotype threat priming experiments, which so far mostly haven’t replicated; see here, here, here, here, and here. More preregistered replications seem to be in the pipeline.

(Aside: note that the scientific hypothesis being tested in these stereotype threat priming experiments is much narrower and more specific than “negative stereotypes have bad consequences” or “members of certain groups face disadvantages affecting their academic performance”. In that respect, all priming experiments on stereotype threat, whatever their outcome, have only a limited ability to speak to the undeniable real-world consequences of stereotypes writ large.)
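And since publication bias keeps coming up, here’s one more minimal sketch, again with entirely invented numbers (it is not the cited meta-analysis): if journals mostly publish significant results, a nearly-null effect can look robust in the published record, which is exactly the situation in which preregistered replications come up empty.

```python
# Minimal sketch of how publication bias inflates a literature's apparent
# effect size. All numbers are invented; this is not the cited meta-analysis.
import numpy as np

rng = np.random.default_rng(7)
true_effect = 0.05        # nearly-null true effect
n_studies = 5000
ses = rng.uniform(0.05, 0.30, size=n_studies)   # small, noisy studies
estimates = rng.normal(true_effect, ses)

# "File drawer": only studies with one-sided p < .05 get published.
published = estimates / ses > 1.64

print(f"true effect:               {true_effect:.2f}")
print(f"mean of ALL studies:       {estimates.mean():.2f}")
print(f"mean of PUBLISHED studies: {estimates[published].mean():.2f}")
print(f"fraction published:        {published.mean():.2f}")
```

The published studies average an effect several times the true one, purely as a selection artifact; publication-bias diagnostics like funnel-plot asymmetry are designed to detect exactly this signature.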

Going to grad school as a veteran. (ht @dandrezner)

Andrew Hendry and Dan Bolnick with tips on promoting interaction at academic conferences, for faculty, postdocs, students, and conference organizers. Good suggestions, though I do think Andrew is overgeneralizing from his own example a bit when he recommends that profs set aside large blocks of time to stand in a public area and chat with anyone who passes by. It’s great that this works for Andrew, and I know some other people for whom it works. But they’re all famous senior people who have many friends. Personally, I doubt that, if I just stood by the registration desk at the ESA meeting, I’d have people stopping to chat with me all day. And I say that as a reasonably well-known ecologist who’s been attending ESA almost every year for over 20 years.

The arrival of social science genomics. Good balanced thoughtful essay. (ht @kjhealy)

A field guide to social scientists. First person to produce one for ecologists gets +1000 Internet Points. 🙂

No plan survives first contact with the enemy. 🙂

Dan Bolnick with a limerick on how to pronounce “niche”. 🙂  Our old poll shows that 65% of people pronounce it wrong. [runs away] 🙂

7 thoughts on “Friday links: insulting fish, how to pronounce “niche”, and more”

  1. People have really grown to like our linkfests over the years. They used to be much less popular than the average post; now they’re much more popular…

  2. Interesting story on vets. My undergrad advisor in geology was a Marine in Viet Nam. He’s now retired. It would be interesting to hear from people in that earlier generation of vets as well, and about how they coped with virtually no mental health support.

  3. Re insulting fish names. Freshwater mussels may have the lead on colorfully descriptive common names, which I’m sure could be fashioned as insults. Heelsplitters, pig toes, elk toes, pimple-backs, warty-backs, muckets, fatmuckets, monkey faces, slop buckets and more. This made me envision a version of a Shakespearean or Far Sidean insult trashtalkfest. Purple Warty-Back! Yellow-back mucket!

    Some of the historic common names of mussels are now deeply offensive and unrepeatable, although they obviously weren’t considered so in a 1915 U.S. Department of Commerce, Bureau of Commercial Fisheries report by Robert Coker, “The Common and Scientific Names of Fresh-water Mussels” (https://www.biodiversitylibrary.org/item/80563#page/3/mode/1up). Even ‘mucket’ has questionable etymology according to some sources.
