Friday links: regression to the mean vs. the Dunning-Kruger effect, #overlyhonestsyllabus, and more

Also this week: the talks from the ASN virtual meeting, nature vs. nature photographers, Slate Star Codex reborn, and more.

From Jeremy:

This sounds like trolling, but it’s from Stephen Heard who is definitely not a troll, so: what if going to online-only classes (as many colleges and universities have done) has made those classes better? I doubt it–I think Stephen’s overestimating the proportion of students for whom the upsides outweigh the downsides. But go read Stephen and see what you think.

The Dunning-Kruger effect is not a thing. It’s just regression to the mean. Will add this to my list of statistical vignettes for teaching intro biostats.

All the talks from the recent ASN virtual meeting “in” Asilomar are now available on YouTube.

Writing in Science, Laura Stark reviews Janice Nimura’s new biography of Elizabeth Blackwell, the first woman to receive a medical degree in the US. I know nothing of Blackwell, who sounds like a difficult figure to place into any of the usual heroic overcoming-of-obstacles narratives we like to tell about scientific and medical pioneers. Elizabeth Blackwell was indeed a pioneer who overcame massive discrimination–but she herself was a pro-slavery racist who believed that women and men weren’t equal. She also seems to have benefited greatly from collaborative work with her sister, while also publicly disavowing her sister.

Popular blog Slate Star Codex is back under a new name on Substack. Here’s the backstory. The author has decided to reveal his name (which you’ll have to click through to discover).

I’m tempted to write something like this in my next syllabus just to keep the folks tasked with approving syllabi on their toes. 🙂

It me. 🙂

It also me. 🙂

I’M EVERYWHERE. 🙂

Multa novit vulpes (“the fox knows many things”). 🙂

5 thoughts on “Friday links: regression to the mean vs. the Dunning-Kruger effect, #overlyhonestsyllabus, and more”

  1. The Dunning-Kruger effect sounds true. In fact, it sounds truer than the truth. How might we remove this fable? Editing the Wikipedia page would be a start, but Wikipedia bases truth on what the consensus of lay people thinks is true, and the consensus is that the Dunning-Kruger effect is real, artificial sweeteners are safe, WMDs were in Iraq, and Anna Nicole married for love.

    • That’s not, of course, how Wikipedia works. The linked article links to a couple of journal articles, which’d be enough for Wikipedia to note that the existence of the effect is disputed, which it now does: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect#Mathematical_critique

      And the talk page reveals some attempt to understand whether its existence is disputed or debunked at this point, which is reflected in the article. If one is able to navigate that (e.g., with recent published reviews or something) it could help a lot. It’s in a pretty strong state of flux at the moment.

  2. The Dunning-Kruger effect is not a thing. It’s just regression to the mean.

    I don’t think it has anything to do with regression to the mean; that was an extra bit from the writer which was, as far as I can tell, irrelevant. (I found the article a bit annoying because it never actually explained the statistical argument; I had to read the linked articles and do some simple simulations and plots to figure it out.)

    The real problem is that test scores are bounded by 0 at the low end and 1 (or 100) at the high end. So if you assume that all people are equally accurate (or confused) about how well they do on your test of their ability at some task — and are equally unbiased — then people who score around 0.5 will give you estimates in some (symmetric) range scattered about 0.5. People who score very low will give you estimates “biased” upwards, because they can’t estimate a score below 0.

    I’m not sure it necessarily means that something like the Dunning-Kruger effect is nonexistent, but it does imply that the standard methods that have been used to measure it can’t discriminate between a real effect and the simple effects of upper and lower bounds on the possible measurement values.

    • I don’t think hard bounds on the range of possibilities are essential to the argument, Peter. After all, regression to the mean is still a thing even for (say) data drawn from a normal distribution, which is an unbounded distribution. Stated loosely, regression to the mean is just “if you happen to sample an observation that’s far from the mean, the next observation you sample will tend to be closer to the mean than the previous one”.
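      The unbounded case is easy to check numerically. Here’s a quick Python sketch (my own illustration, not from the linked post): two independent, noisy test scores for the same underlying abilities, everything normally distributed with no bounds anywhere, and the second scores of the top performers still slide back toward the mean.

      ```python
      import random

      random.seed(0)

      # Two independent noisy measurements of the same underlying ability,
      # all normally distributed -- no upper or lower bound anywhere.
      n = 100_000
      ability = [random.gauss(0, 1) for _ in range(n)]
      test1 = [a + random.gauss(0, 1) for a in ability]
      test2 = [a + random.gauss(0, 1) for a in ability]

      # Condition on an extreme first score; the second score of those same
      # people tends back toward the population mean of 0.
      high = [(t1, t2) for t1, t2 in zip(test1, test2) if t1 > 2.0]
      m1 = sum(t1 for t1, _ in high) / len(high)
      m2 = sum(t2 for _, t2 in high) / len(high)
      print(m1, m2)  # m2 is well below m1, i.e. closer to the mean
      ```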

      • The point is that regression to the mean is not what’s going on here, nor are repeated observations. (Neither of the two papers linked to in that post mentions regression to the mean.)

        Consider people who scored 0.5 and estimate their score with a dispersion of 0.2: they’ll report estimates mostly within 0.3-0.7, with a mean of 0.5. But people who scored 1.0 will report estimates mostly within 0.8-1.0 (and never more than 1.0), so the mean of their estimates will be less than their true score. The opposite effect happens for people whose true score is 0. The effect gets weaker as true scores get closer to 0.5. And so you get the Dunning-Kruger effect, or something very like it: people with low (true) scores tend to “overestimate” their scores, while people with high (true) scores tend to “underestimate” them.

        Here’s some R code to demonstrate the effect, taken from Cosma Shalizi’s Pinboard bookmark for that post (which is where I learned about it):

        n <- 1000
        s <- 1
        actual.raw <- rnorm(n)                          # true scores
        perceived.raw <- actual.raw + rnorm(n, sd = s)  # noisy self-estimates

        # Bin the true scores into quartiles (include.lowest keeps the minimum
        # observation from falling outside the first interval)
        buckets <- cut(actual.raw,
                       breaks = quantile(actual.raw, probs = (0:4)/4),
                       include.lowest = TRUE)

        # Convert both to percentile (0-100) scores
        perceived <- 100 * ecdf(perceived.raw)(perceived.raw)
        actual <- 100 * ecdf(actual.raw)(actual.raw)

        plot(x = actual, y = perceived, cex = 0.1)
        points(x = aggregate(actual ~ buckets, FUN = mean)[, 2],
               y = aggregate(perceived ~ buckets, FUN = mean)[, 2],
               pch = 16, col = "blue")
        abline(0, 1, col = "grey")                    # perfect self-assessment
        abline(lm(perceived ~ actual), col = "blue")  # the "Dunning-Kruger" slope
