Friday links: do people really object to experiments, decline effect redux, Darwin vs. trees, and more (UPDATED)

Also this week: the latest on Plan S, Gresham’s Law vs. simulated pandas, fractals vs. David Foster Wallace, and more.

From Jeremy:

Here’s Greg Wilson’s handy new open access paper on 10 tips for teaching programming. (ht a colleague)

Data Colada critiques a recent PNAS paper finding that people have a general aversion to policy-oriented experiments. I linked to the paper when it first came out, so I wanted to link to this critique as well.

An interview with Daniella Rabaiotti, who has written three popular science books while doing an ecology Ph.D.

The latest on Plan S and the state of open access publishing more broadly. Momentum behind Plan S seems to be stalling.

Insights into the economics of open access publishing from the publication choices of, and open access fees charged to, Gates Foundation-funded researchers. One tidbit that slightly surprised me: fully OA journals from for-profit publishers and non-profit publishers charge similar fees on average. (Though it’s not clear if this result would hold for a broader sample of researchers submitting to a broader range of OA journals.)

I don’t know anything about the psychological idea of ego depletion, which has been tested in many, many experiments. But this graph showing a steady linear decline in the effect size of ego depletion studies over time, from massive effect sizes in the late ’90s to basically zero today, is striking. The creators of the linked graph argue that study design has gradually improved over time, leading to less upwardly-biased estimates of effect size. Casual googling turns up various other cases of declines in estimated effect sizes over time in various social science fields (e.g., Gong & Jiao 2019, Rodgaard et al. 2019, but see Stephens 2016 for examples of increasing effect sizes over time). Anyway, all this had me thinking back to all that discussion of the “decline effect” a few years ago. I wonder what you’d find if you systematically went through a bunch of old ecological meta-analyses and plotted the effect sizes of individual studies vs. the years the studies were published? I mean, maybe you wouldn’t find anything interesting, but it wouldn’t be that hard to check. I’m sufficiently curious about this that I think I’ll do it… (UPDATE: in the comments, Tim Parker points us to a 2002 study of the decline effect in ecological meta-analyses. 2002 was a while ago, and many more meta-analyses have been published in EEB since then. Seems like this might be worth revisiting. Our commenters are the best!)
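
For anyone else tempted to run this check: here’s a minimal sketch of the sort of plot I have in mind, assuming a hypothetical CSV file `meta_analysis.csv` with one row per study and columns `pub_year` and `effect_size` pulled from a published meta-analysis. A serious analysis would also weight studies and correct for non-independence; this just eyeballs the trend.

```python
# Minimal sketch of a decline-effect check: plot per-study effect sizes
# against publication year and fit a simple linear trend.
# Assumes a hypothetical CSV ("meta_analysis.csv") with columns
# "pub_year" and "effect_size"; adapt the names to your actual data.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

df = pd.read_csv("meta_analysis.csv")

# Simple (unweighted) linear trend of effect size against publication year.
slope, intercept, r, p, se = stats.linregress(df["pub_year"], df["effect_size"])
print(f"slope = {slope:.4f} per year (p = {p:.3f})")

plt.scatter(df["pub_year"], df["effect_size"], alpha=0.6)
plt.plot(df["pub_year"], intercept + slope * df["pub_year"], color="red")
plt.xlabel("Publication year")
plt.ylabel("Effect size")
plt.title("Effect size vs. publication year")
plt.show()
```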

Did you know that Charles Darwin’s notebooks contain other sketches of evolutionary trees besides the iconic one? And that he never called any of them a “tree”? Indeed, next to one early sketch he wrote “tree not a good simile – endless piece of sea weed dividing”.

Any connection between fractals and, um, David Foster Wallace novels sounds pretty vague and superficial to me, but this post nevertheless contains the best brief explanation of fractal dimensions I’ve ever seen. I assume it’s a standard explanation that I just happened across for the first time, but still, it’s great. Before, I had only a general idea of what fractal dimensions are, with no idea how they were calculated or how to interpret them. Now I do!
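
For anyone else who wants to see the calculation concretely: a standard way to estimate a fractal dimension is box counting. Cover the set with grids of boxes of shrinking side length, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s). I don’t know whether that’s exactly the method the linked post describes, but here’s a minimal, self-contained sketch using the Sierpinski triangle, whose true box-counting dimension is known (log 3 / log 2 ≈ 1.585):

```python
# Box-counting estimate of a fractal dimension: cover the set with grids of
# boxes of shrinking side length s, count occupied boxes N(s), and estimate
# the dimension as the slope of log N(s) vs. log(1/s).
import numpy as np

# Generate a Sierpinski triangle via the "chaos game" as a test set
# (its true box-counting dimension is log 3 / log 2 ≈ 1.585).
rng = np.random.default_rng(0)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
pts = np.zeros((200_000, 2))
p = np.array([0.1, 0.1])
for i in range(len(pts)):
    p = (p + verts[rng.integers(3)]) / 2  # jump halfway to a random vertex
    pts[i] = p

def box_count(points, n_boxes):
    """Number of occupied boxes when the unit square is split n_boxes per side."""
    idx = np.floor(points * n_boxes).astype(int)
    idx = np.clip(idx, 0, n_boxes - 1)
    return len(np.unique(idx, axis=0))

sizes = np.array([4, 8, 16, 32, 64, 128])  # boxes per side (so s = 1/n)
counts = np.array([box_count(pts, n) for n in sizes])

# Slope of log N(s) vs. log(1/s) = log(n_boxes) is the dimension estimate.
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"estimated dimension ≈ {slope:.3f} (theory: {np.log(3) / np.log(2):.3f})")
```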

As someone who grew up in a family of small town grocers, and who thinks that ideals aren’t always best pursued by, well, idealism, I really enjoyed this.

And finally, this seems like as good a summary of 2019 as any: “grinding out millions of [warthogs] is the only hope you’ve got”. Hakuna matata!/it means no worries pandas! (ht Matt Levine, who notes that this is a hilarious example of Gresham’s Law.)

11 thoughts on “Friday links: do people really object to experiments, decline effect redux, Darwin vs. trees, and more (UPDATED)”

  1. To me the fact that Plan S (or at least one of its leaders) is considering a regional firewall around its “open access” publications completely undermines any notion that the driving force behind OA is a moral one. Indeed, to me, it is starting to feel like it is more about winning the fight than any specific guiding principle(s) at this point.

    And I love that Darwin was so botanically/anatomically accurate that he preferred a repeatedly bifurcating seaweed over a tree as a better metaphor for a phylogeny (could have gone with a lycopod as well). That is one heck of a natural historian.

    • “And I love that Darwin was so botanically/anatomically accurate that he preferred a repeatedly bifurcating seaweed over a tree as a better metaphor for a phylogeny”

      Ah, but which is a better metaphor depends on your theory of speciation, does it not? 🙂 (kidding mostly, you make a very good point)

      • Well, a comb phylogeny might be a good fit to a tree (central trunk with meristem sending off shoots), but not your average phylogeny 🙂

    • It’s probably not right to conflate “Plan S” specifically with “Open Access” generically.

      I’ve always been partial to the suggestion that Plan S was at least partly about Europe capturing the publishing environment by making it expensive to publish. But I’m probably highly biased: I come from a field that’s ~100% open access, but in the “Green” style that lets you participate even if you’re penniless, and which would/will be badly hosed by Plan S.

      • You’re right, of course, that Plan S is not the same thing as the whole OA movement.

        Never thought about that specific motive for Plan S but it is at least plausible (and consistent with the recent trial balloon about a firewall around Europe). And it nicely highlights that there are a lot of diverse agendas behind the OA movement.

  2. Coincidentally, over at Slate Star Codex, Scott Alexander’s latest post notes that the estimated effectiveness of all forms of psychotherapy tends to decline over time. He suggests it’s because the initial estimates get made by “true believers”.

    Which has me wondering: has anyone looked for decline effects in the pedagogical literature? Has the estimated effect size of this or that active learning technique declined over time?

  3. There is some existing evidence regarding the decline effect in ecology and related fields. Way back in 2002, Jennions and Møller found evidence of a small decline in effect size over time by examining a number of existing meta-analyses:
    Jennions and Møller. 2002. Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution. Proceedings of the Royal Society B.
    https://royalsocietypublishing.org/doi/10.1098/rspb.2001.1832

    Some other meta-analyses have explicitly looked for the decline effect. Here is a striking recent example of an apparent decline all the way to zero:
    Sánchez-Tójar et al. 2018. Meta-analysis challenges a textbook example of status signalling and demonstrates publication bias. eLife 7: e37385. doi:10.7554/eLife.37385. https://elifesciences.org/articles/37385

    Tracking these examples down in the literature is tough because ecologists don’t tend to use the term “decline effect.”
    I can’t recall any papers gathering evidence of the decline effect from across ecology studies more recently than 2002, but don’t take that to mean that no one has done it. Many meta-analyses have been published since 2002, and it would certainly be interesting to know how many of those data sets show signs of a decline effect.

    • Thanks for the pointers! Super-helpful.

      So far in my skim, I’ve only found one recent ecological meta-analysis that looked for a decline effect. This is sometimes called a “cumulative” meta-analysis: you study how your estimated weighted mean effect size changes over time as the cumulative number of studies grows (a minimal sketch of the calculation is below).

      My offhand impression from *very* casual plotting of the data from about 15 recent meta-analyses in EEB is that estimated mean effect sizes don’t typically change much over time. But the among-study *variance* in effect size often seems to grow over time. I emphasize that’s a *very* tentative impression that could well change as I look at more studies and do proper corrections for non-independence of multiple effect size estimates from the same paper.
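
      Here’s a minimal sketch of that cumulative calculation. The years, effect sizes, and sampling variances below are made-up illustrative numbers, not from any real meta-analysis:

      ```python
      # Minimal sketch of a cumulative meta-analysis: sort studies by
      # publication year, then recompute the inverse-variance-weighted mean
      # effect size as each successive study is added.
      import numpy as np

      # Hypothetical per-study data (illustrative only).
      years = np.array([1995, 1997, 2001, 2004, 2008, 2013, 2018])
      effects = np.array([0.9, 0.7, 0.5, 0.45, 0.3, 0.25, 0.2])
      variances = np.array([0.20, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04])

      order = np.argsort(years)
      e = effects[order]
      w = 1.0 / variances[order]  # inverse-variance weights

      for k in range(1, len(e) + 1):
          mean_k = np.sum(w[:k] * e[:k]) / np.sum(w[:k])
          print(f"after {k} studies (through {years[order][k - 1]}): "
                f"weighted mean = {mean_k:.3f}")
      ```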

  4. I’d think a decline effect is simply regression to the mean (also called reversion to mediocrity, depending on whose basic stats book you used).
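
     A toy simulation can illustrate that intuition, under one loudly flagged assumption: early studies only get “published” when their noisy estimates clear a threshold, so the early published record is inflated and later estimates regress back toward the true mean. Everything below (the true effect, noise level, and threshold) is made up for illustration:

     ```python
     # Toy simulation of regression to the mean as a decline-effect mechanism:
     # every study estimates the same true effect, but early studies are only
     # "published" when their noisy estimate clears a threshold, so early
     # published estimates are inflated and later ones regress toward the truth.
     import numpy as np

     rng = np.random.default_rng(1)
     true_effect = 0.2                                        # hypothetical
     estimates = true_effect + rng.normal(0, 0.3, 1000)       # noisy estimates

     early_published = estimates[:500][estimates[:500] > 0.5]  # selection filter
     late_published = estimates[500:]                          # no filter later

     print(f"true effect:          {true_effect:.2f}")
     print(f"early published mean: {early_published.mean():.2f}")
     print(f"late published mean:  {late_published.mean():.2f}")
     ```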
