Friday links: confirmation bias confirmed, peak reading, diatom art, and more (UPDATEDx2)

From Meg:

Scientists are reading less than they used to! Well, unless you take the median, in which case they’re reading the same amount as before, and you have to rewrite your Nature news piece. ht: Liz Neeley

Here’s a post by Mia You, a woman at Harvard, about her frustration with the Harvard library policy of not allowing children into the library. The good news is that the official policy was revised within days of the post appearing.

From Brian:

In the sad but too true category: how to kill a paper when reviewing (the first comment, containing a rebuttal letter to the editor, is even better). (ht @calestous)

From Jeremy:

A case study of the importance of best practice in experimental design: studies of nestmate recognition in ants often compare intra- vs. inter-colony aggression, with the expectation that intra-colony aggression should be low or absent. And that’s indeed what most studies find–but in large part because of confirmation bias. A new meta-analysis shows that the minority of studies conducted blind find much higher rates of intra-colony aggression, and much smaller differences between intra- and inter-colony aggression. In other words, researchers observing behaviors that are difficult to classify tend to see what they expect to see, even if it’s not actually there. I’d be curious to know the range of reactions to this among researchers who do this sort of work. How many are nodding their heads and going “Yup, that’s why we always use blinding”, how many are going “Wow, maybe I’d better start blinding”, how many are going “Well, this just shows what happens if you observe animal behavior without sufficient experience and training”, how many are going “Ok, but that just shows behavioral studies are hard, you can’t expect them all to be perfect”, and how many are going “Meh, even non-blind studies still get the direction of the effect right on average, so this is just nitpickers carping about trivialities”? (ht Florian Hartig, who has additional discussion and suggests using this study as a teaching tool in undergraduate experimental design courses. A suggestion I plan to take up!) (UPDATE: I had forgotten this: in an old post I asked if ecologists should do blinded data analyses. Apparently the answer is “yes”.)

A long-running survey (since 1977) reports that US scientists may have reached “peak reading”. Previously, the trend had been for scientists to report reading more and more papers, while spending less time on each. That trend seems to have peaked. In any case, “power browsing”–skimming quickly, just looking to “get the gist” or some useful snippet of information–is clearly the new normal. Which is one big reason why post-publication “review” is mostly a non-starter. As I’ve said before, the only time people read like reviewers is when they’re acting as reviewers. There are exceptions (the arsenic life thing, for instance), but they’re just that–exceptions.

Terry McGlynn responds to my post on natural history vs. ecology. You should click through; he has a lot to say, and as always his thoughts are worth reading and very readable. The very short version is that he thinks academic training in ecology overemphasizes academic job skills and tasks (analyzing your data, writing your next paper, getting your next grant…) at the expense of skills that make no immediate, obvious contribution towards getting an academic job. And that “knowing the natural history of study systems other than one’s own” is one of those skills without obvious job market value. His broader point is an important one that goes well beyond natural history, since that’s just one example of the skills, activities, and knowledge that have no immediate, obvious professional payoff (“blogging” is another, and there are many more…). The comment thread over there is good too–scroll down for a thoughtful and wonderfully-phrased comment from Dan Janzen.

You’re never too young to write a major synthesis paper. Indeed, arguably you can even be younger than the author of that post. In his famous “modest advice” to grad students, Steve Stearns suggests that a good thesis proposal should be publishable as a critical review paper.

A radical proposal (UPDATE: link fixed) to reform granting bodies: give every qualified scientist a block grant for the same amount, with the requirement that they give some fixed fraction of it away to another qualified scientist. Discuss! [grabs popcorn]

Diatoms, artistically arranged. (ht Ed Yong)

Videos of the talks from BAHFest, the festival of Bad Ad-hoc Hypotheses about evolution. The idea is to present interesting-sounding, seemingly well-grounded evolutionary hypotheses that nevertheless are obvious rubbish. A fun and interesting way to try to teach the general public (and evolutionary psychologists?) to be more discriminating. Maybe also a good teaching tool for classes on scientific methods and study design? I haven’t actually watched the videos yet, but I’m looking forward to checking out the one on adaptive infant aerodynamics. 🙂 (ht Ed Yong)

And finally, peanut butter and jelly…fish (ht Ed Yong)

6 thoughts on “Friday links: confirmation bias confirmed, peak reading, diatom art, and more (UPDATEDx2)”

  1. Hey Jeremy– the radical proposal link isn’t working.

    I’ve read about that before. It’s a radical and really intriguing idea. Clearly the US funding system is under immense strain, which makes this a good time to really think about how funding should be allocated. However, I don’t see the funding agencies ever adopting the idea in that report. But for fun:
    Pros: Nothing makes you think harder about whose work you value most than giving them money. I expect that two giving strategies would dominate (assuming good conflict-of-interest monitoring): 1) give to the people doing the most exciting/interesting/high-profile work in your area; 2) give to someone who is likely to provide something that helps your own research (software, data, etc.). As such it could reward collaborative researchers, or those who make valuable products other than scientific papers. Junior people would get a guaranteed allotment of funding to get research products out the door.
    Cons: there’s clearly not enough money to give more than a pittance to everyone who would like to be an NSF-funded researcher, so that pool would clearly need to be restricted; it’s unclear to me how. Could also end up with the big names having crazy levels of support, without some sort of cap.
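
    (For fun, here’s a toy simulation of that last worry. All the numbers, and the rule that people give in proportion to recipients’ current funding, are my assumptions, not part of the actual proposal; it’s a sketch of how a rich-get-richer dynamic could emerge, not a prediction.)

    ```python
    import random

    random.seed(1)  # reproducible toy run

    N = 1000         # hypothetical pool of qualified scientists
    BASE = 1.0       # everyone's identical block grant (arbitrary units)
    GIVE_FRAC = 0.5  # fixed fraction each person must give away
    YEARS = 20

    funds = [BASE] * N  # last year's totals, used as the "fame" weight
    for _ in range(YEARS):
        new = [BASE * (1 - GIVE_FRAC)] * N  # everyone keeps the retained share
        for giver in range(N):
            # assumed giving rule: pick a colleague (not yourself) with
            # probability proportional to their current funding
            while True:
                recipient = random.choices(range(N), weights=funds)[0]
                if recipient != giver:
                    break
            new[recipient] += BASE * GIVE_FRAC
        funds = new

    funds.sort(reverse=True)
    print(f"Top 1% share of all funding: {sum(funds[:N // 100]) / sum(funds):.0%}")
    ```

    When I run this, the top 1% ends up with well over their “fair” 1% of the pot, whereas with uniform random giving they stay close to 1%. Crude, of course, but it suggests the cap question is worth taking seriously.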

    • “Could also end up with the big names having crazy levels of support, without some sort of cap.”

      Could be, hard to say of course. But your comment raises a larger point that doesn’t come up often enough in discussions of such radical proposals (whether to reform grant-giving or peer review or whatever): transient effects of initial conditions. Maybe in the very long run a world that operated under the radical reform would be a better world. But in the short run (and maybe even the quite-long-but-not-infinitely-long run!), there would be all sorts of carry-over effects from the world we currently live in.

      p.s. Link fixed, thanks.

      • @lowendtheory:

        Yeah, the mind reels at the ways in which this could be gamed. Even leaving aside out-and-out gaming, it’s easy to imagine all kinds of crazy things one might try to do in order to attract funding under such a system.

    • Another potential con is the effect of distributed funding decisions on the gender gap in science. A meta-analysis has shown that centralized funding decisions are biased in favor of men (1). Even if decentralized funding doesn’t exacerbate this bias (it might), I worry that identifying sources of bias and designing corrective policies would be limited by small sample sizes. For instance, if I gave my 50% to 2 women and 3 men over the course of a 5-year period, is my deviation from equality just noise, or are my funding decisions biased? If the funding bias favoring men is 7%, as reported by (1), funding agencies may not be able to identify scientists who are (explicitly or implicitly) biased until those scientists near retirement.
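
      To put rough numbers on that worry, here’s a quick power sketch. I’m assuming each award is an independent coin flip that goes to a man with probability 0.57 rather than 0.50 (my own loose translation of the ~7% bias in (1)), and asking how many observed awards a one-sided binomial test would need before it could flag the bias.

      ```python
      from scipy.stats import binom

      p_null, p_bias, alpha = 0.50, 0.57, 0.05  # assumed per-award bias

      for n in (5, 20, 100, 500):
          # smallest k such that P(X >= k | fair giving) <= alpha,
          # i.e. the rejection threshold for a one-sided binomial test
          k = next(k for k in range(n + 1) if binom.sf(k - 1, n, p_null) <= alpha)
          power = binom.sf(k - 1, n, p_bias)  # P(X >= k | biased giving)
          print(f"{n:4d} awards: flag bias if >= {k} go to men; power = {power:.2f}")
      ```

      With a handful of awards per scientist, power is essentially nil; you need hundreds of observed decisions before a bias of that size becomes statistically detectable. Which is the “near retirement” problem in a nutshell.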

      Second, a discussion of whether or not to impose caps on funding would be better informed by data showing the shape of the distribution of outcomes (e.g., # of citations) per dollar invested. Certainly this might vary across disciplines and even sub-disciplines. Case in point: my biggest single expense last field season was a 2000-unit case of tent stakes for a plant demography study, a laughably small cost from the perspective of my fellow grad students doing molecular work. Expecting that our per-dollar productivity can be meaningfully measured on the same scale seems dubious.

      Finally, I see interesting parallels between the governance of peer review and that of nations/collections of nations. Any “solution” is likely to be dynamic in time and warrants continuing evaluation and dialogue.

      (1) Bornmann, L., R. Mutz, and H.-D. Daniel. 2007. Gender differences in grant peer review: A meta-analysis. Journal of Informetrics 1:226–238. http://dx.doi.org/10.1016/j.joi.2007.03.001

  2. I’ve heard of people writing their names with diatoms, though I can’t recall enough details to have any luck with Google. I believe it was related to membership in societies of diatom enthusiasts. Has anyone else heard of this?
