Some thoughts on The Undoing Project, especially related to science, academia, and mentoring

I recently finished Michael Lewis’s The Undoing Project, which focuses on the lives and work of psychologists Danny Kahneman and Amos Tversky. They changed how we think about how we think, and their work in psychology has had major influences on economics and medicine in particular. I really enjoyed the book, and there were a few points I wanted to write about here, as I think they are important for scientists, mentors, and/or academics to consider. It’s not a full review of the book* – I’m just focusing on a few areas that I thought were particularly notable.


Variation within and between samples & how we interpret it
One major theme of Amos and Danny’s work is that humans are not nearly as rational as we think we are. (I’m referring to them by their first names because this is what Lewis does throughout the book.) This includes studies that they did on academic statisticians, who routinely made basic errors when asked about different scenarios. One important finding was that

even statisticians tended to leap to conclusions from inconclusively small amounts of evidence. They did this, Amos and Danny argued, because they believed – even if they did not acknowledge the belief – that any given sample of a large population was more representative of that population than it actually was. – page 159

This is what leads people to believe that, if a coin is flipped and comes up heads several times in a row, a tail is more likely on the next flip, rather than having the same likelihood it always does: 50 percent.
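
If you want to convince yourself, here’s a quick simulation (my own sketch in Python, not something from the book) that looks at what happens on the flip immediately after a run of three heads:

```python
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect every flip that immediately follows three heads in a row.
after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 1] and flips[i - 2] and flips[i - 3]]

# The coin has no memory: this prints something very close to 0.5.
print(sum(after_streak) / len(after_streak))
```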

Even people trained in statistics and probability theory failed to intuit how much more variable a small sample could be than the general population – and that the smaller the sample, the lower the likelihood that it would mirror the broader population. They assumed the sample would correct itself until it mirrored the population from which it was drawn. – page 160

Amos and Danny called this “the law of small numbers”: people believed that small samples would have the same properties that large samples do.

One of the things that was most striking to me was a question they posed to their fellow academic psychologists: what should they recommend to a student who collects one sample and finds X and then collects a second sample and finds not X? For the most part, the psychologists didn’t say they would recommend that the student increase the sample sizes or think critically about the theory. Instead, they said they would recommend that the student try to find a reason for the difference between the two groups. That is: they ignored the possibility that the two samples differed from one another simply because they were small samples drawn from the same larger population, and instead assumed that the underlying populations were different. I suspect many of us would make the same mistake. If I sample one lake and find X and sample another lake and find Y, my first instinct would be to wonder what was different about the two lakes. But it’s entirely possible — especially if the sample sizes are small — that the lakes are identical, and it’s just that I didn’t properly characterize the population with my sample.
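
To put a rough number on how often that can happen, here’s a minimal sketch (mine, in Python; the “lakes” and the one-standard-deviation cutoff are illustrative choices, not anything from the book) that repeatedly draws two small samples from the same population:

```python
import random
import statistics

random.seed(2)
n, trials = 5, 10_000  # five measurements per "lake"
big_gaps = 0
for _ in range(trials):
    # Both samples come from the SAME population (mean 0, sd 1).
    lake_a = [random.gauss(0, 1) for _ in range(n)]
    lake_b = [random.gauss(0, 1) for _ in range(n)]
    if abs(statistics.mean(lake_a) - statistics.mean(lake_b)) > 1:
        big_gaps += 1

# Roughly 11% of the time the two sample means differ by more than a
# full population standard deviation, even though the lakes are identical.
print(big_gaps / trials)
```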

To put it succinctly, we “tend to extract more certainty from the data than the data, in fact, contain”. – page 162 (emphasis mine)

I felt like this was such an important thing to consider that I assigned this section of the book for lab meeting one week this summer.

The dangers of getting attached to a single explanation

In the course of our personal and professional lives, we often run into situations that appear puzzling at first blush…Typically, however, within a very short time we come up with an explanation, a hypothesis, or an interpretation of the facts that renders them understandable, coherent, or natural. The same phenomenon is observed in perception. People are very good at detecting patterns and trends even in random data. In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way. (pages 205-206, quote from a talk given by Amos, emphasis mine)**

This is definitely something I’ve worried about: that we can easily start to believe just-so stories because they have a compelling narrative, rather than because they are actually true. This was a good reminder of the importance of having multiple working hypotheses, and of being critical of one’s own evidence and narratives.

Collaborations
One really striking thing is how close and intense the collaboration between Amos and Danny was. It was basically a marriage. There were clearly highs – the book notes how they used to lock themselves in a room with just each other, and all that people outside the room could hear was constant laughter as they worked. They would sit next to each other as they wrote each individual sentence of a manuscript, which is something I can’t imagine doing with a collaborator! Here’s a description from the book:

Their offices were tiny, so they worked in a small seminar room. Amos didn’t know how to type, and Danny didn’t particularly want to, so they sat with notepads. They went over each sentence time and again and wrote, at most, a paragraph or two each day. – page 158

In the end, they could never tell who had come up with which idea.

At least, that was true while they were in the same place. Eventually, they ended up in different places (I’m not sure if they actually moved more than a typical academic, but, given how hard I found moving, how much they moved stood out to me) and their collaboration started to fall apart. Around that time, they were interviewed by someone who was interested in professional “power couples”. During that interview, Amos said:

The credit business is very hard. There is a lot of wear and tear, and the outside world isn’t helpful to collaborations. – page 294

Notably, the person who interviewed them, a Harvard psychiatrist named Miles Shore, began the project because of a disagreement over whether to promote someone at his institution who was clearly doing important work, but all of it in collaboration with another person. Collaborations have clearly become the norm in academia, but trying to partition credit can still be problematic. It’s part of why I’ve been interested in the question of what is signified by last and corresponding authorship of a paper. The book also notes that awards ended up causing additional tension: Amos won a MacArthur genius grant while Danny did not; as it’s put in this New Yorker piece about the book:

When the MacArthur grants are awarded every year, only the most egomaniacal of us read the list and say, “Damn, I lost.” Unless, that is, your best friend wins the prize for work you did entirely together.

But back to the specific subject of Amos and Danny’s collaboration: they had very different personalities – Amos was brash and outgoing, while Danny is much more self-doubting. Because they were able to work together and to respect and trust each other’s ideas and insights, Danny’s doubts and Amos’s confidence balanced each other, leading them to do world-changing science.

Praise, criticism, and regression to the mean
Danny and Amos are (or were, in the case of Amos) both Israeli, and one recurring theme of the book is their time fighting for Israel and/or working with the Israeli army. Early in his career, Danny had worked with the Israeli army to try to change how they selected, assigned, and trained incoming recruits. During that time, he found that the instructors believed that harshly criticizing a pilot who had a bad flight was the best way to improve his performance.*** Their experience was that, if someone had a bad flight and was harshly criticized for it, he usually did better the next time; if he had a good flight and was praised for it, he usually did worse the next time. The higher ups in the military had taken this to mean that harsh criticism caused people to improve and praise caused people to get worse. Danny pointed out that it was really just regression to the mean: someone who had an exceptionally good flight probably wasn’t going to have a second exceptionally good flight right after it, and the same holds for an exceptionally bad flight. The change came from regression to the mean, not from what was said, but it led to a culture where there was lots of criticism and not a lot of praise. It seems to me that this same phenomenon applies pretty broadly. (A little simulation after the quote below makes the point concrete.) To quote the book:

We are exposed to a lifetime schedule in which we are most often rewarded for punishing others, and punished for rewarding. – page 203
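
Here is the promised sketch (mine, in Python; the skill-plus-luck model and the cutoffs are illustrative assumptions, not the army’s data) of how regression to the mean can masquerade as an effect of feedback. Each pilot has a fixed skill, each flight is skill plus luck, and no feedback of any kind is applied:

```python
import random

random.seed(3)

def flight(skill):
    # Performance = stable skill + random luck; feedback plays no role here.
    return skill + random.gauss(0, 1)

skills = [random.gauss(0, 1) for _ in range(100_000)]
first = [flight(s) for s in skills]
second = [flight(s) for s in skills]

# Pilots whose first flight was terrible (would have been criticized)...
bad = [(f1, f2) for f1, f2 in zip(first, second) if f1 < -1.8]
# ...and pilots whose first flight was excellent (would have been praised).
good = [(f1, f2) for f1, f2 in zip(first, second) if f1 > 1.8]

# "Criticized" pilots improve and "praised" pilots decline on average,
# purely because extreme flights are partly luck.
print(sum(f2 - f1 for f1, f2 in bad) / len(bad))    # positive
print(sum(f2 - f1 for f1, f2 in good) / len(good))  # negative
```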

The importance of downtime

The secret to doing good research is always to be a little underemployed. You waste years by not being able to waste hours. – Amos Tversky, page 230

This is actually the reason I got the book, after learning about this quote from Andrew Read. I thought this was going to be a major theme of the book, but it wasn’t. Still, I think this is a nice, pithy way of framing the idea that we need time to step back from our work and let things stew a bit. This is one of the benefits of running for me.

Gut feelings

It was troubling to consider, he began, ‘an organism equipped with an affective and hormonal system not much different from that of the jungle rat being given the ability to destroy every living thing by pushing a few buttons.’ Given the work on human judgment that he and Amos had just finished, he found it further troubling to think that ‘crucial decisions are made, today as thousands of years ago, in terms of the intuitive guesses and preferences of a few men in positions of authority.’ The failure of decision makers to grapple with the inner workings of their own minds, and their desire to indulge in their gut feelings, made it ‘quite likely that the fate of entire societies may be sealed by a series of avoidable mistakes committed by their leaders.’ – page 247, talking about a talk that Danny Kahneman gave

I literally sighed as I finished typing that quote. It applies to so much that is going on today, including decisions related to climate change.

Science communication

No one ever made a decision because of a number. They need a story. – Danny Kahneman, page 250

This is a particularly succinct way of framing much of the advice related to science communication! He then went on to say,

the understanding of numbers is so weak that they don’t communicate anything. Everyone feels that those probabilities are not real – that they are just something on somebody’s mind.

Science & theory

Science is a conversation and you have to compete for the right to be heard. And the competition has its rules. And the rules, oddly enough, are that you are tested on formal theory. – Danny Kahneman, page 287

This relates to the part above about collaboration. As Lewis puts it in the book, “Danny’s interest ended with the psychological insights; Amos was obsessed with the business of using the insights to create a structure. What Amos saw, perhaps more clearly than Danny, was that the only way to force the world to grapple with their insights into human nature was to embed them in a theory. That theory needed to explain and predict behavior better than existing theory, but it also needed to be expressed in symbolic logic.” Danny is also quoted as saying

What made the theory important and what made it viable were completely different – Danny Kahneman, page 287

I think that, because so much of their work had implications for economics, having a formal, mathematical theory was probably particularly important, but the same general sentiment applies in ecology and evolution.

Conclusions
As I said at the beginning, my goal wasn’t to provide a complete book review, but, rather, to focus on a few things that I found particularly interesting and thought might be interesting to readers of the blog. There were lots of other interesting things in the book, too (including descriptions of Danny’s escape from Nazi Europe as a child and of life in Israel in the 60s and 70s). Overall, I really enjoyed it and found it gave me lots of food for thought.

 

* I seem to be unable to write a full book review. I was planning on reviewing Stephen Heard’s excellent book on writing, but haven’t managed to get it written. So, I’ll just say here that I really enjoyed Steve’s book, and bought a second copy for my lab.

** In this section of the book, they focus on historians. Research showed that, if you asked people beforehand to give odds on the possible outcomes of Nixon’s visits to China and Russia, and then, after the trip, asked them to recall the odds they had given to the different possibilities, their recollections were off: they all believed they had been more certain of the eventual outcome than they actually had been.

*** He also found that the stereotypes about what makes someone good in the infantry or in a tank or whatever were not supported by evidence. He was able to find things that made someone likely to succeed in the military, but those things predicted success equally well in every job in the military. This went against the prevailing wisdom, and was hard for some to accept.

24 thoughts on “Some thoughts on The Undoing Project, especially related to science, academia, and mentoring”

  1. Another reason to read this book: Michael Lewis is an utterly compelling writer. One of the best in the non-fiction genre. The storytelling in the “regression to the mean” account is not only entertaining, it cemented the concept firmly in my brain for the first time. Now, if I can only *teach* it competently.

  2. Great stuff Meghan.

    I’m finding that popular books, articles, and blog posts about cognitive biases and the replication crisis are a great source of material for my intro biostats course. Thanks for the additional fodder (though I already ask my students about regression to the mean, and about what they should or shouldn’t infer if two random samples from the same population give different results).

    I’m now wondering if it would be fun sometime for me to post some of my practice biostats exam questions on this material as poll questions on Dynamic Ecology. They’re fun multiple choice questions (at least, I think so!), and I’d be curious to see what fraction of readers get them right.

    As a sports fan, I recommend Lewis’ Moneyball, about Billy Beane’s Oakland A’s using statistics to identify undervalued baseball players. The book is somewhat outdated now, in that every team relies heavily on data analytics, so players are pretty much correctly (and equally) valued by every team, given the available information. I recall that the movie came up in our comment thread on best movies featuring scientists, on the grounds that Beane approached player evaluation like a scientist would have.

    • I think having your exam questions as poll questions would be fun!

      The Undoing Project starts out with how this project relates to Moneyball and sports analytics, though it focuses on basketball instead of baseball.

  3. Have you read “Thinking, Fast and Slow”? It’s Kahneman’s version of things, and also a fantastic read. I was struck by many of the exact same points you list here (wondering if it’s worth reading Lewis’ book). Another point of potential relevance to ecologists, which I think is attributed mostly to other researchers (Kahneman’s book doesn’t focus only on his own work), is the idea that “Bad is stronger than good”. We put far more weight on loss than gain, for example, with a rough rule of thumb being 2:1. Winning $100 is good (let’s say 100 happy points); losing $100 is awful (a loss of 200 happy points). I wonder whether this applies to how ecologists / conservationists view change in communities and ecosystems. If CO2 and warming increase forest productivity in some place, and exotic insect pests reduce it by the same amount, psychologically the negative outweighs the positive, so we tell a doom-and-gloom story anyway (nod to the “need a story” quote!).

    • I haven’t read “Thinking, Fast and Slow”. I really liked The Undoing Project, so it might be worth a read (unless it’s too repetitive). Via twitter, Hao Ye pointed me to this reading list:
      https://jasoncollins.org/economics-and-evolutionary-biology-reading-list/
      and particularly recommended Gigerenzer.

      I also thought it was interesting how people reacted differently when the scenarios “flipped the signs”, as Lewis puts it in his account. Lewis includes various scenarios that Kahneman and Tversky put to people, and it was fun to think about them.

    • I believe you have hit on something there with the ‘doom and gloom’ mentality in ecology, generally (not that there isn’t justification for that attitude, mind you). I was reminded of that with Ehrlich’s recent publication on global ruin. One of the first adult books I read as a child was Ehrlich’s ‘Population Bomb’. It turned me on to a whole genre of doom & gloom ecology at the time, and was responsible for my becoming an ecologist.

      But it has given me pause, because I think the overall failure of ecology and environmentalism to stem the tide of global ruin is rooted in the doom & gloom mentality. The average Joe doesn’t want to hear it and I think just tunes it out. Medicine, on the other hand, which deals with doom & gloom all of the time, never focuses on the doom & gloom. Instead, they sell hope – and look how that has turned out! Overall I have felt ecology and related disciplines desperately need a sales & marketing department…

    • Indeed I just started reading “Thinking, Fast and Slow” after hearing a wildlife ecologist (Mike Mitchell) mention it in a Wildlife Society talk on “reliable inference” in wildlife biology, which as this blog has mentioned before (rightfully so) is somewhat prone to “statistical machismo.” Sounds like I’ll have to check out “The Undoing Project,” too!

  4. @Meghan and Mark:

    Do Danny and Amos and Michael Lewis talk about these cognitive biases more as irrational flaws in our thinking, or as adaptive heuristics that often serve us well but that aren’t infallible? It’s my impression from reading Gelman’s blog that psychologists span a spectrum on this.

    • Kahneman seems balanced in his take on things: these are manifestations of evolved, adaptive behaviours that can lead us astray sometimes. And perhaps the modern world has created lots of “ecological/evolutionary traps” for Homo sapiens. So, is it sometimes, frequently, a lot of the time? Seems open to a dead-end argument of that nature…

    • I wonder if “cognitive bias” is a somewhat misleading term that refers to a special case of a very common and beneficial behavior: namely, the fact that, once we learn (or think we’ve learned) something, we stick to it until the counter evidence is overwhelming.

      I pointed out in Brian’s climate thread that humans are adapted to social groups in which people frequently lie or misrepresent things (intentionally or unintentionally). The built-in mechanism we have to deal with that is to stick with what we know or think we know until the consequences of doing so become so severe that a change is forced.

      And in the special case of scientists and scientific evidence, it happens with reasonable frequency that newer ideas that appear at first glance to be superior matches to the evidence eventually break down under intense scrutiny. It takes years and even decades for science to be thoroughly vetted. So the fact that someone sticks to their original ideas isn’t always bad; sometimes they prove correct later on.

      Last but not least, this seems like another example of situations where the negative outcome – sticking to ideas when they’re wrong – is deemed worse than the positive outcome – sticking to ideas that, after a period of intense scrutiny, turn out to be right after all.

  5. I now recall an interesting link from an old linkfest on how Kahneman himself fell victim to some of the cognitive biases he identifies. This was in the early days of psychology’s “replication crisis”. If memory serves, Kahneman was slow to appreciate the importance of what Andrew Gelman calls “the garden of forking paths” and overweighted the strength of evidence provided by some small-sample studies. We linked to a remarkable public recantation he made recently. Statistical thinking is hard!

  6. Excellent post, Meghan! The authors hit upon what I believe to be perhaps the most fundamentally important aspect of science… and yet, sadly, one that is very often overlooked or ignored.

    “Amos and Danny called this “the law of small numbers”: people believed that small samples would have the same properties that large samples do.”

    They are right about this, but I would go a step further and assert that far too many scientists (often ecologists) fail to understand the relationship of the sample to the population, and what, if anything, we can infer about a population from our samples. There is so often an arbitrary decision made when it comes to sampling regimens, and even when it is not arbitrary, budgets can preclude any ability to implement a sufficient sampling protocol.

    I believe one of the best treatments of the topic is provided in: Elzinga C.L., Salzer D.W., Willoughby J.W. & Gibbs J.P. 2001. Monitoring plant and animal populations. Blackwell Science, Malden, MA, 360 p. The authors provide what I believe to be not only an exhaustive treatment on sampling and estimation of population parameters, but do so in such an eloquent manner that most non-statisticians can fully grasp the concepts. I highly recommend it to anyone doing ecology.

    • Cheers for this, good read.

      It is true that Kahneman’s confidence in social priming looks bad a few years into the replication crisis. His line about how we simply have no option but to believe those priming experiments is probably going to haunt him to his grave. But as I noted above, it’s to his credit that he’s since changed his mind. And the fact that even *the* Dan Kahneman could turn out to be so wrong maybe reinforces some of his points.

      As someone who’s writing a book myself, I find it sobering to see that even a book as great as Kahneman’s is starting to show some cracks just a few years after it was written. It’s just so hard to write something of lasting value (and to be clear, it might well be that large chunks of Thinking, Fast and Slow will continue to hold value for a good while yet).

      • Unfortunately almost every science book will eventually be useless. Sorry, J! 🙂 I took a course on the paleoecology of carbonate reefs in grad school. After reading this blog for four years, I’m glad I sold the book because nothing in it resembles modern ecology.

        The one thing you *can* do that could really last is provide a structure and organization for thinking about the content that can outlast the content itself. Dana’s Mineralogy is in the 23rd edition and almost 170 years old, with its fourth set of authors. That’s something to shoot for!

        Good luck!

  7. Hi Meghan, the advice about science communication needing a story rather than numbers is one that bothers me. I realize it’s the prevailing wisdom but I’m not sure that we should let the problems with human psychology decide what we use as evidence. Stories will almost inevitably be cherry-picked – nobody should be able to be dissuaded from a conviction based on a set of relevant and accurate data because somebody tells them a compelling story. It’s what allows people to be convinced by a senator who brings a snowball onto the floor of the US Senate at the end of February to show that, in fact, the planet isn’t getting warmer. It seems to me that when we accept that story-telling is the best way to get science across to people we agree to communicate science in a way that is, and perhaps this is too extreme, the antithesis of science. But maybe it’s not too extreme…Brian has made the argument that what sets science apart from non-science is that we count things (Brian, I haven’t gone back to re-read what you wrote about this so I may be getting your position wrong). If what sets us apart is that we’re quantitative then our evidence should usually be about numbers.
    And as an aside, Meghan, the example you give of people guessing tails after a series of heads doesn’t seem to me to be a ‘reaching an inference based on a small sample’ problem. If you got a series of 5 heads in a row and then inferred that the next flip would be a head, that would be the small sample problem, wouldn’t it? If you guess tails, it’s more a problem of not believing that independent samples are truly independent (I think).

    Jeff

    • “No one ever made a decision because of a number. They need a story. – Danny Kahneman, page 250”

      I’m not certain the author used the term ‘story’ within the context you’ve presented, Jeff. My take on what he meant was that numbers in and of themselves will not suffice in communicating science, at least most of the time. If 100 men jump out of a plane with no parachutes and they all die, then simply reporting the “number” of dead likely communicates the “story”: Either don’t jump, or if you do, get a parachute.

      But such a simple story communicated by a number is the exception, not the rule. I was fortunate to have also majored in English as an undergraduate, and I had one mentor in particular who sacrificed oodles of time over several years to make me into a good writer, and I’ve been one since. Being a good writer really means one thing at the end of the day: you are able to present a compelling story.

      I’ve seen many people do great science but then fall short of their potential because they were, at best, shabby writers. They were not able to translate the numbers into a compelling story, and because of that, most people tuned out their great science.

  8. Not directly related to the post, but I was interested to see that Dan Kahneman has declared behavioral priming–all of it–“dead”. See here: https://www.edge.org/adversarial-collaboration-daniel-kahneman

    That’s a complete reversal for him–in the times before (and early days of) the replication crisis in social psychology, Dan Kahneman was maybe the world’s most prominent advocate of the ubiquity, strength, and importance of priming.
