Book review: Superforecasting by Philip Tetlock and Dan Gardner

I recently read Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner. Here’s my review.

tl;dr: It’s good, and will get you thinking about how its conclusions apply to your own scientific work.

Philip Tetlock studies decision-making by individuals and organizations. His co-author Dan Gardner is a noted journalist and popular science writer. The book is written in the first person, as if Tetlock were the sole author, so I assume Gardner was brought on board to polish the writing.

Superforecasting is a popular account of the results of Tetlock’s decades of work organizing forecasting tournaments. Hundreds of participants were asked to predict the answers to hundreds of questions about future events, mostly geopolitical, with time horizons ranging from a few weeks to a few years. Sample questions include “Will the number of registered Syrian refugees reported to the UN Refugee Agency as of April 1, 2014 be under 2.6 million?”, posed early in Jan. 2014, and “Will there be an attack carried out by Islamic militants in [list of several European countries] between 21 Jan. and 31 Mar. 2015?”, posed shortly after the Charlie Hebdo attack on 7 Jan. 2015. Participants could come up with predictions using whatever methods and information they wanted, and were free to update their predictions as often as they wished (e.g., as they discovered new information). In some tournaments they were assigned to work in teams. In one tournament, the competitors included expert US intelligence analysts whose job it is to predict geopolitical events.

Tetlock studied how accurate participants’ predictions were as a function of all sorts of factors: how often and how much participants changed their predictions, how much news they consumed and from what sources, their education level, and on and on. The three take-home messages are:

  • Most forecasters, including US intelligence analysts, are no better than dart-throwing chimps.
  • A small minority of forecasters, termed “superforecasters”, do much better than one would expect by chance. (aside: Tetlock is well aware of the statistical issues here and discusses them at length. He convinced me that the superforecasters really do have considerable forecasting skill. They’re not analogous to the lucky winners of a lottery or a coin-flipping contest.)
  • Superforecasters’ methods run to type. There’s no foolproof recipe for making good forecasts, but there are strategies that anyone can follow if they’re prepared to put in the effort. For instance, one generally useful strategy is “Fermi-izing”: break the question down into smaller sub-questions to which you can more easily attach probabilities (a toy sketch of this follows the list). Much of the book is taken up with elaboration of these strategies.
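
To make the “Fermi-izing” idea a bit more concrete, here’s a toy sketch in Python. The decomposition and the numbers are my own invention, not an example from the book; the point is just that you estimate pieces you can reason about and then combine them, rather than conjuring a single number from thin air.

```python
# Toy "Fermi-izing" of a forecast question (made-up numbers, not from the book):
# "Will country X conduct a missile test in the next three months?"
# Break it into sub-questions that are easier to estimate, then combine the estimates.

p_political_motive = 0.6        # leadership wants to signal strength this quarter
p_technical_readiness = 0.7     # a launch-ready system exists
p_no_external_deterrence = 0.5  # diplomacy/sanctions fail to delay a test

# If all three sub-events are needed and are treated as roughly independent,
# the combined estimate is just the product of the pieces.
p_test = p_political_motive * p_technical_readiness * p_no_external_deterrence
print(f"Rough forecast: {p_test:.0%} chance of a test in the window")  # 21%
```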

It’s a good book. It’s well-written and clear, and does a nice job of balancing general points with illustrative examples. I really liked how thoughtfully Tetlock discusses all the background issues that one has to think about carefully before one can study this topic. For instance, what sort of questions should be asked? The questions can’t be too easy or too hard. Anyone can predict whether the sun will rise tomorrow, but no one can predict who will win the US Presidential election a century from now. And how do you put numbers on predictive success? I also enjoyed how Tetlock makes some of the superforecasters into the heroes of the book. He emphasizes their diversity–how they’re just regular people from all backgrounds and walks of life, who share the virtues of being intelligent, curious, up for a challenge, and intellectually humble.

Superforecasting is an interesting companion piece to Nate Silver’s The Signal and the Noise, which I reviewed here. They draw many of the same lessons, especially:

  • Make lots of forecasts and check whether they were correct.
  • A good way to make forecasts is to consider what happened in relevantly similar historical cases.
  • Expressing your forecasts as probabilities is helpful, even if those probabilities are just guesstimates that don’t describe any well-defined data-generating process.
  • People with strong subjective priors suck at forecasting. Good forecasters are “foxes”, not “hedgehogs”.

The main differences between the books are that Silver focuses on a much wider range of questions (everything from forecasting the weather, to elections, to earthquakes), and that Silver focuses mostly (not entirely) on cases in which one can base forecasts on statistical or mathematical models as opposed to more informal quantitative reasoning.

The biggest gap in the book, for me, was the lack of data showing that forecasters do sometimes improve. Showing that superforecasters tend to operate in a certain way, a way in which anyone could in principle operate, is not the same as showing that others actually do start operating that way. Put another way: what does it take to turn somebody who’s not currently a superforecaster into one? Knowing what superforecasting involves is not the same thing as knowing how to turn people into superforecasters, whether by training them or by incentivizing them, just as I can’t turn the students who struggle in my classes into non-struggling students simply by telling them about the study habits of the non-struggling students.

As a scientist, I also found myself wanting to see plots of data from the forecasting tournaments (e.g., the distribution of participants’ Brier scores). But that’s probably too much to expect for a popular book.
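
For reference, the Brier score mentioned above is easy to compute. Below is a minimal sketch of the common binary, 0-to-1 form, where 0 is a perfect score and always saying “50%” earns 0.25; I believe Tetlock grades forecasters with Brier’s original formulation, which runs from 0 to 2, but the idea is the same.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and what actually happened.

    forecasts: probabilities in [0, 1]; outcomes: 1 if the event occurred, else 0.
    Lower is better: 0 is a perfect score, 0.25 is what always saying 50% earns.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who leaned the right way on three questions:
print(brier_score([0.8, 0.3, 0.9], [1, 0, 1]))  # ~0.047
```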

A few other random thoughts:

  • In a scientific context, I share Tetlock’s frustration with debates that seem to roll on without resolution or even discernible progress because the key questions and terms are too vaguely defined (but see). At best, opposing sides talk past one another, sometimes without even realizing they’re doing so. See here for an ecological example. At worst, everybody has plenty of wiggle room to cherry-pick definitions and evidence however they want, so as to never have to give up on their pet hypothesis. See here and here for an ecological example.
  • Unfortunately, it’s not clear how to create the conditions under which faster progress is possible. For instance, at one point near the end Tetlock calls for more “adversarial collaborations” between intellectual opponents. Aided by third parties they both trust, they should agree on precise, testable predictions that would decide between their opposing views, and then collect the data testing those predictions. To which: yeah that’d be awesome, including in science. But for reasons Tetlock himself lays out it’s pretty much never gonna happen. See here and here for discussion.
  • Near the end Tetlock has a very interesting discussion of whether his results are unimportant because the questions tournament participants had to forecast aren’t ultimately the ones we care about. For instance, we don’t really care if North Korea will conduct a missile test in the next three months. What we really care about is “how it will all turn out.” Will North Korea eventually launch a nuclear strike? Or invade South Korea? What can be done to prevent such outcomes, and with what consequences? Tetlock argues that answering small, tractable questions helps you build up a better picture of the answers to the big, intractable questions you ultimately care about. I found myself thinking that this is how a lot of science works–or is supposed to work but perhaps doesn’t actually work. For instance, how good are we at operationalizing vague verbal concepts in ecology?
  • I really liked a remark from one of the superforecasters, poker pro Annie Duke, about how forecasting requires intellectual humility but not self-doubt. You can think highly of yourself and your abilities while still remaining humble about your own or anyone’s ability to understand and predict our massively-complicated world. This strikes me as a good attitude for any scientist to have.

4 thoughts on “Book review: Superforecasting by Philip Tetlock and Dan Gardner”

  1. Sounds like an interesting book, but more along the lines of The Wisdom of Crowds than of Nate Silver’s book.

    I think a lot of people make inferences or predictions about the future that shouldn’t be called “forecasts” – a term which IMO should be reserved for predictions that will be and/or have been tested and revised to improve accuracy.

    To make accurate forecasts, one must be able to a) recognize which variables are relevant and b) weight them appropriately. IMO most “predictions” don’t do either (a) or (b).

    IMO most predictions about foreign policy rely on a few anecdotal observations and a just-so story built around them. There is no real effort to establish all potentially relevant variables, let alone figure out their relative importance.

    A major issue w/ almost all world disaster scenarios is that their proponents frequently explicitly reject the idea of including future technological change, making their predictions useless from the beginning.

    • “I think a lot of people make inferences or predictions about the future that shouldn’t be called “forecasts” – a term which IMO should be reserved for predictions that will be and/or have been tested and revised to improve accuracy.”

      That’s what a lot of the book is about.

      “IMO most predictions about foreign policy rely on a few anecdotal observations and a just-so story built around them. There is no real effort to establish all potentially relevant variables, let alone figure out their relative importance.”

      Read the book, I think you’d like it. It is indeed the case that predictions about geopolitical events based on just-so stories are rubbish–but it’s also possible to do substantially better.

      • Yes, I’d like to read it – or rather listen to it if possible.

        Have you read The Wisdom of Crowds? In that book the author outlines methods of improving predictions for non-quantitative phenomena. It would be interesting to reread Wisdom of Crowds after reading this book and see where their methods overlap.

      • I haven’t read it, but would be interested in doing so. I have read a bit of research on the conditions under which prediction markets work well and the conditions under which they don’t. And I’ve read the old classic Extraordinary Popular Delusions and the Madness of Crowds. Silver talks about prediction markets a bit in his book, IIRC. And Tetlock incorporated a prediction market into one of his forecasting tournaments; the superforecasters beat it.
