One standard line about the importance of prediction and forecasting in science is that you can learn from failed predictions. Figure out why the prediction failed, and you can make better predictions in the future (or else explain why better predictions are impossible).
There certainly are examples of that process working well–think of weather forecasting or hurricane track forecasting. But then consider this case, in which professional forecasters–people with money on the line–have consistently gotten their predictions of future 10-year Treasury yields wrong in the same direction, for twenty years. Highly trained professionals who stand to make a lot of money if they predict future Treasury yields correctly just keep predicting that yields are going to spike in the near future, and they just keep being wrong. They're like stopped clocks that aren't even right twice a day.
As another example, consider how everyone who has tried to forecast future costs of solar power has consistently missed high, year after year after year. “Everyone” includes individuals and groups openly advocating for solar power, opponents of solar power, and neutral government agencies.
Speaking as a scientist who knows a bit about forecasting, these seem to me like some of the worst forecasting failures in history. What can we learn from them? What's going on in such cases?

Apparently it's not "political biases" or "lack of strong incentives to make correct forecasts". Nor can it be "some time series are intrinsically hard to predict". This isn't earthquake forecasting; we're not trying to predict rare events here. And it's not that the linked time series are too short, or chaotic, or just a random sequence of independent observations, or contaminated with massive sampling or measurement error. Heck, solar power costs are pretty well fit by a decreasing exponential function! From a permutation entropy perspective, they're about as intrinsically predictable as a time series can possibly be! My undergrad biostats students could've naively extrapolated a simple exponential function fit to the data, and beaten every professional power industry analyst (see the sketch below).

Nor can it be that "things changed", because the whole reason these forecasts were consistently wrong in the same direction is that things didn't change. People kept predicting that they would change–that Treasury yields would quit declining and spike, that solar power costs would quit dropping so fast. And things kept not changing, over and over and over again. Apparently, forecasting isn't hard only when naive trend extrapolation fails. It's also hard when naive trend extrapolation works!

Finally, it can't be "reluctance to extrapolate short-term trends", because (i) these are long-term trends, and (ii) people usually are happy to extrapolate short-term trends. Indeed, forgetting about mean reversion, and so being too quick to extrapolate from short-term trends, is one of the most common forecasting errors there is! So if people are too quick to extrapolate trends in so many other contexts, how come they're too slow to extrapolate in the very contexts where extrapolation would be a good idea (or at least a clear improvement on whatever forecasting method they're using instead)?
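To make the "naive extrapolation would've won" claim concrete, here's a minimal sketch in Python. The cost numbers are invented for illustration (a roughly exponential decline standing in for real solar price data, not actual figures); the fitting-and-extrapolating recipe is the generic one, not any particular analyst's method. It also computes the series' normalized permutation entropy (Bandt & Pompe 2002), the predictability measure mentioned above:

```python
# A sketch of the naive-extrapolation benchmark, with made-up numbers.
import numpy as np
from math import log, factorial

# Hypothetical costs ($/W) declining ~15%/yr with mild noise -- invented
# for illustration, NOT real solar data.
rng = np.random.default_rng(42)
years = np.arange(2000, 2015)
t = years - years[0]
costs = 5.0 * np.exp(-0.15 * t) * rng.lognormal(0.0, 0.05, years.size)

# "Naive trend extrapolation": fit cost = a * exp(b*t) by ordinary
# least squares on log(cost), then assume the trend just keeps going.
b, log_a = np.polyfit(t, np.log(costs), 1)
future = np.arange(2015, 2021) - years[0]
forecast = np.exp(log_a + b * future)
for year, f in zip(future + years[0], forecast):
    print(f"{year}: forecast cost {f:.2f} $/W")

# Normalized permutation entropy: near 0 when the series' ordinal
# patterns are highly regular (like a steady decline), near 1 when
# all patterns are equally likely (pattern-free).
def permutation_entropy(x, order=3):
    counts = {}
    for i in range(len(x) - order + 1):
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    H = -sum((c / n) * log(c / n) for c in counts.values())
    return H / log(factorial(order))

print(f"Permutation entropy: {permutation_entropy(costs):.2f}")
# Prints a value near 0 -- by this measure, the series is about as
# intrinsically predictable as a time series can be.
```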
So what is going on here, if it's not any of the usual sources of forecasting errors? Here's a tentative hypothesis: nobody will ever adopt naive trend extrapolation as a forecasting method, because they know it has to fail at some point in the future. Anything that can't go on forever will stop, as Herbert Stein said. The trouble is, knowing that an observed trend has to stop or reverse at some undetermined point in the future doesn't mean it can't continue just fine for an awfully long while, and doesn't let you put a confidence interval on how long the trend will continue.* But if you have no idea when the trend will stop, you just stick with your model that keeps incorrectly predicting the trend will stop soon, because you know that at some point your model will be right (and you hope it's soon). Whereas if you junk your model and just switch to naive trend extrapolation, you know that at some point you'll be wrong (and you worry that it's soon). Maybe people prefer knowing that they'll eventually be right to knowing that they'll eventually be wrong.
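A toy simulation makes the "no confidence interval" point vivid. Suppose (purely for illustration; every number here is invented) that the trend has some constant but unknown per-year chance of stopping, known only to lie somewhere between 1% and 20%. Even that generous assumption leaves the stopping date spread across the better part of a century:

```python
# Toy numbers, invented for illustration: the trend has a constant but
# unknown per-year chance of stopping, somewhere between 1% and 20%.
import numpy as np

rng = np.random.default_rng(0)
hazards = rng.uniform(0.01, 0.20, 100_000)  # unknown stopping hazard
stop_year = rng.geometric(hazards)          # year in which the trend stops

lo, mid, hi = np.percentile(stop_year, [2.5, 50, 97.5])
print(f"95% interval for when the trend stops: "
      f"year {lo:.0f} to year {hi:.0f} (median {mid:.0f})")
# Roughly: year 1 to year ~90, median ~7. "It has to stop eventually"
# is compatible with almost any horizon you might care about.
```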
Another hypothesis, not mutually exclusive with the first: naive trend extrapolation doesn't feel like forecasting at all; it feels like an expression of ignorance. "I have no idea why this trend is happening, so I predict it'll continue." Maybe nobody wants to confess that kind of ignorance. Even if your ignorant guess turns out to be right repeatedly, well, that maybe feels like a mere series of lucky guesses.
Here's a more depressing hypothesis, which I'm not sure is really a "hypothesis" so much as just giving up on answering the question: people are just perverse. When attempting to forecast the future in contexts in which they should believe in mean reversion, they insist on spotting long-term trends that aren't really there. When attempting to forecast the future in contexts in which they should believe in long-term trends, they insist on seeing signs of imminent mean reversion that aren't really there. The exercise of coming up with an evolutionary just-so story as to why such perversity would've been adaptive for ancestral hominids in East Africa is left to commenters. 😉
A final hypothesis is that there are no generalizable lessons here: each case of repeated forecasting errors in the same direction happens for its own unique, idiosyncratic reasons.
Looking forward to your comments.
*I’m reminded of repeated failed predictions of near-term societal collapse due to overpopulation. Yes, it’s true that human population growth can’t continue forever–but that fact is totally useless for forecasting human population growth on any time scale relevant to human decision-making.