More than once in the past I’ve plugged eminent philosopher Bill Wimsatt’s wonderful 1987 book chapter on false models. It’s one of the best things I’ve ever read, and it had a big influence on me. But few of you have clicked through and read it, and I’m guessing that’s not because you’ve all read it already. So I decided to write the blog post equivalent of a “trailer” for Wimsatt’s “movie” (alert: spoilers ahead!)*
Wimsatt’s chapter originally was published in a 1987 book on “neutral” models in biology. So he starts by talking a bit about what a “neutral” model might be. He notes, correctly, that it can’t mean a model that is free of simplifications or biases, since all models make simplifications and therefore have biases. In the rest of the paper, Wimsatt goes on to explain how the fact that all models are simplified–and therefore false–doesn’t prevent models from being useful. In fact, models are useful because they’re false, not despite being false. It’s not that simplifications are unavoidable, so we reluctantly live with them. Simplifications are essential, and we’d want to make them even if we didn’t have to.
Let me repeat that, for empirically-oriented readers who lament “unrealistic” models: models are useful to empiricists BECAUSE they’re false, not despite being false.
Wimsatt then goes further and provides a taxonomy of the many different ways in which a model can be false, and how different false models are useful for different reasons and for different purposes. This is the heart of the chapter. I’ve always found statistician George Box’s famous line about how “all models are false, but some are useful” to be unsatisfyingly question-begging. How are they useful? Wimsatt’s paper provides the answer–or rather answers (plural).
As a teaser, here’s Wimsatt’s list of the ways in which a model can be false. It’s ordered in terms of increasing seriousness (except for 6 and 7):
(1) A model may be of only very local applicability. This is a way of being false only if it is more broadly applied.
(2) A model may be an idealization whose conditions of applicability are never found in nature, (e.g., point masses, the uses of continuous variables for population sizes, etc.), but which has a range of cases to which it may be more or less accurately applied as an approximation.
(3) A model may be incomplete–leaving out 1 or more causally relevant variables. (Here it is assumed that the included variables are causally relevant, and are so in at least roughly the manner described.)
(4) The incompleteness of the model may lead to a misdescription of the interactions of the variables which are included, producing apparent interactions where there are none (“spurious” correlations), or apparent independence where there are interactions–as in the spurious “context independence” produced by biases in reductionist research strategies. Taylor (1985) analyzes the first kind of case for mathematical models in ecology, but most of his conclusions are generalizable to other contexts. (In these cases, it is assumed that the variables identified in the models are at least approximately correctly described.)
(5) A model may give a totally wrong-headed picture of nature. Not only are the interactions wrong, but also a significant number of the entities and/or their properties do not exist.
(6) A closely related case is that in which a model is purely “phenomenological.” That is, it is derived solely to give descriptions and/or predictions of phenomena without making any claims as to whether the variables in the model exist. Examples of this include: the virial equation of state (a Taylor series expansion of the ideal gas law in terms of T or V); automata theory (Turing machines) as a description of neural processing; and linear models as curve-fitting predictors for extrapolating trends.
(7) A model may simply fail to describe or predict the data correctly. This involves just the basic recognition that it is false, and is consistent with any of the preceding states of affairs. But sometimes this may be all that is known.
Wimsatt then goes on to list 12 productive things one can do with false models. Of course, not all of these things are possible with every model; it depends on how the model is false, and on other factors. He suggests that the most productive kinds of falsity listed above often are #2 and #3, though all sorts of falsity except #7 can be useful in the right circumstances. Here’s his list of the 12 productive things one can do with false models:
(1) An oversimplified model may act as a starting point in a series of models of increasing complexity and realism.
(2) A known incorrect but otherwise suggestive model may undercut the too ready acceptance of a preferred hypothesis by suggesting new alternative lines for the explanation of the phenomena.
(3) An incorrect model may suggest new predictive tests or new refinements of an established model, or highlight specific features of it as particularly important.
(4) An incomplete model may be used as a template, which captures larger or otherwise more obvious effects that can then be “factored out” to detect phenomena that would otherwise be masked or be too small to be seen.
(5) A model that is incomplete may be used as a template for estimating the magnitude of parameters that are not included in the model.
(6) An oversimplified model may provide a simpler arena for answering questions about properties of more complex models, which also appear in this simpler case, and answers derived here can sometimes be extended to cover the more complex models.
(7) An incorrect simpler model can be used as a reference standard to evaluate causal claims about the effects of variables left out of it but included in more complete models, or in different competing models to determine how these models fare if these variables are left out.
(8) Two false models may be used to define the extremes of a continuum of cases in which the real case is presumed to lie, but for which the more realistic intermediate models are too complex to analyze or the information available is too incomplete to guide their construction or to determine a choice between them. In defining these extremes, the “limiting” models specify a property of which the real case is supposed to have an intermediate value.
(9) A false model may suggest the form of a phenomenological relationship between the variables (a specific mathematical functional relationship that gives a “best fit” to the data, but is not derived from an underlying mechanical model). This “phenomenological law” gives a way of describing the data, and (through interpolation or extrapolation) making new predictions, but also, because its form is conditioned by an underlying model, may suggest a related mechanical model capable of explaining it.
(10) A family of models of the same phenomenon, each of which makes various false assumptions, has several distinctive uses: (a) One may look for results which are true in all of the models, and therefore presumably independent of different specific assumptions which vary across models. These invariant results (Levins’ (1966) “robust theorems”) are thus more likely trustworthy or “true”. (b) One may similarly determine assumptions that are irrelevant to a given conclusion. (c) Where a result is true in some models and false in others, one may determine which assumptions or conditions a given result depends upon. (See Levins 1966, 1968, and Wimsatt 1980a and chapter 4 for more detailed discussion.)
(11) A model that is incorrect by being incomplete may serve as a limiting case to test the adequacy of new more complex models. (If the model is correct under special conditions, even if these are seldom or never found in nature, it may nonetheless be an adequacy condition or desideratum of newer models that they reduce to it when appropriate limits are taken.)
(12) Where optimization or adaptive design arguments are involved, an evaluation of systems or behaviors which are not found in nature, but which are conceivable alternatives to existing systems, can provide explanations for the features of those systems that are found.
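To make use #4 concrete, here’s a minimal hypothetical sketch (mine, not Wimsatt’s, and the numbers are invented) in Python with numpy: fit a deliberately incomplete model (a straight line) to data generated by a trend plus a small periodic signal, then “factor out” the trend so the residuals reveal the signal the trend was masking.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)

# "True" process: a dominant linear trend plus a small periodic signal.
small_signal = 0.3 * np.sin(2 * np.pi * t)
y = 2.0 * t + 1.0 + small_signal + rng.normal(0, 0.05, t.size)

# Deliberately incomplete model: a straight line. We know it omits the
# periodic term; we fit it anyway to capture (and remove) the big effect.
slope, intercept = np.polyfit(t, y, 1)
residuals = y - (slope * t + intercept)

# With the trend factored out, the residuals expose the small signal:
# they correlate strongly with the sine component.
corr = np.corrcoef(residuals, np.sin(2 * np.pi * t))[0, 1]
print(round(slope, 2), round(corr, 2))
```

The point of the sketch is that the linear model’s falsity (its incompleteness) is exactly what makes it useful here: it serves as a template whose residuals isolate the phenomenon of interest.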
Wimsatt concludes with a historical case study (an important episode in the history of genetics, involving interpretation of Thomas Hunt Morgan’s data on crossing over). The case study shows many different uses of false models in action. I think this case study is a great choice, because it’s about a very empirical episode, concerning the practical interpretation of real data. And while it’s a very detailed case study, I think that detail is indispensable. It’s what moves the paper beyond general, abstract claims to demonstrating the concrete relevance of those claims.
Bill Wimsatt was an engineer before he became a philosopher, and it shows. He’s very practical. He’s interested in understanding how science actually works in the real world, not in some hypothetical idealized world. This chapter is applied philosophy at its best–taking seriously both generally-applicable conceptual principles, and the nitty-gritty details of specific situations.
If you’re looking for something for your reading group to read, you could do a lot worse than having them read Wimsatt 1987. Ask them to think about how the general principles Wimsatt describes apply in the context of their own work, or in the context of ecology more generally. Can you think of ecological examples of every sort of falsity Wimsatt lists, and every use of false models? Do you think ecologists tend to focus on models that are false in certain ways, and on certain uses of false models? If so, is that a problem? And is there usually an appropriate match between the way in which the model is false and the uses to which it is put? What obstacles are there to all ecologists making better use of false models? Cultural obstacles, training obstacles, other obstacles? Etc.
*Footnote: Wimsatt’s chapter isn’t (and isn’t intended to be) a complete list of all the things models are good for. Modeling is useful for other reasons, including reasons that don’t have to do with empirical data. Ecological theoretician Amy Hurford has a more complete list of all the reasons one might do modeling. It’s great, you should check it out.
I agree that false models can result in useful things (whatever “false” means anyway). However, I feel it should also be stressed that progress in science has generally still come from people who found a “better”, more “correct” model (apart from specific cases such as null-models, where a “wrong” model is deliberately chosen to show that it fails).
To be clear, neither you nor Wimsatt suggested that we shouldn’t search for the better model, and you’re of course right to point out that a model doesn’t need to appear “realistic” to an empiricist to be useful (I would argue that a simple model can still have some truth in it). My concern though is that it’s a small step from saying “false models can be useful” to “I don’t care if you say my model is wrong, it can still be useful”. Weren’t you making the same point in your previous post https://dynamicecology.wordpress.com/2012/09/27/being-influential-doesnt-compensate-for-being-wrong/ ?
Yes Florian, absolutely, one often can and should search for better, more correct models. The point of this post was to encourage people to think in a much richer and more sophisticated way about what it means for a model to be false, and about the inferences we can draw from false models. There’s more to model falsehood than just “how far from the truth is this model?” And there’s much more one can do with false models besides “try to modify them so that they’re closer to the truth”, although of course that is an important thing to do.
Agreed – I just wanted to point out that the fact that good things can come out of “false models” shouldn’t be interpreted as a carte blanche for using oversimplified or wrong models. After all, a lot of the “12 uses” seem to go in the direction of arriving at better models as well.
“After all, a lot of the “12 uses” seem to go in the direction of arriving at better models as well.”
Well, yes, if by “better model” you mean something broad like “a more accurate picture of how nature works”. But the key takeaway for me is how the falsehood of your models is essential to helping you develop better ones.
Again, it’s not that falsehood in models is sadly unavoidable, something you want to avoid or minimize whenever possible. Wimsatt’s view of how we use false models to make progress in science is much richer and more sophisticated than the standard “use model to make prediction–test prediction–reject model because you’ve discovered it’s false–replace with less false model–repeat.” Wimsatt’s article is about the many ways false models can help us learn about the world besides making testable predictions. It’s also about how we localize and isolate the ways in which our models are false, somewhat like an engineer might troubleshoot a complex mechanical device.
I have been a close friend of Bill Wimsatt since 1979, but even before that (about 1975) I had set one of my life goals to make his work more accessible to science students. After working with him from 1979 to 1986, I invited him to be one of the founding members of the BioQUEST Curriculum Consortium. I entreat you to read his module on modeling in the BioQUEST Library. I think that you might appreciate it more than most.
Thanks for your advocacy post.
Thanks John, I’ll be interested to look into that Library.