A post-fact world: Part II – what our social scientist colleagues already know about human thoughts and behavior

This is the second post in a three-part series on being a scientist in a post-fact world. The first post explored the history of how we got here. This post focuses on the fact that social scientists have known for a long time that most human decisions are not made based on facts, nor improved by increased understanding. The third will attempt to look at what scientists living in a post-fact world should do.

I am by no means a social scientist. But I most definitely recognize their existence and the validity of their work. Therefore, as an honest scientist, I should first look at what is in the literature and has been rigorously studied before leaping to, and giving primacy to, my own intuitive ideas about how people’s minds work.

If I were to summarize the findings in a few words, it’s that humans don’t base their thinking and behavioral decisions on fact-informed logic. Computers do. Spock does (did?). Academics pretend we do. Scientists arguably do in the aggregate across all scientists, but demonstrably don’t as individuals. And most humans don’t even pretend to be logical-factual in their decision-making processes.

Here is a blitzkrieg summary of the social science literature on human decision making (so short as to be almost insulting to the complexities of the field, but hopefully digestible):

  • Humans are short-term thinkers – economists have formalized this in the notion of the discount rate. Businesses often use a discount rate of about 8%. That means that $100 today is worth about twice as much as $100 given to me 9 years from now, and four times as much as $100 given to me 18 years from now. Note this has nothing to do with inflation. It is a statement about how much humans are willing to defer gratification. Economists treat this as rational behavior. You may or may not see it as rational. But it sure explains a lot about why it is really, really hard to get people to care about graphs of the impacts of climate change that have a title with the year 2080 in it. Problems in 2080 press on me about 0.52%, or roughly 1/200th, as much as the problems I have today. Climate change is going to have to be 200x more impactful on my life than finishing my dissertation, getting tenure, or raising my kids are this year.
  • Humans have diverse, ordered needs – Maslow famously identified a hierarchy of needs. The idea is that humans only worry about higher-level needs after lower-level ones are addressed. The first priorities are physiological (food + warmth), then security (physical safety). Then comes social belonging/love, followed by esteem/prestige. At the top of the list is self-actualization. Now, to be sure, Maslow’s hierarchy is a simplification, and it has been corrected and amended to death, but the original idea remains compelling for capturing some core ideas. Where does keeping the planet protected come in? Where does appreciating a cool butterfly fit? Or appreciating biodiversity? They’re certainly pretty high in the list (i.e. low priority), quite probably only at the tippy-top actualization level. UNLESS they become part of social belonging and self-esteem, which leads immediately to …
  • Humans make a lot of choices as expressions of identity and belonging – A great deal of human behavior is a signal of what group one belongs to more than the outcome of a carefully thought-out evaluation. A great example is climate change. Circa 2000, polls showed that the best demographic predictor of belief in climate change was education level (more education made one more likely to believe in climate change). But over the 2000s, belief in climate change became simply a predictor of political affiliation (in the US, Democrats were more likely to believe in climate change than Republicans). Climate change became a badge of identity and belonging rather than a fact evaluated based on education. In short, social calculus explains much of our thinking and behavior, and explains many things that would otherwise seem irrational.
  • Humans have dual circuits for thinking – Daniel Kahneman won a Nobel prize for his research on this topic. He calls them fast and slow thinking. Fast thinking can do amazingly complicated things, including reading a billboard at high speed. But it is not based on logic or abstract thinking. And it is subject to many biases and fallacies – in short, to many errors. Slow thinking is basically our inner Spock, and the only place that applies logic and facts to arrive at novel conclusions. But the point is that a surprising number of our decisions are taken by the error-prone, fast-thinking part of our brain.
  • Human emotions drive much decision making – This comes as a shock to classical economists, but not to psychologists or advertising executives. This simple idea has led to far more effective campaigns against smoking: television ads and pictures on cigarette packages of disgusting teeth, people recoiling from the smoker, etc., versus the older campaigns that just put a black-and-white print message on cigarette packs saying “Warning: The Surgeon General has determined that cigarette smoking is dangerous to your health.”
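The discounting arithmetic in the first bullet is easy to check for yourself. Here is a minimal sketch (the 8% rate is the post’s own assumption; the 68-year horizon is roughly the distance from the time of writing to 2080):

```python
# Present value of $100 received t years from now, discounted at rate r per year:
# value_today = 100 / (1 + r)^t
def discount_factor(rate, years):
    """Fraction of face value that a future payment is worth today."""
    return 1.0 / (1.0 + rate) ** years

rate = 0.08  # typical business discount rate cited in the post
for years in (9, 18, 68):
    f = discount_factor(rate, years)
    print(f"$100 in {years:2d} years is worth ${100 * f:6.2f} today")
# 9 years gives about $50, 18 years about $25, and the ~2080 horizon about
# 50 cents -- matching the factor-of-2, factor-of-4, and ~1/200th figures.
```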

Before summarizing the implications of this for a post-fact world, let me briefly comment on how this relates to scientists:

  1. Scientists are a very unusual subset of human personalities – A nice paper by Weiler et al. (2012) used the Myers-Briggs personality assessment*. Myers-Briggs identifies four roughly independent axes of personality variation, so each individual’s personality is located in a 4-dimensional space. They compared the personalities of US climate scientists (and other types of scientists) to the general public. What they found is that on one axis (extrovert vs introvert) scientists were no different from the general public (about a 50/50 split in both cases). But on the other 3 axes, scientists were statistically significantly biased towards one end. Most importantly, scientists are more intuiters while the general public is more sensers. These words can be a little misleading, but basically intuiters work with abstract thinking, while sensers work with concrete, sensory-driven thinking. Scientists are also more thinkers (analytical and looking for cause and effect) than feelers (focused on empathy and personal relationships) (this is the weakest distinction of the three significant axes). Finally, scientists are more judgers (prefer linear processes leading to crisp outcomes) than perceivers (happy to follow non-linear processes retaining a cloud of ambiguity). Long before this study, psychologists took the 2^4 (= 16) corners of the 4-D space and identified jobs that people with each personality type were typically found in. One personality, INTJ (introvert, intuiter, thinker, judger), was actually labelled the scientist box. But INTJ is a rare corner – it is about 1.5% of the population (note that by default each corner would hold 6.25% of the population).
And ENTJs (extroverts, but sharing the other 3 personality traits with INTJ and most scientists) are another 4% of the population. So if you read the first part of this post and said “the people I hang out with aren’t this irrational”, it is because you are hanging out with a very weird, outlying 5% of the population who are demonstrably unusually prone to linear thinking about cause-effect and abstract processes.
  2. Scientists work by belief too – But it would be a serious mistake to take this as a badge of exclusionary elitism. All that data above about irrational thought processes applies to scientists too (we are humans before we are scientists). Get really honest with yourself about why you believe in climate change. Is it because you understand the details? Do you know the laws and equations of black-body radiation? It only requires high school algebra and geometry to calculate a rough heat balance of the earth. Have you done it? Can you name the key experiments and observations confirming this theory? Or are you using social processes to decide whom you trust, and believing in climate change because others say so? And what is the source of your knowledge about CO2 being a greenhouse gas – can you explain why? Have you looked at empirical data? Or do you “know” it because somebody you trust told you it’s true? Don’t worry: everybody thinks using the “social computer” of many people communicating with each other. We would be absolutely paralyzed if we had to deeply know everything ourselves – even as scientists we have to trust other people to build our cognitive world view (see an interesting-looking book, The Knowledge Illusion: Why We Never Think Alone). We differ from non-scientists only in our criteria for picking whom to trust, not in avoiding “knowing” by “trusting”. Or to take a different line of argument, do you know of scientists who have pursued an idea when it seemed hopeless? Sometimes they turn out right (and then become famous). One example is the story of Barry Marshall, who discovered and proved that stomach ulcers are caused by a bacterium. Starting only with correlational evidence (most ulcers had the same bacterium), he persisted not only against expert opinion but through a series of experiments that appeared to reject his idea, before later experiments confirmed it.
Lakatos recognized this aspect very clearly – the hard-core assumptions of a research program are chosen by belief and cannot be rejected or accepted. And Kuhn’s idea of scientific revolutions clearly showed that scientists’ belief systems play an important role. So scientists work by beliefs and by trusting others, even within science! What is special about science is that we have an adversarial system in which the rules of logic win over majorities of scientists, not that individual scientists are more Spock-like than the rest of humanity.
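The “rough heat balance of the earth” mentioned above really is a high-school-algebra calculation. As a minimal sketch (using standard textbook values for the solar constant and albedo), set absorbed sunlight equal to emitted black-body radiation and solve for temperature:

```python
# Zero-dimensional energy balance: S(1 - a)/4 = sigma * T^4.
# The factor of 4 spreads the sunlight intercepted by Earth's disk (pi R^2)
# over its whole spherical surface (4 pi R^2).
S = 1361.0        # solar constant at Earth's orbit, W/m^2
albedo = 0.3      # fraction of incoming sunlight reflected back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

T_eff = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print(f"Effective temperature: {T_eff:.0f} K ({T_eff - 273.15:.0f} degrees C)")
# About 255 K (-18 C): roughly 33 K colder than the observed average surface
# temperature of ~288 K. That gap is the greenhouse effect.
```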

Summary

People decide what to believe and how to behave through a complex process in which emotions, social considerations, and error-prone “fast” thinking circuits all play a large role. The logical, analytical, abstract, fact-driven, slow-thinking circuits are used fairly rarely. Scientists tend to use their slow-thinking circuits more than most people (I’m making a few leaps here), so it is dangerous to generalize from how we think. But even scientists tend to primarily think with emotions and social considerations, and to “know” things that would probably better be described as “trusted” or “believed”. And even within the realm of logical thinking, it is not obvious how things far in the future or high up in Maslow’s hierarchy of needs should be weighed. The New Yorker made a similar argument recently in an article entitled “Why Facts Don’t Change Our Minds”.

So what do you think? Am I overstating the irrationality of human behavior? Of scientists? Should every human think completely rationally and be data-driven about whether to smoke or worry about climate change? Whether they should or not, do they think purely rationally? What does this mean for scientists who want the world to change based on facts we discover?

 


*Again, social scientists have moved beyond Myers-Briggs and lean more toward the Big Five personality traits, which have more empirical justification. But Myers-Briggs has been used for decades, many people know how to interpret it, and it has close relationships to the Big Five. Just my opinion, but if you’ve never taken a Myers-Briggs test, it is worth the 20 minutes to see where you come out. You can find lots of decent tests online these days, and it may give you some insights about yourself, but mostly it will give you the insight that not everybody else thinks the same way you do.

11 thoughts on “A post-fact world: Part II – what our social scientist colleagues already know about human thoughts and behavior”

  1. This is a very useful post, though I wish you had linked it more closely to the recent marches in DC… Just for clarification, Daniel Kahneman won the Nobel in Economics for his early work with Tversky, not his recent “fast/slow” ideas.

    • It is true that Kahneman’s fast-slow book came after his Nobel prize, and that is where he most explicitly sets out the fast-slow hypothesis, but the work reported in the book is largely his Nobel-prize-winning work with Tversky on enumerating many forms of cognitive bias and “irrationality”. It would probably be hard to draw a sharp dividing line between the two.

  2. Can’t agree more. Also, scientists are only trained to use “slow thinking” in our own field. When it comes to certain matters elsewhere, I have seen scientists become much more irrational than the general public.

    Muyang

  3. I think you’re basically right, but I also think logical thinking is not a pure personality trait, but much more culturally shaped: more children have the capacity than grow up to be logical thinkers, because they are not given an environment that rewards such thinking. In some subgroups in the US, for instance, there’s a theologically-based objection to “critical thinking” in education, in favor of obedience to authority, and when I was in college, a growing anti-logic sentiment among the left wing in favor of “creativity” and “spirituality” in the service of anti-authoritarianism. Fostering logical thinking in those with the mindset for it requires positive reinforcement when the child exhibits it (and this is inconvenient for parents and teachers in so many circumstances. Easier to squash it.)

    I am a fact-based logical thinker in some areas, by training from a mother who had both an engineer-mind and training, and with degrees in history and biology. Given a different childhood environment, I might be more, or less, fact-based and logical. To my mother, I was insufficiently willing to follow directions (more of the “Let’s see what happens if I…” type), intensely curious about too many things, eager to get doing and try things, capable of intense concentration (ignoring anything else in the room) and determination (told something was impossible, I would ignore advice and try it anyway.) But her influence made me almost as safety-aware as she was–I was taught to look ahead, to play the long game, anticipate problems and have plans B through R ready in advance. I have done some risky things, but carefully, aware of dangers; I’m more flexible than I would otherwise have been; I’m still prone to being more hasty than she was.

    So I look at the tests of innate personality with some skepticism about the “innateness” and awareness of how much opportunity affects the early shaping of a child’s cognitive abilities. Most other girls my age did not have the opportunity to learn what I learned from living with an engineer mother. (I, on the other hand, did not have the opportunity to learn from a socialite.)

  4. Pingback: A post-fact world: Part III -what is a scientist to do? | Dynamic Ecology

  5. I think this Part II is my favorite in the series for a few reasons, and is also relevant to what I most want to say, so I’m going to make these comments here.

    If I had to summarize your overall argument, regarding the causes of anti-science stances among the public–and correct me for sure if this is not right–it would be that these have various forms and roots, depending on the situation and topic, but rarely if ever are they due to the quality of the science (and/or scientists) itself. From that basis we then logically move to discussing possible solutions to the problem, e.g. the “deficit model” is based on more and/or better public education and so forth.

    Ignored or de-emphasized in that argumentation, is the idea that the scientists themselves may be doing something wrong, scientifically. This “wrongness” may take many forms of expression and span the severity spectrum, ranging from flat out wrong inference due to bad methods or data, to slight (but cumulative) exaggerations of findings or of their importance. This is where I take issue, and with climate science in particular, because I’ve seen a lot of very aggravating stuff along those lines in the last eight years, way way too much to detail.

    You note that scientists take a lot on faith, of necessity, and I agree with your observation and reasoning as to why. But this does not mean it’s good practice, only that it’s a convenience and/or necessity (convenience I would argue, ranging into laziness or unearned trust). There is a huge difference between a general trust in science as a “big thing” that generally gets things right over time, versus the findings of individual studies or groups thereof in particular sub-disciplines at particular times and/or places. I pay basically zero attention to arguments along the lines of “it’s science, it works”. It works, yes, when people who know what they’re doing follow solid practice, fully acknowledge uncertainty and don’t embellish or over-sell what they’ve done.

    I note the following observations.

    Climate science is inherently complex because it involves both radiation balance and biogeochemical (C) cycles, each of which is itself multi-component and multi-process. It is inherently difficult because global scale analysis requires global scale data, which is often a very big challenge. The noise in the system is *enormous*, due to the extreme non-equilibrium nature of the kinetic and thermal energy in the system, and since evidence for CO2 effects is mainly a big attribution problem, you have to be able to account for other major radiative forcings (RFs) over time and globally, including especially, primary and secondary aerosol effects (i.e. direct and cloud-induced albedo), but also multi-decadal ocean dynamics and land use changes, and that’s still not a full list. Well, we don’t have good, global aerosol or cloud data going back more than about three decades–that’s not long enough, given global T variations and time scales of GHG increases. There are also disagreements about ocean circulation and thus, heat transport. And more.

    On climate sensitivity–the equilibrium response to a doubling of CO2 equivalent RF. Big issues again. The canonical range has been 1.5 to 4.5 deg C since 1979, and has hardly changed at all. And that is not describing a normal distribution arising from a systematic parameter space exploration–no, no–the global models are instead a haphazard collection with uncertain relation to each other, i.e. uncertain independence. And opaque–we don’t have the code for these models and even if we did, what would we do with it? And there are issues in whether to focus on transient vs equilibrium sensitivity, given that the former is potentially testable with modern empirical data, whereas the latter is not. You’ve got huge, elaborated discussions and arguments going on about theoretically-derived predictions that are not even testable. Read that sentence again.

    Here’s a more concrete example. About 3 months ago I decided I really needed to have a better understanding of the ocean carbon cycle, given how important it is in determining atmospheric CO2. This by itself is an enormous field involving all kinds of ocean circ., carbonate chemistry and related studies going back to the 1950s. One of the 3 or 4 main pillars of anthro climate change science is that atmospheric CO2 has an enormous lifetime–many tens of thousands of years. This is due, putatively, to a cap on ocean absorption arising from what is known as the Revelle buffer effect–as the surface ocean acidifies, diffusion from the atmosphere slows because of a loss of carbonate buffering capacity. In going back and reading Revelle’s work (1950s), and a whole bunch of other related papers since, some major questions arose regarding the spatio-temporal operation of this “Revelle Effect” and its implications for long term (decades to a few centuries) CO2 absorption. I found that different experts had very different ideas on these issues, and could find no paper by any of them that succinctly and clearly described, and defended, just exactly how the slow ocean circulation interacted with this Revelle Effect to limit CO2 absorption. Nevertheless, a few researchers have elevated this conclusion to the level of an absolute plank in the global warming discussion.

    Note that I am *not* arguing here that these people are wrong–I don’t know yet, still exploring this. But I AM arguing, that at the very least, they are a million miles away from anything like clarity and transparency on this very central issue in their argument–with other scientists who really want to understand it, let alone the general public. As with GCMs (worse actually), the code for these ocean C and circ. models is generally unavailable, and even if it were, could we follow it? Not likely. So they’re basically saying “trust us, we know what we’re doing here” and that’s also the reaction I’ve gotten when I’ve tried to raise and explain the issue to some climate and carbon scientists by email and Twitter. For some of them, I could instantly tell that they in fact had no idea what I was getting at–I was way past them. They didn’t even know what the Revelle Effect was exactly.

    Much more to say but out of time.

  6. I think there are some fatal flaws in the “why-facts-don’t-change-minds” research, both in general and with respect to science in particular.

    Perhaps the most important reason that “facts” don’t change our minds so easily is that much of what’s related to us as fact is actually bullshit. Humans are pathological liars, misrepresenters and misunderstanders and that means other humans have to be careful about believing everything that someone else claims is a “fact”.

    In the research referenced in the New Yorker article, researchers first lie to people about something (say, present them with a made-up study), then expect those same people to believe the same researchers when they present them with a new study and say “OK, here’s the real data.” Then they present them with yet another change in the “facts” and expect the subjects to simply accept what they say as “fact”? I think that’s quite a stretch!

    When the New Yorker article claims, “Providing people with accurate information doesn’t seem to help; they simply discount it,” well, duh! Of course they do. The Gormans *say* their information is “accurate”, but why should anyone believe them? How do we know that their “statistics” about guns aren’t manipulated to portray their point of view? The simple answer is this: We don’t.

    With respect to science: what is touted as “scientific fact” in the media regularly later turns out to be wrong on issues both large and small. Some big ones are Peak Oil (twice) and The Population Bomb; lesser more recent ones are the efficacy of early cancer detection and dangers of dietary cholesterol. “Scientists” claim something is fact, then they claim the opposite, x-years later. So why should the general population not assume that today’s facts will be tomorrow’s fallacies and resort to their own decision making process?

    So really, when people reject what scientists claim as fact they often *are* thinking rationally and being data-driven. They’re using the evidence that they’ve accumulated – science is often wrong on issues both big and small – and applying it appropriately to the problem.

    I don’t think the “why-facts-don’t-change-minds”, the “slow-fast thinking”, or the “why are people irrational” research is getting us anywhere because it presumes that we should accept what we’re told at face value or on the basis of authority and almost all humans know implicitly that neither is an assurance of truth.

    • I agree with your points. There was a time some decades ago when it was popular to pontificate on just what it was, exactly, that distinguished human beings from other animals. All kinds of ideas were thrown out, usually in the form of a positive attribute–e.g. humans have a “soul”, humans have a conscience, humans have compassion, etc….this, that and the other. But for a long time now I’ve felt that it’s none of those but rather the ability to consciously be deceptive, including to oneself.

      In addition to the outright wrong claims, there’s the flood of exaggerated and over-stated claims in papers, and related to it, the degree to which the impressions or outright meanings of article titles, abstracts and papers do not deliver the same message. Some skeptical kooks might even be inclined to believe that paper authors and journals are quite aware that very often only the abstract gets read, or even only the title, and so they over-state things therein.

      Aw hell no, scientists and scientific publishers, would never stoop to that–I mean that could harm the reputation of the entire enterprise.

      • “ability to consciously be deceptive, including to oneself.”

        I don’t know if it’s only humans anymore; I saw a video of an elephant that tricks other elephants into doing its work. 🙂

        I definitely agree that there is a lot of, er, “stretching” in papers. One of my colleagues used to describe his interpretations as “permissible” – i.e., the data don’t outright reject them! But it’s also just plain old fun to build a good story out of one’s data. And when you get to what’s in the press, there are multiple levels where things can get distorted, from the scientist to the university press office to the writer to the people who write headlines.

        Of course science often reverses itself, and in science everyone understands that. But there’s a clear difference between “best conclusion” and “certain to be true.” When you go public with “certain” and then have to backtrack, that’s not a good thing.
