By Daniel Kahneman
Daniel Kahneman won the Nobel Prize in economics for his foundational work on how people make decisions. In this book, he surveys his research on the psychology of decision-making, laying out the principles clearly enough that a casual reader can understand how they work. (Kahneman's work is referenced heavily in Michael Lewis' new book The Undoing Project.)
To help us understand decision-making, Kahneman distinguishes "fast" and "slow" thinking by describing "System 1" and "System 2" (not actually distinct systems, but analytic categories). "System 1" is fast, intuitive, and automatic, while "System 2" is slow, deliberate, and controlled (p.13). "System 2" is a "lazy controller" (Ch.3), so most of the time we make decisions with "System 1." And it turns out that "System 1" is not very good at math (which makes sense, since we as a species have not had math for very long). It relies instead on heuristics, which produce systematic biases and leave it vulnerable to priming, in which framing a problem a particular way nudges us toward a particular decision.
He illustrates these points with several studies, typically experimental studies in which he and his collaborators ask people to solve problems like this one:
A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost? (p.44)

(The answer is not "ten cents.")
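Work the algebra through: if the ball costs x, the bat costs x + $1.00, so x + (x + $1.00) = $1.10, which gives 2x = $0.10 and x = five cents. The intuitive "ten cents" would make the total $1.20. "System 1" volunteers the wrong answer instantly; checking it is exactly the kind of effort the lazy "System 2" tends to skip.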
The result, Kahneman contends, is that we often think we are making reasoned decisions when they are actually suboptimal. For instance, stockbrokers tend to overestimate their performance, which according to Kahneman is no better than random.
It's a good book, and Kahneman explains the different concepts well. But I can pinpoint the page where I realized that I just didn't like him.
It's p.244, at the end of Ch.22, in which he discusses his collaboration with Gary Klein. Klein studies how experts such as firefighters make intuitive decisions, and people in his school "adamantly reject the focus on biases in the heuristics and biases approach. They criticize this model as overly concerned with failures and driven by artificial experiments rather than the study of real people doing things that matter" (p.235). Kahneman (decently) reached out to Klein to "join in an effort to map the boundary that separates the marvels of intuition from its flaws" (p.235). The two disentangled one important point of disagreement:
we had different experts in mind. Klein had spent much time with fireground commanders, clinical nurses, and other professionals who have real expertise. I had spent more time thinking about clinicians, stock pickers, and political scientists trying to make unsupportable long-term forecasts. Not surprisingly, his default attitude was trust and respect; mine was skepticism. He was more willing to trust experts who claim an intuition because, as he told me, true experts know the limits of their knowledge. I argued that there are many pseudo-experts who have no idea that they do not know what they are doing (the illusion of validity), and that as a general proposition subjective confidence is commonly too high and often uninformative. (p.239, my emphasis)

The two conclude that, to develop true intuitive expertise rather than the illusion of validity, experts need:
- "an environment that is sufficiently regular to be predictable"
- "an opportunity to learn those regularities through prolonged practice" (p.240)
Rapid feedback allows people to develop intuitive skills: anesthesiologists get short-term feedback in the emergency room, while psychotherapists and financial experts receive only slow, long-term feedback (p.240). Kahneman and Klein thus concluded that some experts really could develop intuition, and they delimited the conditions under which this could happen.
Kahneman concludes by reaffirming something he mentioned in passing on p.239:
we also found that our early differences were more than an intellectual disagreement. We had different attitudes, emotions, and tastes, and those changed remarkably little over the years. This is most obvious in the facts that we find amusing and interesting. Klein still winces when the word bias is mentioned, and he still enjoys stories in which algorithms or formal procedures lead to obviously absurd decisions. I tend to view the occasional failures of algorithms as opportunities to improve them. On the other hand, I find more pleasure than Klein does in the comeuppance of arrogant experts who claim intuitive powers in zero-validity situations. (p.244)
Put me firmly in the Klein camp—if the occasional failures of algorithms are opportunities to improve them, why not see the failures of experts as an opportunity to improve them? But Kahneman seems much more interested in shaming experts. Earlier in the book, he recounts telling investment advisors that, based on his analysis, the correlation between their picks and outcomes was zero:
When we were done, one of the executives I had dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, "I have done very well for the firm and no one can take that away from me." I smiled and said nothing. But I thought, "Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?" (p.216)
Telling a man that his life is a lie and his expertise is meaningless? NBD, just a bit of fun.
This incident gives a new perspective on a distinction that Kahneman discusses in Ch.25. He notes that economists and psychologists "seemed to be studying different species," denoted "Econs" and "Humans." Economists characterize Econs as rational and selfish; psychologists characterize Humans as neither fully rational nor fully selfish (p.269). Kahneman makes the case for Humans throughout the rest of the book. But I think he would have profited by talking to people in other disciplines—such as sociology or anthropology—who could shed light on other aspects of human behavior. For instance, in Ch.32, he discusses how framing risk leads to irrational decisions. One example is a survey of parents:
The respondents were told to imagine that they used an insecticide where the risk of inhalation and child poisoning was 15 per 10,000 bottles. A less expensive insecticide was available, for which the risk rose from 15 to 16 per 10,000 bottles. The parents were asked for the discount that would induce them to switch to the less expensive (and less safe) product. More than two thirds of the parents in the survey responded that they would not purchase the new product at any price! ...
Anyone can understand and sympathize with the reluctance of parents to trade even a minute increase of risk to their child for money. It is worth noting, however, that this attitude is incoherent and potentially damaging to the safety of those we wish to protect. ... The taboo tradeoff against accepting any increase in risk is not an efficient way to use the safety budget. In fact, the resistance may be motivated by a selfish fear of regret more than a wish to optimize the child's safety. (pp.350-351)

A sociologist or anthropologist might counter that (a) people say things in surveys that they don't actually do, so it's unclear whether people actually make decisions this way (why not look at actual market data to increase ecological validity?), and (b) self-image and social standing are also goods. People find some risks more socially acceptable than others, and the social aspects can outweigh the others. To pick one example, many would agree with Emiliano Zapata that "It is better to die on your feet than to live on your knees," a claim that seems to have no place in Kahneman's discussion of rational risk.
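(For the record, the arithmetic behind Kahneman's "safety budget" point: moving from 15 to 16 poisonings per 10,000 bottles is an added risk of 1 in 10,000, or 0.0001. A parent who refuses any discount at all is implicitly pricing that marginal risk as infinite, when the money saved could, in principle, buy a bigger risk reduction somewhere else. Fine as far as it goes; my objection is to what the arithmetic leaves out.)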
I'll end this review with one more anecdote. Kahneman recounts "a family debate about moving from California to Princeton, in which my wife claimed that people are happier in California than on the East Coast. I argued that climate is demonstrably not an important determinant of well-being ... and tried in vain to convince my wife that her intuitions about the happiness of Californians were an error of affective forecasting" (pp.402-403). So he tested the proposition, as part of a funded study of the effects of global warming, by examining whether Californians were actually happier. "As we analyzed the data, it became obvious that I had won the family argument" (p.403). This validated his decision to accept the position at Princeton.
Wikipedia's page on Kahneman tells us: "Kahneman is married to the award-winning cognitive psychologist Anne Treisman. They live part-time in Berkeley, California."