Thinking, Fast and Slow: A Review

Thinking, Fast and Slow is an extremely well-known book by the Nobel Prize-winning psychologist Daniel Kahneman. In it, Kahneman outlines how the mind’s two mental processing systems relate to different cognitive tendencies and biases. Through his own research and real-world examples, he illustrates the consequences of trusting our own minds.
The book describes a myriad of human fallacies, each paired with its logical counterpart. From ignorance of regression to blindness to risk, Kahneman explains why humans continuously fail to reason logically. Through exploring each fallacy, he prepares readers to appreciate his own Prospect Theory.

MENTAL PROCESSING SYSTEMS

Kahneman’s insights throughout the book are based on “System 1” and “System 2,” the distinct mental operating systems originally proposed by psychologists Keith Stanovich and Richard West. System 1 drives involuntary, automatic processes: biases, intuitions, impulses, assumptions, and feelings. System 2, on the other hand, is responsible for everything conscious and effortful. This includes detailed, specific, and complex processing, like solving a calculus problem. To illustrate the dynamic between the two systems, Kahneman invites us to recall a moment when we refrained from losing our temper at someone particularly infuriating. By choosing not to yell, we overcame the impulse of System 1 and activated System 2 in an exercise of self-control. 
Kahneman explains that the systems have an incredibly efficient “division of labor,” which is important in understanding their roles. By relying almost exclusively on the low-effort System 1, the mind “minimizes effort and optimizes performance,” only recruiting System 2 when necessary. It becomes “mobilized when a question arises for which System 1 does not have an answer,” or sometimes, not at all. As humans, says Kahneman, we naturally “identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to do,” despite System 1 providing the underlying biases and assumptions that shape its behavior.


REGRESSION TO THE MEAN

Through effective examples, Kahneman illustrates the statistical principle of regression to the mean, as well as our tendency to ignore it. Our neglect of statistical reasoning in practical decision-making often leads to unchecked risks, and Kahneman’s explanation makes the phenomenon particularly clear.
Regression to the mean says that an extreme outcome is statistically likely to be followed by one that “regresses,” or lands closer to, the mean. The principle is purely statistical: it implies no causal link between consecutive outcomes, because one also “observe[s] regression when you predict an early event from a later event.” For example, an extreme result in a second trial indicates that the first trial’s result was likely less extreme.
Kahneman uses the outcomes of golfers on two consecutive days to illustrate the concept. If Golfer A performs above average on day 1 and Golfer B performs below average, spectators naturally cite factors like talent, ability, and skill for their respective performances. Bets are likely placed on the “talented” Golfer A, with confidence that his inherent qualities will produce a second, above-average score.  
However, these beliefs fail to account for the critical role of chance; luck, whether good or bad, pushes outcomes above or below the average. If Golfer A had a particularly extraordinary performance on day 1, we can assume he benefited from a significant amount of luck.
Kahneman explains that, based on regression principles, the “more extreme the original score,” whether good or bad, “the more regression we expect.” Consequently, golfers who perform extremely well on day 1 are very unlikely to perform as well or better on day 2. Statistically, their performance will most probably decline toward the average. Kahneman describes regression as the “mathematically inevitable consequence of the fact that luck played a role” in the first outcome.
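The golfer example can be sketched numerically. Below is a minimal simulation, assuming a hypothetical model in which each day’s score is a fixed talent level plus independent daily luck (all numbers invented for illustration); the day-2 scores of the day-1 standouts drift back toward the field average:

```python
import random

random.seed(0)

# Hypothetical model: each round's score is a fixed "talent" level
# plus independent daily luck (golf scores: lower is better).
golfers = [{"talent": random.gauss(72, 2)} for _ in range(10_000)]
for g in golfers:
    g["day1"] = g["talent"] + random.gauss(0, 3)  # luck on day 1
    g["day2"] = g["talent"] + random.gauss(0, 3)  # fresh luck on day 2

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# The 500 best (lowest) day-1 scores: extreme outcomes that combined
# above-average talent with above-average luck.
best = sorted(golfers, key=lambda g: g["day1"])[:500]

print(mean(g["day1"] for g in best))     # well below the field average
print(mean(g["day2"] for g in best))     # regresses back toward it
print(mean(g["day1"] for g in golfers))  # the field average, about 72
```

Because the luck does not repeat, the standout group’s day-2 average sits between its extreme day-1 average and the overall mean: the “mathematically inevitable” drift Kahneman describes, with no causal story required.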
Kahneman acknowledges that humans struggle to accept regression because of the mind’s desire for causation. As the essayist and statistician Nassim Nicholas Taleb describes with the narrative fallacy, the mind naturally creates a cause-and-effect story. Ultimately, we fabricate a reason behind results, failing to recognize the role of statistics and chance in outcomes. This leads us to attribute results to preceding actions rather than chance, promoting both overconfidence in our ability to predict and illogical, unconscious risk-taking.


THE ILLUSION OF VALIDITY

Our System 1, the automatic, intuitive system, produces what Kahneman describes as “confidence as coherence.” By nature, System 1 jumps to conclusions quickly, and System 2 follows up with a story rationalizing those conclusions.
The confidence we have in those conclusions depends on the coherence of the story that Systems 1 and 2 construct. The greater the coherence, the more likely our conclusions become solidified beliefs. However, coherence often occurs with little to no evidence, making our confidence and beliefs baseless. Our tendency to remain certain in the absence of evidence is a cognitive bias known as the illusion of validity.  
In a firsthand example of the pitfalls of confidence as coherence, Kahneman details his service as a psychologist in the Israeli Army. He was tasked with evaluating soldiers for officer training, predicting whether their qualities would produce successful leadership. To do so, soldiers underwent a challenging group exercise designed to reveal qualities like leadership, team orientation, and resilience. After observation, the impressions of each soldier’s qualities were summarized with a numerical score.
According to Kahneman, when “multiple observations of each candidate formed a coherent story, [the psychologists] were completely confident” in their evaluations, believing that their predictions “pointed directly to the future.” He also explained that because the impressions were “generally coherent and clear, … formal predictions were just as definite.” The evaluating team rarely felt doubt or uncertainty; rather, they were convinced of their beliefs.

Despite their confidence, Kahneman and his team realized that their predictions were only slightly better than blind guesses and “were largely useless.” He was alarmed that “the global evidence of [their] previous failure” did not shake their confidence in subsequent evaluations. Even though Kahneman recognized that his assessments were statistically worthless, he still felt that each specific, individual prediction was strong and valid. This, he explained, is the illusion of validity.

The illusion of validity is widespread in the stock market, which Kahneman illustrates with a UC Berkeley study on decision-making in trading. The study analyzed 10,000 traders and 163,000 trades over seven years to identify the outcomes and patterns of buying and selling. Not surprisingly, the traders were confident in their predictions of future prices: they expected the stocks they bought to perform better than the stocks they sold.
However, this belief proved untrue. Although some individuals performed very well and some very poorly, on average, the shares traders sold outperformed the shares they bought by 3.2% per year. In addition, the individuals who traded least performed best, while the most active traders performed much worse. Further research found this was partly attributable to a tendency to sell “winners” and hold on to losers. Because recent winners tend to keep outperforming recent losers in the short run, traders “sell the wrong stocks.”

Similar studies analyze “persistent achievement,” the continuous success that may define a professional investor’s career. These tests aimed to uncover whether individual success was a result of skill or luck: if a year’s differences were due to chance, “the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero.” If skill were a factor, rankings would remain more stable over time. The results confirmed the role of luck: the year-to-year correlation between funds and success is barely greater than zero.
Kahneman argues that, “in highly efficient markets … educated guesses are no more accurate than blind guesses.” To further explore skill versus chance, he computed the correlation coefficients between the rankings of 25 mutual fund advisers for pairs of years (year 1 with year 2, and so on). The results led to a similar conclusion: the average of the correlations was 0.01, virtually zero. Kahneman described the results as “what you would expect from a dice-rolling contest,” rather than a showcase of skill.
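Kahneman’s dice-rolling comparison is easy to reproduce. The sketch below uses invented numbers, not the study’s data: it generates eight years of purely random returns for 25 hypothetical advisers and averages the pairwise year-to-year rank correlations, which land near zero just as in his computation:

```python
import random

random.seed(1)

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# 25 advisers over 8 years: every yearly return is pure chance,
# with no skill term at all.
years = [[random.gauss(0.05, 0.10) for _ in range(25)] for _ in range(8)]

# Average the correlation over every pair of years, as Kahneman did.
pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
avg_corr = sum(pearson(years[i], years[j]) for i, j in pairs) / len(pairs)
print(round(avg_corr, 3))  # near zero
```

With no skill term in the model, any single pair of years can show a small spurious correlation, but the average across all pairs hovers around zero, which is what a “dice-rolling contest” looks like statistically.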
Kahneman acknowledges that traders do rely on high-level, specific skills. However, skill is clearly not the only determinant of success, yet traders remain confident in their choices. Kahneman explains that “[our] own experience of exercising careful judgement [is] far more compelling… than an obscure statistical fact.” We struggle to process statistics that challenge our foundational beliefs, a failure especially evident in the finance industry. Contrary to accepted economic theory, traders continue to believe they can out-predict the market.


LOSS AVERSION 

Another tendency Kahneman explores is loss aversion and its relationship to risk policies. Loss aversion describes the asymmetry between our reactions to perceived “gains” and “losses”: the negative emotional effect of a loss is greater than the positive emotional effect of an equal-sized gain. This aversion to loss exposes significant logical inconsistencies in human decision-making.
Kahneman cites the following widely used two-question scenario to highlight the irrational decision-making that stems from the combination of loss aversion and the automatic System 1.

Decision 1: choose between
	A: a sure gain of $240
	B: 25% chance to gain $1,000 and 75% chance to gain nothing 

Decision 2: choose between
	C: a sure loss of $750
	D: 75% chance to lose $1,000 and 25% chance to lose nothing

Nearly everyone chooses option A and option D, displaying how loss aversion shapes our perceptions of risk. In decision 1, decision-makers are risk-averse; they opt for the sure gain rather than gambling for a larger one. Decision 2, however, produces risk-seeking behavior: the prospect of a sure loss is more emotionally painful, so individuals take on even more risk for the chance of losing nothing at all.
According to Kahneman, the mental evaluation of “sure gain and sure loss is an automatic reaction of System 1,” which reacts before System 2 can logically evaluate the different risk profiles. Such an evaluation would show that the popular combination of A and D (a 25% chance to win $240 and a 75% chance to lose $760) is dominated by the combination of B and C (a 25% chance to win $250 and a 75% chance to lose $750). Effectively, loss aversion illustrates our “tendency to be risk-averse in the domain of gains and risk-seeking in the domain of losses,” exposing our inability to be consistently rational. This pattern of risk-taking is also costly, as it fails to maximize expected value.
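The comparison System 2 fails to make is a short expected-value calculation. In the problem’s standard form, option D is a 75% chance to lose $1,000 and a 25% chance to lose nothing; the sketch below works through why the popular A-and-D pairing is the worse bet:

```python
def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

A = expected_value([(1.00, 240)])               # sure gain of $240
B = expected_value([(0.25, 1000), (0.75, 0)])   # risky gain
C = expected_value([(1.00, -750)])              # sure loss of $750
D = expected_value([(0.75, -1000), (0.25, 0)])  # risky loss

print(A, B, C, D)  # 240.0 250.0 -750.0 -750.0

# Taken together, choosing A and D yields 25%: +$240, 75%: -$760,
# while choosing B and C yields 25%: +$250, 75%: -$750.
A_and_D = expected_value([(0.25, 240), (0.75, -760)])
B_and_C = expected_value([(0.25, 250), (0.75, -750)])
print(A_and_D, B_and_C)  # -510.0 -500.0
```

B-and-C pays at least as much as A-and-D in every state of the world, yet loss aversion steers nearly everyone into the dominated pair.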


PROSPECT THEORY

Kahneman developed Prospect Theory alongside the mathematical psychologist Amos Tversky in 1979. Prospect Theory added the ideas of reference points and loss aversion to the framework of Bernoulli’s utility theory.
Bernoulli’s theory relates “psychological intensity to the physical magnitude of the stimulus” and holds that our choices are based on the psychological values of outcomes (utilities) rather than dollar amounts. Bernoulli also proposed that the diminishing marginal value of wealth explains our aversion to risk.
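Diminishing marginal value can be made concrete with Bernoulli’s own choice of a logarithmic utility function; the dollar figures below are invented for illustration:

```python
import math

# Bernoulli's logarithmic utility: equal ratios of wealth feel equally
# valuable, so each added dollar matters less as wealth grows.
def utility(wealth):
    return math.log10(wealth)

# The same $1 million gain is worth far more psychologically
# to the person who starts with less.
print(round(utility(2_000_000) - utility(1_000_000), 3))   # 0.301
print(round(utility(10_000_000) - utility(9_000_000), 3))  # 0.046

# Diminishing marginal value also implies risk aversion: a sure $5 million
# has higher utility than a 50/50 gamble between $1 million and $9 million,
# even though both have the same expected dollar value.
sure = utility(5_000_000)
gamble = 0.5 * utility(1_000_000) + 0.5 * utility(9_000_000)
print(sure > gamble)  # True
```

The concave curve does the work in both cases: identical dollar changes shrink in psychological value as wealth rises, which is exactly why a certain amount beats an equal-expectation gamble.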

In Prospect Theory, the utility of a gain or loss is determined relative to a reference point, “the earlier state relative to which gains and losses are evaluated,” rather than by the resulting state of wealth alone. System 1 evaluates gains and losses “relative to a neutral reference point,” the “adaptation level.” Outcomes better than the reference point are perceived as gains, and outcomes worse than it as losses. Loss aversion still applies: losses carry more weight than gains.
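A reference-dependent value function of the kind Prospect Theory proposes can be sketched as follows. The curvature and loss-aversion parameters (0.88 and 2.25) come from Tversky and Kahneman’s later cumulative version of the theory, not from this book, so treat the numbers as illustrative:

```python
ALPHA = 0.88    # diminishing sensitivity for both gains and losses
LAMBDA = 2.25   # loss aversion: losses loom over twice as large as gains

def value(outcome, reference=0):
    """Subjective value of an outcome relative to a reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# A $100 loss hurts far more than a $100 gain pleases.
print(round(value(100), 1))   # 57.5
print(round(value(-100), 1))  # -129.5

# Shifting the reference point turns the same $500 outcome
# from a perceived gain into a perceived loss.
print(value(500, reference=0) > 0)     # True: a gain
print(value(500, reference=1000) < 0)  # True: feels like a loss
```

The last two lines capture the core departure from Bernoulli: the value of $500 is not fixed but depends entirely on the state one evaluates it against.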
Kahneman acknowledges that Prospect Theory fails to incorporate anticipation and emotion into shifting mental reference points. The theory assigns the reference point a value of zero, but mental shifts in one’s reference point also affect the perceived value of gains and losses. Kahneman illustrates this with two options:

A: one chance in a million to win $1 million
B: 90% chance to win $1 million and 10% to win nothing 

Winning nothing is a possible outcome in each case, so winning nothing should be the reference point, and Prospect Theory would assign it the same value of zero in both options. However, failing to win $1 million in B is much more disappointing than failing to win in A: the high probability of winning sets up a new mental reference point, and winning nothing feels like a large loss.
Kahneman notes that “richer and more realistic assumptions” do not always produce more successful theories, as the omissions of his own demonstrate. Even so, Prospect Theory conveys how humans make illogical decisions “guided by the immediate emotional impact of gains and losses, not by long-term prospects of wealth and global utility,” undermining the rational-agent assumption. By adding loss aversion and reference points to utility theory, it yields more accurate predictions of observed behavior.