Summary:

Annie Duke's 'Thinking in Bets' (TiB) is a book about making better decisions in the face of uncertainty, purportedly drawing on the author's experiences as a champion poker player. Though an entertaining read, full of anecdotes from the world of sports and poker, TiB is disappointingly short on novel insights into decision making, and often feels like a regurgitation of ideas from the well-known cognitive bias literature.

TiB's core focus is on redressing the cognitive biases that plague decisions. In doing so, it entirely ignores positive improvements to the decision making process (for example, how to weigh conflicting parameters, or how to go about gathering data for a decision), particularly in the face of incomplete or noisy information. As a result, TiB does not offer any new ideas for readers familiar with the LessWrong/Rationalist community ethos. Several ideas described in high-level terms in TiB would also have benefited from a fuller acknowledgment and discussion of their Bayesian roots.

That said, I found the motif of the 'Decision Loop' in the book a helpful tool to diagnose shortcomings in my decision making process, and to classify the innumerable cognitive biases affecting decisions.

Decision Loop: Beliefs -> Decisions -> Outcomes -> Updated beliefs

At the heart of 'Thinking in Bets' is the idea of a Decision Loop, with striking parallels to the OODA (Observe-Orient-Decide-Act) Loop, describing how we make decisions based on our beliefs, which are in turn updated using the outcomes of previous decisions.

The Decision Loop outlines the life-cycle of a decision. Most decisions start from beliefs we hold – both about the state of the world, and about the effects of our actions on the world. These beliefs influence our decisions to act on the world in one way or another. However, the outcomes of our actions are influenced by luck, in addition to the skillful action we took. Outcomes, in other words, are a signal with some noise embedded. To learn from outcomes we need to first separate the signal from the noise, and then update our priors in light of this signal. After this updating, we're ready for our next round of decisions. 'Thinking in Bets' discusses what can go wrong at each step of this cycle.
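To make the loop concrete, here's a minimal sketch of one pass through it, using a Beta-Bernoulli model. This is my framing, not Duke's, and the numbers are invented for illustration:

```python
import random

# One pass through the Decision Loop, modeled as a Beta-Bernoulli update.
# Belief: "this strategy wins with probability p", held as a Beta(wins, losses) prior.
wins, losses = 3, 2              # prior belief: ~60% win rate, held loosely

# Decision: act if the current point estimate clears a threshold.
p_estimate = wins / (wins + losses)
act = p_estimate > 0.5

# Outcome: skill (the hidden true win rate) plus luck (the random draw).
TRUE_P = 0.55
if act:
    won = random.random() < TRUE_P
    # Updated belief: fold the noisy outcome back into the prior.
    wins, losses = (wins + 1, losses) if won else (wins, losses + 1)

print(f"updated estimate: {wins / (wins + losses):.2f}")
```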

Errors in belief formation

We form beliefs without adequate vetting Duke argues that we are gullible when forming beliefs, and do not vet statements before believing them. She presents as evidence psychology studies in which subjects default to treating statements as true. To be honest, I'm not entirely convinced by Duke's reasoning for this claim. Duke speculates that there's an evolutionary advantage to forming beliefs faster and less rigorously. Even ignoring the general 'just-so' nature of evo-psych arguments like this, if we were all this gullible, we wouldn't last long out there. Even the studies cited seem to have participants evaluating statements with nothing at stake. I would expect most people around me to be a lot more discerning in their beliefs when it comes to making real-world decisions.

We choose what we want to believe: Confirmation bias, Cognitive dissonance and ideological bias New information always filters through the prism of our existing biases and higher-order beliefs, and often ends up confirming what we already believe. We are often unable to process information that contradicts strongly held views of the world, and instead choose to ignore it. In effect, our belief formation is highly inaccurate, and reflects our biases and prejudices. In Bayesian language, our prior might be a bit too strong to allow new information to meaningfully update our beliefs.
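A toy illustration of that last point: with a sufficiently entrenched prior, even strongly contradictory evidence barely moves the posterior. The Beta-Bernoulli setup and the numbers below are purely illustrative:

```python
# How an overly strong prior swallows new evidence (Beta-Bernoulli, illustrative numbers).
def posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta(prior_a, prior_b) belief after observing the data."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

evidence = (2, 8)  # the new data suggest a ~20% success rate

print(posterior_mean(5, 5, *evidence))      # loosely held 50% belief -> 0.35: updated a lot
print(posterior_mean(500, 500, *evidence))  # entrenched 50% belief -> ~0.497: barely moved
```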

Errors in weighing beliefs

Present bias We care more about our present selves (which want to binge a series instead of sleeping) than our future selves (which have an early meeting the next day). This makes our decisions sub-optimal for our future selves.

Recency bias More recent events get higher weights when we make decisions. Duke gives an example from poker: winning $1k early in the night and losing $900 towards the end feels a lot worse than losing $900 first and winning $1k back towards the end, even though the net result (+$100) is identical. Emotional over-reactions (to setbacks in life, work, and poker) are all examples of recency bias. All of these situations lead to decisions that are 'rash', i.e. overly sensitive to the immediate present.

Errors in interpreting outcomes: Credit for me, but not for thee

Most outcomes that result from our decisions are a combination of skill and luck (good or bad). Learning from outcomes requires us to split this skill component from the luck component, a notoriously difficult task.

In general we give ourselves too much credit for positive outcomes, and too little blame for negative outcomes, a result of a natural bias for self-preservation. When it comes to others this dynamic flips: we assign too much blame to others and too little credit.

Unfortunately, Duke does not offer much to help us perform this attribution correctly. Her only suggestion is to treat the skill-luck split as a bet in itself, and to ask ourselves whether we think the outcome would replicate consistently.
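For what it's worth, the replication question can be made concrete with a quick simulation. This is my sketch, not Duke's method, and the win rates are invented:

```python
import random

# "Would this outcome replicate?" as a Monte Carlo gut-check.
# Suppose you won 8 of 10 sessions and are tempted to credit it all to skill. If your
# honest skill estimate is a 55% session win rate, how often does skill alone do that well?
def p_at_least(k, skill=0.55, sessions=10, trials=100_000):
    hits = sum(
        sum(random.random() < skill for _ in range(sessions)) >= k
        for _ in range(trials)
    )
    return hits / trials

print(f"P(>= 8 wins from skill alone): {p_at_least(8):.3f}")  # ~0.10, so luck likely helped
```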

Errors in updating beliefs given an outcome

Hindsight bias and Outcome bias Hindsight bias makes realized outcomes, however remote beforehand, seem predictable in retrospect. Outcome bias makes us evaluate decisions based on the actually realized outcomes, rather than the statistically expected outcomes. This makes lucky long-shots seem like good decisions, and near-miss sure-shots seem like poor decisions.
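The antidote implicit here is to score the decision on its expected value rather than its realized payoff. A one-line illustration with made-up numbers:

```python
# Outcome bias in one line of arithmetic: judge the decision by its expected value,
# not by the realized result. Illustrative numbers, not from the book.
p_win, win_payoff, loss_payoff = 0.90, 100, -100
ev = p_win * win_payoff + (1 - p_win) * loss_payoff
print(ev)  # +80: taking this bet was a good decision, even on the nights it loses
```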

Confirmation bias, Cognitive dissonance and ideological bias again The human mind is more interested in feeling good than in being right. Similar to how we form beliefs in the first place, our observation of outcomes is also colored by our higher-level beliefs (such as political or religious affiliation).

All or Nothing error For some reason, Duke thinks that we suck at assigning fractional probabilities to outcomes. It's all or nothing for us every time: things are either 100% luck or 0% luck, 100% skill or 0% skill, with nothing in between. Our lives are either 100% rotten or 100% great, without any intervening shades of gray. As a result, we're unable to learn accurately from an outcome, and updates to our beliefs remain persistently biased one way or the other.

TiB ignores decision making in the face of incomplete information

My biggest complaint about the book is the near-complete cop out on the promise made in its subtitle, 'Making smarter decisions when you don't have all the facts'. I expected a discussion of how to evaluate whether the available facts are sufficient and, if not, how to go about filling the gaps. Nothing of the sort was forthcoming.

Chapter Summaries

Chapter 1: Life is poker, not chess

  • Outcomes of decisions in life involve both skill and luck (unlike chess, which in theory has well-defined outcomes).
  • Making decisions requires us to learn from the uncertain outcomes of those decisions, and to make decisions in the face of incomplete information.

Chapter 2: Belief formation and updating.

  • Our subjective beliefs are inputs into decisions.
  • However, these beliefs are not rigorously vetted when formed.
  • Strongly held beliefs are not updated appropriately.
  • Cognitive ability does not improve our ability to vet and update our beliefs.
  • Solution: Reconfigure our beliefs into probability distributions (see the sketch below).
    • In the language of Bayesian statistics, beliefs are priors. Use a flatter prior to capture uncertainty, and a spikier prior where you're more certain.
    • Priors incorporate both the mean values and the variance around them.
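As a toy illustration of flat vs. spiky priors (my example, not the book's; the Beta distributions are assumptions for illustration):

```python
from math import sqrt

# Flat vs. spiky priors: same mean, very different certainty.
def beta_stats(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

print(beta_stats(2, 2))    # flat:  mean 0.50, sd ~0.22 -- "could be anything"
print(beta_stats(50, 50))  # spiky: mean 0.50, sd ~0.05 -- "quite sure it's near 50%"
```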

Chapter 3: Updating beliefs in the face of new information

  • Ideally, we should learn from the outcomes of past bets/decisions to re-calibrate our priors for the next decision.
  • But we fail in this process:
    • Mis-attribution of luck and skill.
    • Self-serving bias
    • Don't give credit to others
  • Solution: Consider the decision to attribute outcomes between skill and luck as a bet in itself. Would you expect the results to replicate?

Chapters 4 and 5: Decision making and evaluation in groups

  • Groups can improve decision quality if they (1) value accuracy, (2) hold members accountable for accuracy, and (3) incorporate a diversity of ideas.
  • Confirmatory drift arises from natural homophily and confirmation bias. Examples: judges and academia.
  • Principles for structuring a group (Duke draws on Merton's 'CUDOS' norms)
    • Communal data sharing
    • Impartial evaluation
    • Disinterested evaluation
    • Organized skepticism

Chapter 5 (continued): Questioning decisions with the general public

  • Express uncertainty rather than conviction: "I could be wrong", "I'm not sure".
  • Start with a 'yes': 'Yes, I agree with you that X is likely. But in Y case…'
  • Confirm that the other party actually wants their decision evaluated. Don't be a sea lion (badgering people with questions they never asked for).
  • Validate their struggles and be future-facing in evaluations: "How can we avoid this in the future?"

Chapter 6: Combating present bias, recency bias, hindsight bias and outcome bias

  • Present bias arises because we discount the well-being of our future selves too much. A Ulysses contract (named after the Greek hero who had himself tied to the mast to resist the Sirens) solves this by locking your present self into a particular action.
  • Time travelling: thinking about how you'd feel about a decision 10 minutes, 10 months, and 10 years down the line helps combat recency and present bias, and gives perspective on what's really important versus what's just emotional noise in your mind.
  • Mapping out future scenarios, and even assigning subjective probabilities to each, helps combat hindsight bias and outcome bias. Such probability mapping helps ensure that a decision accounts for all available information and trade-offs, and is the best one possible given what we know (a minimal sketch follows below).
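As a minimal sketch of such a scenario map (the scenarios and numbers are invented):

```python
# A scenario map in miniature: enumerate futures, attach subjective probabilities,
# and record the expected value before acting. All numbers are made up for illustration.
scenarios = {
    "best case":  (0.2, 100),   # (subjective probability, payoff)
    "base case":  (0.6,  20),
    "worst case": (0.2, -50),
}

assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9  # probabilities sum to 1

ev = sum(p * payoff for p, payoff in scenarios.values())
print(f"expected value: {ev}")  # 0.2*100 + 0.6*20 + 0.2*(-50) = 22.0
```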

Further resources

  • LessWrong sequences on decision making, bias, epistemic confidence etc.
  • Richard McElreath's Statistical Rethinking is a good introduction to thinking like a Bayesian. The course helps you formalize the intuitive manner in which we process and update beliefs and information.