I’ve got another excellent book recommendation: Risk Savvy: How to Make Good Decisions, by Gerd Gigerenzer. Prof. Gigerenzer directs the Max Planck Institute for Human Development in Berlin, and has had a long and illustrious academic career. His current focus is on what he calls risk literacy: understanding risk, distinguishing risk from uncertainty, and, particularly important for physicians, being able to communicate risk in an understandable way.
Before explaining why I think this book is so important for us as physicians, I want to put it in some context. Back in 1989, Gigerenzer co-authored a fascinating history of probability, titled The Empire of Chance: How Probability Changed Science and Everyday Life. That book traces the history of probability from its roots in gambling, maritime insurance contracts, and life insurance underwriting through modern-day scientific and medical applications. There’s a particularly interesting discussion of the philosophy of probability–after all, it was not obvious at the dawn of the age of reason that some events in life are better understood in terms of probability than in terms of causal mechanism (or in terms of divine providence). There’s a current of contrariness in The Empire of Chance, in which Gigerenzer implies that we’ve fetishized frequentist probability to the detriment of good decision-making. What makes a p value of 0.05 so special?
In Risk Savvy, Gigerenzer develops this line of thinking much further, and sets it against the work of Kahneman and Tversky, which I’ve mentioned before in this blog. Kahneman and Tversky wrote extensively on the limitations of our heuristics–the mental shortcuts that we use to make judgments and decisions. They showed that, contrary to the claim that people are highly rational actors who make decisions (consciously or not) that comport with the laws of probability, we actually employ heuristics that lead us astray. See, for example, my previous discussion of the representativeness bias as it relates to the diagnosis of acute vestibulopathy. The implication of their work is that we need to better understand probability in order to make accurate judgments and good decisions.
In this highly readable book (yes, I know: What resident has time for reading this kind of thing, especially in August? Trust me, though, you can read this one in just a few days), Gigerenzer makes a few counterarguments. First, decisions often need to be made quickly and we simply don’t have time to perform a bunch of complicated calculations to arrive at a mathematically correct decision.
Second, we’re often lacking much of the data we would need to make decisions based on mathematical equations even if we did have the time. This gets to the difference between risk and uncertainty. Risk can be quantified. The risk of symptomatic intracranial hemorrhage in a stroke patient given tPA is 6.4%. We can use that risk to make decisions about when to offer or not offer the drug, and patients can use it to decide whether to accept such treatment. Uncertainty is harder to quantify. We can faithfully explain the tPA data to our patients day in and day out, thinking that we’re making probabilistically correct decisions, and then one day a bottle of TNK somehow gets mislabeled as tPA and the patient dies of a massive brain hemorrhage. That kind of event, what Nassim Nicholas Taleb termed a black swan, is hard to plan for or quantify. It doesn’t happen often enough (thankfully) for there to be valid frequentist data about its occurrence.
In Risk Savvy, Gigerenzer, contra Kahneman and Tversky, argues that heuristics are often the better way to make decisions under conditions of uncertainty, and he lists a bunch of useful ones. Pertinent to medicine, he shows that brief decision tools often outperform logistic regression models for making diagnoses or prognosticating. For example, a simple algorithm using age, BP, and pulse (I forget the details) was better at predicting death from MI than much more complicated equations. Age and NIHSS are by far the most powerful predictors of stroke outcome, notwithstanding the plethora of abstracts at the annual conference touting the latest quadratic formulae.
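The kind of brief decision tool Gigerenzer describes is sometimes called a fast-and-frugal tree: a few yes/no questions asked in a fixed order, any one of which can settle the classification. Here’s a minimal sketch of that structure in Python; since I don’t remember the actual details of the MI tool, the cues and thresholds below are invented purely for illustration:

```python
# A fast-and-frugal tree: a few ordered yes/no questions, each of which
# can end the classification immediately. The cues and cutoffs here are
# hypothetical, invented for illustration -- not the real MI tool.

def high_risk(age, systolic_bp, heart_rate):
    if systolic_bp < 90:       # hypotensive -> classify high risk, stop
        return True
    if age < 65:               # younger patient -> classify low risk, stop
        return False
    return heart_rate > 100    # otherwise a single last cue decides

print(high_risk(72, 85, 80))    # first question alone decides: True
print(high_risk(50, 120, 110))  # second question decides: False
```

The point of the form is that there are no weights to estimate and nothing to multiply: each cue either decides or passes the patient down to the next, which is exactly why such tools can be used at the bedside without a calculator.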
When a mathematical approach is called for (and Gigerenzer isn’t arguing that it never is–just that there are different approaches to different problems), he shows simple ways (using natural frequencies) to employ Bayesian reasoning to, for example, interpret test results and counsel patients. These two chapters on medicine are very good, and if you do get a hold of the book and have limited time or interest, I recommend them highly.
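The natural-frequencies trick is easy to demonstrate with a made-up screening test (the numbers below are illustrative, not from the book): instead of plugging conditional probabilities into Bayes’ formula, you imagine, say, 1,000 patients and count who ends up where. A short Python sketch:

```python
# Natural frequencies: interpret a positive test result by counting
# imagined patients rather than multiplying conditional probabilities.
# Illustrative numbers only (not from the book): prevalence 1%,
# sensitivity 90%, false-positive rate 9%.

def ppv_natural_frequencies(population, prevalence, sensitivity, false_pos_rate):
    """Positive predictive value, reasoned as counts of people."""
    sick = population * prevalence            # 10 of 1,000 have the disease
    true_pos = sick * sensitivity             # 9 of them test positive
    healthy = population - sick               # 990 do not have it
    false_pos = healthy * false_pos_rate      # ~89 of them test positive anyway
    return true_pos / (true_pos + false_pos)  # 9 / (9 + 89) -- about 9%

print(round(ppv_natural_frequencies(1000, 0.01, 0.90, 0.09), 2))  # 0.09
```

Framed this way–9 true positives hiding among roughly 98 positive tests–the counterintuitive answer that most positives are false becomes obvious, which is precisely the point Gigerenzer makes about counseling patients.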