I saw this cartoon on the web recently.  It points out an important truth:  we (humans and dinosaurs alike, I suppose) take too few risks, rather than too many.

I don’t know whether the dinosaur did the research, but I have.  The results are provocative.  Humans misanalyze risks.  It is not just that we have a difficult time calculating risks, understanding percentages, and (as a culture) understanding mathematics.  We do, but there are greater challenges at work, challenges that have instructive consequences.

Loss Aversion.  One key notion is so-called loss aversion.  We tend to prefer avoiding losses over acquiring gains.  In his 1996 book, Against the Gods – The Remarkable Story of Risk, Peter Bernstein quotes Amos Tversky:

“It is not so much that people hate uncertainty—but rather, they hate losing.”  Losses will always loom larger than gains.  Indeed, losses that go unresolved—such as the loss of a child or a large insurance claim that never gets settled—are likely to provoke intense, irrational, and abiding risk-aversion.

Peter L. Bernstein, Against the Gods – The Remarkable Story of Risk at 274 (1996).

If this is true (and scores of studies have shown that it is), the principle leads to important insights.  We are inherently conservative.  We may refuse to take a risk that could secure major gains because we fear some loss, even where the possible loss is disproportionately small relative to the potential gain.  In the classic demonstrations, most people refuse an even-odds bet unless the potential gain is roughly twice the potential loss.

Fear of What’s New.  There is another flaw in most people’s risk-assessment processes:  we fear what’s new.  As David Ropeik and George Gray noted in their illuminating 2002 work, Risk, which addresses the actual likelihood of bad events:

Most people are more afraid of risks that are new than those they’ve lived with for a while.  In the summer of 1999, New Yorkers were extremely afraid of West Nile virus, a mosquito-borne infection that killed several people and that had never been seen in the United States.  By the summer of 2001, though the virus continued to show up and make a few people sick, the fear had abated.  The risk was still there, but New Yorkers had lived with it for a while.

David Ropeik and George Gray, Risk at 17-18 (2002).

Ropeik and Gray catalog many other anomalies of risk perception in their book, some of which are very entertaining.  The bottom line is that we don’t assess risks very well.

Even Experts Predict Poorly.  There is a broader issue here: when gathering data to make predictions, even experts tend to take unreasonable shortcuts.  Richards Heuer, writing about intelligence analysis after years with the Central Intelligence Agency, had this to say:

I would suggest, based on personal experience and discussions with analysts, that most analysis is conducted in a manner very similar to the satisficing mode (selecting the first identified alternative that appears “good enough”). The analyst identifies what appears to be the most likely hypothesis—that is, the tentative estimate, explanation, or description of the situation that appears most accurate. Data are collected and organized according to whether they support this tentative judgment, and the hypothesis is accepted if it seems to provide a reasonable fit to the data. The careful analyst will then make a quick review of other possible hypotheses and of evidence not accounted for by the preferred judgment to ensure that he or she has not overlooked some important consideration.

This approach has three weaknesses: the selective perception that results from focus on a single hypothesis, failure to generate a complete set of competing hypotheses, and a focus on evidence that confirms rather than disconfirms hypotheses.

Richards J. Heuer, Jr., Psychology of Intelligence Analysis at 44 (1999).

I have taken a number of personal lessons from this.  I try to examine the basis upon which I am making predictions.  To the extent I find myself erring on the side of caution, or sticking with what’s familiar, I question that judgment more carefully.

More broadly, however, as I will discuss in other posts (and have raised in the past), I have set out to take more risks.  My research has convinced me, fundamentally, that too many people stay within comfort zones.  They may succeed, but that success is unreasonably bounded.  Too many of us – especially those with perfectionistic tendencies – live well below our potential because, thanks to the predictive problems described above, we won’t take risks that would lead to greater gains.

A new post by Jeffrey Tang suggests one way to solve this problem:  begin with what he calls the “low-risk start,” and then build from there.  Sound advice.  My advice is slightly different:  make more mistakes.  Risk significant personal or professional failure at least a couple of times per week.  You will learn, and you will live, much more than you do now.