Decisions, decisions

To quote a former Secretary of Defense notable for a lack of insight and introspection, there are always ‘unknown unknowns’ when going into a situation. But a retrospective analysis sometimes shows clear signs of error, hubris, or willful incompetence leading us to ask: “what were you thinking?”

From the world of backcountry skiing comes the term ‘heuristic traps’, coined to describe how humans become ensnared in a cascade of avoidable mistakes. Such traps seduce us into accepting risks unwisely. The six key characteristics of such traps as they relate to outdoor adventures can be found on the websites of Wilderlife magazine (New Zealand) and this outdoors shop in the UK.

Avalanches are not the only way to get into trouble outdoors, but in popular backcountry skiing areas avalanche conditions are monitored closely, making it possible to identify what specific signs of risk were present prior to an accident. The reference work on this was done by Ian McCammon in “Heuristic Traps in Recreational Avalanche Accidents: Evidence and Implications” (Avalanche News, No. 68, Spring 2004).

McCammon observes that we are prone to skew our judgement because other humans are around. Studying groups of skiers involved in avalanches, McCammon found that risk ‘exposure’ was more likely when experts were leading groups on familiar terrain and ignored evident risks; when groups were led by someone with avalanche awareness but little training or experience; when groups included both men and women; and when expertise or effective decision-validation rules were absent.

Given a choice, most of the avalanche victims in this study apparently opted for the quick decision tool, even though it was not universally correct. Thus the challenge for educators is to offer practical alternatives to heuristic traps.

Ian McCammon, “Heuristic Traps in Recreational Avalanche Accidents”

People fall into heuristic traps because ‘knowledge-based decision strategies’ (in McCammon’s words) require careful evaluation and judgement, but we tend to like fast, simple decisions. It should be noted that he focused on accidents taking place where multiple signs (sometimes literal) of avalanche risk were present: the preponderance of objective evidence should have been clear.

Below I’ve reinterpreted McCammon’s ‘traps’ or warning signs for more general application:

  • Familiarity or prior knowledge contributes to normalizing the abnormal (‘I’ve seen things similar to this and it’s always been OK’).
  • Reluctance to give up on an objective causes us to ‘stay the course’ (good money goes after bad).
  • Fear of looking like a wimp, particularly among men.
  • Tendency to believe someone who seems sure of themselves, or a drive to behave decisively (we like to leave decisions to someone else, leaders fail to seek input or consensus, someone feels compelled to be decisive).
  • Seeing others do it means it must be OK.
  • Competitive instinct roused by scarcity (‘this is the last chance’) or bragging rights (‘we can’t let them be first’).
  • Our excitement/anxiety regarding performance is exaggerated in groups.

Rephrased as above, heuristic traps seem logical and commonsensical. Attention, status, and group connections are resources we prioritize all the time, but human habits and frailty can make our choices unlucky.

McCammon’s study is compelling because of the consequences of choosing wrongly. Studying the decision-making process prior to an avalanche accident is thrilling in the way one might read about the lead-up to an airplane crash or the wildly inappropriate invasion of another country. Stories of this type tend to unfold as a series of missed cues and opportunities.

In 2012 The New York Times published a graphics-rich article called “Snow Fall”. Outside did one on a 2013 avalanche at Loveland Pass, Colorado. And then another piece appeared in the NYT called “What I Learned in Avalanche School”, specifically about ‘heuristic traps’.

Even though several members of the eight-person party had previously triggered avalanches and even, in one case, witnessed a fatality, they had ignored the many obvious dangers, suggesting that these skiers had become dazzled by their own expertise, falsely brightened by luck.

“What I Learned in Avalanche School” by Heidi Julavits

In McCammon’s study, group size played a role, with groups of four doing statistically better than groups of two or of six to ten. One might infer that groups need a minimum number of viewpoints to see risks clearly, but that decision-making becomes subject to group pressure in larger parties.

Interesting stories to read, not so much fun to live through.

Take a beat to…

Trust but verify

Russian proverb

Consider the heuristic trap as an epistemic problem: how do we actually know what we think we know? It’s all about verifying or validating knowledge, judgement, and belief.

A guest blog post on Wildsnow offers actionable ways to improve decision-making while skiing in groups. It might not check all the boxes Ian McCammon would recommend, but one of the key things it does is open up decision-making to a form of validation by members of the group. I think it’s practical to do on snow in a group, and certainly better than not reading the team at all.

Whether heuristic or epistemic, this is about what goes on inside our heads, so naturally it’s studied by psychologists. A couple of examples:

“Beyond Fiasco: A Reappraisal of the Groupthink Phenomenon and a New Model of Group Decision Processes” (Psychological Bulletin, 1993, Vol. 113, No. 3, 533-552) reviews the model of vulnerabilities in ‘groupthink’ developed by Irving L. Janis. The vulnerabilities in group decisions are: group cohesiveness, lack of organized leadership, and a provocative situational context (something that drives urgency).

Groupthink refers to a deterioration of mental efficiency, reality testing, and moral judgment that results from in-group pressures

Janis, I. L. (1982). Groupthink (2nd ed.). Houghton Mifflin.

“Conditions for Intuitive Expertise: A Failure to Disagree” (American Psychologist, 2009) is a kind of ‘shoot-out’ between two models of expert judgement: ‘heuristics and biases’ and ‘naturalistic decision making’. Among the conclusions: humans have the best chance to acquire expert intuition in conditions with ‘high validity’ (a clear and predictable linkage between external cues and the outcomes of actions); experts often cannot articulate their intuition; and it’s difficult for experts to know when conditions have changed and their expertise no longer has validity. One of the authors is the Nobel Prize winner Daniel Kahneman.

Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions.

“Conditions for Intuitive Expertise: A Failure to Disagree”, Daniel Kahneman and Gary Klein, American Psychologist, September 2009

Clearly there’s a tension between what we believe about our judgement and the methods we use to validate that belief. Heuristic traps don’t exactly match what I know of Kahneman’s ‘fast vs. slow thinking’, but both are about how we determine the urgency of risk, and make guesses good and bad.

There are seductive and reductive motivations at the base of the heuristic trap, fitting it into a zone between ‘fast’ and ‘slow’ thinking. As the word ‘heuristic’ implies, we learn to reduce the complexity of decisions for the sake of efficiency. And the more seductive the answer, the greater the motive to confirm the least costly-seeming judgement.

Imperfect by nature

Humans devote about 20% of our energy just to supporting our brains. And what do we get for it? McCammon and Kahneman imply that we can be bad judges of how well we’re thinking. On top of that, taking time to ponder (Kahneman’s ‘slow thinking’) is costly and people don’t like doing it, especially when there’s fresh powder in that bowl beyond the dangerously overhanging cornice. As McCammon drily notes in his conclusion, “most of the avalanche victims in this study apparently opted for the quick decision tool, even though it was not universally correct.”

McCammon’s point is that the failure to recognize how we’re reacting to the people around us can be a hazard just as real as an avalanche condition.

Our ability to act as a group organism can enhance our survival through the pooling of individual skills. At the same time, that advantage is subverted by an inability to know the state of our own minds, or the extent to which our reasoning is affected by the mere presence of others. We want to make an impression on others, and we’re willing to cut corners, statistics be damned.

Did you play well?

Our desire not to disappoint others or ourselves can and does backfire. Whether at the group or personal level, statistics tells us that in some percentage of cases we’ll fail. In the case of backcountry skiers in an avalanche zone, the cost could be death. For a professional ski racer, prize money is involved but the most painful loss is to self-esteem. Sometimes we put too much weight on having a specific outcome, or outplaying a rival.

Alexander Bolshunov after failing to win the World Championship 50k classic in Oberstdorf, March 2021 (via YouTube)

Competition is at its most dramatic when the risk of failure is heightened. It is regrettable that reality TV and sporting events purposely dial up that potential. An alternative might be to practice accepting a range of outcomes. For example, Jessie Diggins and the US Ski Team try to emphasize how well the team worked together even when the result isn’t a podium. Having holistic goals seems like a sound backup plan.

Not to go all Buddhist about it, but desire does seem to be at the root of unhappiness. For me there’s a unique form of satisfaction when I’m skiing in the moment without expectation of an outcome. That’s not to say any random outcome is fine, but I’ve chosen a scenario for skiing in which a range of outcomes is possible. I can be happy with it as circumstances arise, or as feelings dictate.

The winter that was, 2020-21

In brief

This season was shorter but significantly better overall than last year’s. The average touring center season ran about 13½ weeks this year vs. 14 weeks for 2019-20. Season length was pretty consistent even when excluding the downstate and flatland regions.

More revealing is the calculation of ‘average quality of skiing’: 70%. The average goes up to 80% when restricted to areas further north and at elevation. Recalculating last year’s information, the quality of the 2019-20 season was a dismal 55% overall, while the northern and elevated areas had a quality of 73%.

Takeaways:

  • Mt. van Hoevenberg wanted to make a statement about their snowmaking and new competition trails this year, using them to open earlier and close later than anyone else. For that reason I measured season start and end using the other touring centers.
  • Snow came late and left early, but its distribution was farther south than last year’s, meaning that we downstate benefitted, somewhat at the expense of the far north country.
  • The snowpack is thinner in the mountains than last year, so springtime skiing in the high country will be shorter and more restricted despite a mid-April snowstorm this week.
  • This season’s skiing quality was better than last year’s, even for the north country. I believe this might be associated with the polar vortex (see next point).
  • February brought a wobble of arctic air southward across the northeast, allowing the snowpack to hold up much better than we’re used to. We should probably not count on this being ‘normal’, as it’s a sign of Arctic warming.

Hail nerdopolis

In reviewing last year’s spreadsheets, I decided this year to simplify judgements of ‘skiing quality’. I now use accumulated ‘points’ from posted conditions as a percentage of the maximum possible points for a given touring center’s season.

Points were assigned according to skiability status (seen in ‘State of the touring centers’); the maximum for each status check is ‘good skiing’, which translates to 3 points, with ‘skiable’ at 2 and ‘minimally skiable’ at 1 point.

For example, Prospect Mountain accumulated 87.5 points for 34 status checks I did between their season open and season end. The half-point is for a ‘TBD’ where conditions had not been updated recently. The max possible score for 34 checks is 102. No area or touring center will have ‘good skiing’ conditions at every check, so I discount the maximum possible points by 10%, making Prospect Mountain’s ‘100% quality’ equivalent to 92 points; 87.5/92 = a quality score of 95% for their season.
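For anyone who would rather see the arithmetic as code than as a spreadsheet formula, here is a minimal Python sketch of the calculation, assuming the point values and the 10% discount described above. The function name and the status labels used as dictionary keys are my own shorthand, purely for illustration.

```python
# Minimal sketch of the season quality score described above.
# Point values and the 10% discount follow the text; names are illustrative.

POINTS = {
    "good skiing": 3.0,
    "skiable": 2.0,
    "minimally skiable": 1.0,
    "TBD": 0.5,   # conditions not updated recently
    "closed": 0.0,
}

def season_quality(status_checks):
    """Return quality as a percentage of the discounted maximum score."""
    earned = sum(POINTS.get(status, 0.0) for status in status_checks)
    max_points = 3.0 * len(status_checks)   # every check at 'good skiing'
    discounted_max = 0.9 * max_points       # no area gets 'good skiing' every time
    return 100.0 * earned / discounted_max

# Prospect Mountain: 34 checks earning 87.5 points
# -> 87.5 / (0.9 * 102) = 87.5 / 91.8, roughly 95%
```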

So how did our downstate/NY metro touring centers do? The quality score necessarily includes periods during the season when trails are closed. You can see what that does to quality of skiing in a poor snow season like 2019-20. At the same time you can see variability that appears to be the direct result of grooming, as with the difference between Mohonk and Minnewaska this year compared to last.

Touring center     Quality, 2020-21    Quality, 2019-20
Fahnestock         43%                 7%
High Point         46%                 13%
Minnewaska         61%                 18%
Mohonk             38%                 18%

Taking the variability of weather into account, areas that want to remain viable would hope for a multiyear average quality of 50%. Fortunately for us downstate, the huge population of the NYC metro can support a distorted set of values: downstate might be able to sustain a decent touring center with season quality as low as 30%. Further north, a commercial area or resort should be situated with the climate and grooming to hit 60%.

Maintaining 80% or above takes luck with weather, faithful groomers, and money. I’m going to be looking out for whether the kind of sorting we’ve seen in other parts of the economy takes place with touring centers and resorts.

In 2020 Mountain Meadows and Windblown called it quits for cross-country, after a season when Windblown had a quality of 28% and Mountain Meadows 54%. Looking at this season, Dexter’s could decide to throw in the towel sometime; Mohonk wasn’t as dedicated to grooming as Minnewaska; and Viking Nordic’s quality was a bit sketchy compared to the nearby Wild Wings.

If you care to peruse a scatterplot of average skiing quality by length of season in weeks, here’s a graphic for this past season. I’d love to know whether my numbers are validated by the collective experience of skiers, but that could be a whole other project. Sounds cool.

Scatterplot of ski touring center skiing quality and length of ski season
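If you’d rather rebuild the plot than squint at my graphic, here’s a rough matplotlib sketch of the same idea. The quality figures are the downstate values from the table above, while the season lengths in weeks are placeholders I’ve invented for illustration, not my actual measurements.

```python
# Rough sketch of a quality-vs-season-length scatterplot.
# Quality values come from the downstate table above; the week counts
# are made-up placeholders, not the real season lengths.
import matplotlib.pyplot as plt

centers = {
    "Fahnestock": (12, 43),   # (weeks of season, quality %)
    "High Point": (11, 46),
    "Minnewaska": (14, 61),
    "Mohonk": (13, 38),
}

weeks = [w for w, _ in centers.values()]
quality = [q for _, q in centers.values()]

fig, ax = plt.subplots()
ax.scatter(weeks, quality)
for name, (w, q) in centers.items():
    ax.annotate(name, (w, q), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Length of season (weeks)")
ax.set_ylabel("Average skiing quality (%)")
ax.set_title("Touring center quality vs. season length, 2020-21")
plt.show()
```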

Other than a couple of commentary items I’m working on, that’s about it for the season. Thanks for reading.

Last licks 4/5

As of today (Monday), Mt van Hoevenberg is hanging tough with their 5 km of the New Competition, Bobcat, and Mini loops. They say ‘Ski it while you can’ and advise “frozen granular base groomed this morning” and “Spring Conditions will likely return this afternoon.”

Barkeater Trails Alliance’s final message for the season:

And unless something dramatic changes, this will be the last report of the season.  It all seemed to end too quickly, but at least we can remember those solid weeks of perfect powder conditions earlier in the winter.  

Barkeater Trails Alliance ski conditions 4/1/21

The graphic below shows the scant snow depth, and the logical tendency for touring centers to cluster where natural geography and local climate favor a better snowpack.

Snow depth, northeast U.S., Apr. 5, 2021 (NOAA), with ski touring centers marked