Saturday, August 22, 2009

Black Swan Part I

Just started reading Nassim Taleb's "The Black Swan: The Impact of the Highly Improbable." I picked the book up on Wednesday on my return trip from Los Angeles, after talking with Frank, and I was able to get through Part One on the flight. This isn't so much a book review as a discussion of how reading the book affects my thoughts on systemic approaches to leadership.

A "Black Swan Event" is something that is rare (specifically outliers), has extreme impact (changes world views), and has retrospective (but not predictive) predictability. In essence, Black Swan Events are those rare things that we never really thought would happen (because if we really thought that, we would have prepared), has a great effect upon us, and in retrospect we feel that the event was obvious in its cause and retrospective predictability. Examples given include: Europeans sighting their first ever black swan in Australia, 1980's market crash, 9/11, World War II, and others (the 2008/2009 "recession" could be viewed as a black swan event also). Taleb conjectures that the rate of occurrence of black swan events will increase as our systems become more complex, interconnected, and interdependent.

Taleb does an outstanding job in Part One explaining the interplay between humans, statistics (randomness), and our limited ability to handle things that have never happened before. This shows up in his treatment of common logical fallacies people commit when reasoning about statistical concepts. For example, any cancer patient should know the difference between "No Evidence of Disease" and "Evidence of No Disease." The former is what doctors are able to determine after cancer treatment; the latter is what an optimistic patient may (falsely) believe after being told there is no evidence of the cancer. Of course, the truth is that we can never be sure something doesn't exist; we can only say that we found no evidence that it does.
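
To make that asymmetry concrete, here is a hedged Bayesian sketch of my own (not Taleb's, and every number below is made up): a clean scan is a negative test result, which lowers the probability that disease is present but never drives it to zero.

    # Illustration only: "no evidence of disease" is a negative test result,
    # not proof of absence. Bayes' rule gives the residual probability that
    # disease is present despite a clean scan. All numbers are invented.

    def prob_disease_given_negative(prior, sensitivity, specificity):
        """P(disease | no evidence found), via Bayes' rule."""
        p_neg_given_disease = 1.0 - sensitivity   # scan misses real disease
        p_neg_given_healthy = specificity         # scan is clean when healthy
        p_negative = p_neg_given_disease * prior + p_neg_given_healthy * (1.0 - prior)
        return p_neg_given_disease * prior / p_negative

    prior = 0.30        # assumed chance residual cancer is present after treatment
    sensitivity = 0.80  # assumed chance the scan detects disease when it is there
    specificity = 0.95  # assumed chance the scan is clean when no disease is there

    posterior = prob_disease_given_negative(prior, sensitivity, specificity)
    print(f"P(disease | no evidence found) = {posterior:.1%}")  # ~8%, not 0%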

This brings us to our first concept from the book: many acquisition program managers believe they can determine that their program is healthy (evidence of no problems), when the most they can really establish is that they have found no evidence of severe problems. In talking with several very transactionally oriented program managers, many state that their programs are healthy because "nobody has shown me any problem." And then a Black Swan event occurs, something the transactional PM wasn't watching for in their carefully crafted list of risk items. Unfortunately, the PM in this case is likely to end up on the same emotional roller coaster as the cancer patient, learning that "recurrence" really means we didn't see the evidence earlier. Audit agencies will then step in and determine that the Black Swan event was indeed predictable (without admitting that the predictability exists only with the benefit of hindsight), if only the PM had tracked some other metric or made some other decision.

In another example, Taleb discusses a conference on randomness held at a casino, where the casino indicated that none of its five biggest losses were due to anything its models had predicted. Indeed, problems such as kidnappings, attempted bombings, employees not sending in tax forms (hiding the forms), and other surprises lost the casino more money than any of the sources it actually modeled, such as cheating. The point is that although casinos understand and manage the risks of cheaters, whales (high-limit gamblers), and the other elements of their principal statistical risk model very well, the real Black Swan events came from previously unexpected sources.

For a transformational leader, the key idea is obviously to open the aperture: develop subordinates to consider, and to lead proactive responses to, those events that could be fatal to the project. But the systemic leader must go further and identify ways to reward the team for identifying risks outside the principal project, learn how to handle information about ultra-low-probability risks, and restructure the network of the program to more effectively contain or eliminate risks.

In our first paper (posted about here), we discussed the 1996 Ariane 5 launch failure. This failure can be ascribed to a Black Swan event of sorts (a sketch of the software cause follows the list):

  • Before the launch, everyone thought the probability of a software defect really causing a loss of that mission was very low
  • During the boost phase, a software error caused the Ariane 5 to self-destruct (loss of mission) - truly an extreme impact on the mission
  • And lastly, in retrospect, the cause of the failure was obvious, as was the way to prevent that type of failure in the future
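
As a concrete anchor for the second bullet, here is a minimal sketch of the failure mode described in the inquiry report (the real flight software was written in Ada; this is Python, and the velocity values are illustrative, not actual telemetry): a 64-bit floating-point horizontal-bias value was narrowed to a 16-bit signed integer with no exception handler, because Ariane 4's flight envelope had made overflow seem impossible.

    # Sketch of the reported Ariane 5 failure mode; values are illustrative.
    INT16_MIN, INT16_MAX = -32_768, 32_767

    def to_int16(value: float) -> int:
        """Models Ada's checked narrowing conversion: out-of-range raises."""
        result = int(value)
        if not INT16_MIN <= result <= INT16_MAX:
            raise OverflowError(f"value {value} does not fit in 16 bits")
        return result

    # An Ariane 4-like magnitude fits; an Ariane 5-like magnitude (faster
    # horizontal velocity early in flight) does not.
    for label, horizontal_bias in (("Ariane 4-like", 20_000.0), ("Ariane 5-like", 64_000.0)):
        try:
            print(f"{label}: converted to {to_int16(horizontal_bias)}")
        except OverflowError as err:
            # In flight this exception went unhandled; both inertial reference
            # units shut down, and the launcher broke up and was destroyed.
            print(f"{label}: {err}")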


I think the constructs Taleb presents for conceptualizing the highly improbable (which, strangely enough, may be quite probable in highly complex, interconnected, and interdependent systems) are a good mental skill for a systemic leader to develop.

More about this book next week, as I get to Part Two.
