It’s a far-reaching piece that calls into question the strength of existing information and the validity of its use in forecasting changes.
The book’s subject matter is so broad, and its implications so varied, that no summary could ever really do it justice.
As such, I shall only attempt to distill its most central theme. The opening few pages offer a good place to start:
“Before the discovery of Australia, people in the whole world were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence. The sighting of the first black swan might have been an interesting surprise for a few ornithologists (and others extremely concerned with the colouring of birds), but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black bird.”
Many white swan sightings led to a general theory that all swans were white. Every subsequent sighting confirmed and reinforced this belief. This continued for thousands of years, with each new sighting adding a data point in support of the theory. The theory grew stronger and belief in it trended towards the absolute.
And then a black swan was spotted.
That one sighting disproved a theory built on millennia of confirmatory evidence.
The value of one disconfirmatory sighting was greater than the value of a million confirmatory sightings.
The implication is significant:
“Black Swan logic makes what you don’t know far more relevant than what you do know.”
Clearly we should embrace the fact that we don’t know everything. We should try to uncover the unknown unknowns. We should hunt for information which falsifies our beliefs and be wary of that which confirms them.
In other words, we should try to find the ‘black swans’.
Taleb defines these ‘black swans’ with the following three criteria:
“First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”
Clearly, despite our best efforts to seek them out, knowledge-changing ‘black swans’ are inherently difficult to uncover or predict.
This is known as the problem of induction. Confirmatory evidence lulls us into a false sense of security. It builds an unjustified sense of surety. We believe that what has happened before will continue to happen in the future. The more something happens in the past, the more we believe it will continue to happen. The turkey who gets fed every morning becomes increasingly confident that this will continue to occur. All evidence points in the same direction. And it does, right up until it doesn’t.
This is why even experts fail to forecast with any real degree of accuracy. They construct their predictions from a history of stable, consistent and well-aligned data, completely oblivious to the huge disruptive event lurking just around the corner.
For example, how many experts correctly foresaw the 1987 stock market crash? And how many confident predictions failed to account for such a seismic event?
And when a black swan does arrive, the experts post-rationalise it. They claim it was predictable and expected. They fall for the narrative fallacy, reducing a chaotic, tightly coupled and complex set of events into a simple and digestible, but woefully inadequate, explanation.
To avoid black swan traps we need to rethink how we assess information, construct opinions, and make predictions. No small task. But we can start with a basic principle of critical thinking: know that you don’t know.