Confirmation bias

Nassim Nicholas Taleb, writing in The Black Swan:

“Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias.

The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented the subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule. Once confident with their answers, the subjects would formulate the rule. … The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it because in order to do so they had to offer a series in descending order (what the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up.”

Peter Cathcart Wason’s 2-4-6 problem demonstrated that we have a tendency to seek out evidence which confirms our ideas rather than that which falsifies them.
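To make the asymmetry concrete, here is a small sketch in Python (my own illustration, not from the book; the subject’s hypothesis and the test sequences are invented). Confirming tests never separate a narrow hypothesis from the broader true rule; only a sequence the hypothesis predicts should fail can do that.

```python
# A minimal sketch of Wason's 2-4-6 task (illustrative only).
# The true rule is simply "ascending order"; a typical subject's
# hypothesis is narrower, e.g. "even numbers increasing by 2".

def true_rule(seq):
    return seq[0] < seq[1] < seq[2]

def subject_hypothesis(seq):
    return (all(n % 2 == 0 for n in seq)
            and seq[1] - seq[0] == 2 and seq[2] - seq[1] == 2)

# Confirming tests: sequences the subject already expects to pass.
for seq in [(2, 4, 6), (8, 10, 12), (20, 22, 24)]:
    # Both rules say "yes", so the subject learns nothing new.
    print(seq, "experimenter says:", true_rule(seq))

# Informative tests: sequences the subject's hypothesis predicts should fail.
for seq in [(1, 2, 3), (3, 2, 1)]:
    # (1, 2, 3) earns a "yes" despite the hypothesis predicting "no";
    # (3, 2, 1) finally earns a "no". Only these reveal the real rule.
    print(seq, "experimenter says:", true_rule(seq),
          "| subject predicted:", subject_hypothesis(seq))
```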

Alvin Toffler echoed the sentiment, and borrowed a fair amount from Leon Festinger’s concept of cognitive dissonance, 10 years later.

Nate Silver, writing in The Signal and the Noise:

“Alvin Toffler, writing in the book Future Shock in 1970, predicted some of the consequences of what he called “information overload”. He thought our defence mechanism would be to simplify the world in ways that confirmed our biases, even as the world itself was growing more diverse and more complex.”

Silver later sums up the bias succinctly:

“The instinctual shortcut that we take when we have ‘too much information’ is to engage with it selectively, picking out the parts we like and ignoring the remainder.”

Zipf’s law

Nassim Nicholas Taleb writing in his book The Black Swan:

“During the 1940s, a Harvard linguist, George Zipf, examined the properties of language and came up with an empirical regularity now known as Zipf’s law, which, of course, is not a law (and if it were, it would not be Zipf’s). It is just another way to think about the process of inequality. The mechanisms he described were as follows: The more you use a word, the less effortful you will find it to use that word again, so you borrow words from your private dictionary in proportion to their past use. This explains why out of the sixty thousand main words in English, only a few hundred constitute the bulk of what is used in writings, and even fewer appear regularly in conversation.”
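The mechanism Taleb describes, borrowing words in proportion to their past use, is easy to simulate. Here is a rough sketch (my own toy model, not Zipf’s or Taleb’s; the reuse probability and vocabulary are arbitrary) showing how that feedback concentrates usage in a handful of words with a long tail of rare ones:

```python
import random
from collections import Counter

random.seed(0)
NEW_WORD_PROB = 0.05   # chance of coining a brand-new word (arbitrary)
usage = []             # every word token produced so far
vocab_size = 0

for _ in range(50_000):
    if not usage or random.random() < NEW_WORD_PROB:
        vocab_size += 1
        usage.append(f"word{vocab_size}")        # coin a new word
    else:
        usage.append(random.choice(usage))       # reuse in proportion to past use

counts = Counter(usage).most_common()
# Frequency falls away roughly in proportion to rank: the Zipf-like pattern.
for rank in (1, 2, 5, 10, 50, 100):
    if rank <= len(counts):
        word, freq = counts[rank - 1]
        print(f"rank {rank:>3}: {word} used {freq} times")
```

The point is only that proportional reuse, on its own, is enough to make a few words do most of the work.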

The problem of induction

Nassim Nicholas Taleb writing in his book The Black Swan:

“The überphilosopher Bertrand Russell presents a particularly toxic variant of my surprise jolt in his illustration of what people in his line of business call the Problem of Induction or Problem of Inductive Knowledge.

How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know that what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

How can we know the future, given knowledge of the past; or, more generally, how can we figure out the properties of the (infinite) unknown based on the (finite) known? Think of the feeding again: what can our turkey learn about what is in store for it tomorrow from the events of yesterday? A lot, perhaps, but certainly a little less than it thinks, and it is just that “little less” that may make all the difference.

Let us go one step further and consider induction’s most worrisome aspect: learning backward. Consider that the turkey’s experience may have, rather than no value, a negative value. It learned from observation, as we are all advised to do (hey, after all, this is what is believed to be the scientific method). Its confidence increased as the number of friendly feedings grew, and it felt increasingly safe even though the slaughter was more and more imminent. Consider that the feeling of safety reached its maximum when the risk was at the highest! But the problem is even more general than that; it strikes at the nature of empirical knowledge itself. Something has worked in the past, until – well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.”

The past cannot always predict the future.

No wonder economic predictions usually fail.
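As a rough way of putting numbers on the turkey’s misplaced confidence, here is a tiny sketch (my own, not Taleb’s) using Laplace’s rule of succession as the naive inductive estimate:

```python
# After n consecutive friendly feedings, a naive inductive estimate of
# "I will be fed tomorrow" is (n + 1) / (n + 2): Laplace's rule of succession.

for feedings in (1, 10, 100, 1000):
    confidence = (feedings + 1) / (feedings + 2)
    print(f"after {feedings:>4} feedings, P(fed tomorrow) is estimated at {confidence:.3f}")

# The estimate is at its highest on the day before Thanksgiving,
# exactly when the actual risk peaks.
```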

Tournament effect

Nassim Nicholas Taleb writing in his book The Black Swan:

“Let me start with the economist Sherwin Rosen. In the early 80s, he wrote papers about “the economics of superstars.” In one of the papers he conveyed his sense of outrage that a basketball player could earn $1.2 million a year, or a television celebrity could make $2 million. To get an idea of how this concentration is increasing – i.e., of how we are moving away from Mediocristan – consider that television celebrities and sports stars (even in Europe) get contracts today, only two decades later, worth in the hundreds of millions of dollars! The extreme is about (so far) 20 times higher than it was two decades ago!

According to Rosen, this inequality comes from a tournament effect: someone who is marginally “better” can easily win the entire pot, leaving the others with nothing. Using an argument from Chapter 3, people prefer to pay $10.99 for a recording featuring Horowitz to $9.99 for a struggling pianist. Would you rather read Kundera for $13.99 or some unknown author for $1? So it looks like a tournament, where the winner grabs the whole thing – and he does not have to win by much.”
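A stylised sketch of the winner-take-all logic (my own hypothetical numbers, not Rosen’s or Taleb’s): a 1% edge in quality makes almost no difference in a proportional market, and all the difference in a tournament.

```python
quality = {"star": 1.01, "runner_up": 1.00}   # a marginal difference in ability
market = 100_000_000                           # total spending, purely hypothetical

# Proportional world: revenue tracks quality almost exactly.
total_quality = sum(quality.values())
proportional = {name: market * q / total_quality for name, q in quality.items()}

# Tournament world: every buyer picks the (marginally) better option.
winner = max(quality, key=quality.get)
tournament = {name: (market if name == winner else 0) for name in quality}

print("proportional:", {name: f"${v:,.0f}" for name, v in proportional.items()})
print("tournament:  ", {name: f"${v:,.0f}" for name, v in tournament.items()})
```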

The Narrative Fallacy

Nassim Nicholas Taleb writing in The Black Swan:

“We like stories, we like to summarise, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in the section … is what I call the narrative fallacy. The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.”

Later:

“The narrative fallacy addresses our very limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.”

The world is complex. Even simple scenarios are difficult to predict. Few experts understand it. And yet we string cherry-picked facts together to form nice, neat stories. Stories with beginnings, middles and ends. Causes and effects. We construct cohesion out of complexity. We aim for simple and achieve simplistic.

We shouldn’t stop. But we should understand the limits of our understanding.


Theory of complexity

In his book The Black Swan, the essayist Nassim Nicholas Taleb provides a functional definition of complex domains:

“A complex domain is characterised by the following: there is a great degree of interdependence between its elements, both temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops.”

Nate Silver expands on this brief introduction with a more illustrative description in his book The Signal and the Noise:

“The theory of complexity that the late physicist Per Bak and others developed is different from chaos theory, although the two are often lumped together. Instead, the theory suggests that very simple things can behave in strange and mysterious ways when they interact with one another.

Bak’s favourite example was that of a sandpile on the beach. If you drop another grain of sand onto the pile (…) it can actually do one of three things. Depending on the shape and size of the pile, it might stay more or less where it lands, or it might cascade gently down the small hill towards the bottom of the pile. Or it might do something else: if the pile is too steep, it could destabilise the entire system and trigger a sand avalanche.”

Just imagine the number of different ways that the sandpile could be configured. And just imagine the number of ways the falling grain of sand could hit the pile. Despite being such a simple object (a sandpile), the number of possible interactions between its constituent parts is innumerable. And each potential scenario would have a different result.
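The sandpile is also easy to play with in code. Below is a toy simulation, loosely based on the Bak-Tang-Wiesenfeld sandpile model (a simplification for illustration; the grid size and threshold are arbitrary). Most grains do nothing; a few trigger avalanches out of all proportion to their size.

```python
import random

random.seed(1)
SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Drop one grain at a random site, topple until stable,
    and return the avalanche size (number of topplings)."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[y][x] += 1
    avalanche = 0
    unstable = [(x, y)]
    while unstable:
        cx, cy = unstable.pop()
        if grid[cy][cx] < THRESHOLD:
            continue
        grid[cy][cx] -= THRESHOLD          # the site topples...
        avalanche += 1
        if grid[cy][cx] >= THRESHOLD:      # ...and may still be unstable
            unstable.append((cx, cy))
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:   # grains at the edge fall off
                grid[ny][nx] += 1
                if grid[ny][nx] >= THRESHOLD:
                    unstable.append((nx, ny))
    return avalanche

sizes = [drop_grain() for _ in range(20_000)]
print("grains that caused no toppling at all:", sizes.count(0))
print("largest avalanche:", max(sizes), "topplings")
```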

But of course a pile of sand containing thousands of irregular grains is complex. A simpler example would be the initial break in a game of pool. 16 spheres on a flat surface. But still, how many times would you have to break until every ball landed in exactly the same positions?

These are complex systems.

Whilst Silver is quick to distinguish between complexity and chaos, it’s worth noting that Tim Harford is also keen to make a distinction. In his book Adapt he separates the concepts of complex systems and tightly coupled systems.

To put it simply, complex systems have many possible, hard-to-predict scenarios. Some will destabilise the entire system, some won't. Tightly coupled systems are always the latter.