The problem of induction

Nassim Nicholas Taleb writing in his book The Black Swan:

“The überphilosopher Bertrand Russell presents a particularly toxic variant of my surprise jolt in his illustration of what people in his line of business call the Problem of Induction or Problem of Inductive Knowledge.

How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know that what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

How can we know the future, given knowledge of the past; or, more generally, how can we figure out the properties of the (infinite) unknown based on the (finite) known? Think of the feeding again: what can our turkey learn about what is in store for it tomorrow from the events of yesterday? A lot, perhaps, but certainly a little less than it thinks, and it is just that “little less” that may make all the difference.

Let us go one step further and consider induction’s most worrisome aspect: learning backward. Consider that the turkey’s experience may have, rather than no value, a negative value. It’s learned from observation, as we are all advised to do (hey, after all, this is what is believed to be the scientific method). Its confidence increased as the number of friendly feedings grew, and it felt increasingly safe even though the slaughter was more and more imminent. Consider that the feeling of safety reached its maximum when the risk was at the highest! But the problem is even more general than that; it strikes at the nature of empirical knowledge itself. Something has worked in the past, until – well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.”

The past cannot always predict the future.

No wonder economic predictions usually fail.

The three different types of error

Tim Harford writing in his book Adapt:

James Reason, the scholar of catastrophe who uses Nick Leeson and Barings Bank as a case study to help engineers prevent accidents, is careful to distinguish between three different types of error. The most straightforward are slips, when through clumsiness or lack of attention you do something you simply didn’t mean to do. In 2005, a young Japanese trader tried to sell one share at a price of ¥600,000 and instead sold 600,000 shares at the bargain price of ¥1. Traders call these slips ‘fat finger errors’ and this one cost £200 million.

Then there are violations, which involve someone deliberately doing the wrong thing. Bewildering accounting tricks like those employed at Enron, or the cruder fraud of Bernard Madoff, are violations, and the incentives for them are much greater in finance than in industry.

Most insidious are mistakes. Mistakes are things you do on purpose, but with unintended consequences, because your mental model of the world is wrong. When the supervisors at Piper Alpha switched on a dismantled pump, they made a mistake in this sense. Switching on the pump was what they intended, and they followed all the correct procedures. The problem was that their assumption about the pump, which was that it was all in one piece, was mistaken.

Groupthink

Tim Harford writing in his book Adapt:

Irving Janis’s classic analysis of the Bay of Pigs and other foreign policy fiascoes, Victims of Groupthink, explains that a strong team – a ‘kind of family’ – can quickly fall into the habit of reinforcing each other’s prejudices out of simple team spirit and a desire to bolster the group.

This reminds me of the concept of echo chambers, where a person’s beliefs are strengthened when they are exposed only to opinions that align with their own.

To break free of groupthink and echo chambers we need to seek out contradictory, contrarian or disconfirmatory opinions. We need to try to falsify our current position in order to progress towards a truth.

Harford continues:

Janis details the way in which John F. Kennedy fooled himself into thinking that he was gathering a range of views and critical comments. All the while his team of advisors were unconsciously giving each other a false sense of infallibility. Later, during the Cuban Missile Crisis, Kennedy was far more aggressive about demanding alternative options, exhaustively exploring risks, and breaking up his advisory groups to ensure that they didn’t become too comfortable.

The Narrative Fallacy

Nassim Nicholas Taleb writing in The Black Swan:

“We like stories, we like to summarise, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in this section … is what I call the narrative fallacy. The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.”

Later:

“The narrative fallacy addresses our very limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.”

The world is complex. Even simple scenarios are difficult to predict. Few experts understand it. And yet we string cherry-picked facts together to form nice, neat stories. Stories with beginnings, middles and ends. Causes and effects. We construct cohesion out of complexity. We aim for simple and achieve simplistic.

We shouldn’t stop. But we should understand the limits of our understanding.

Myside bias

Elizabeth Kolbert, writing in The New Yorker:

“Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments.”

Later:

“[Hugo] Mercier and [Dan] Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.”

For me, there’s a subtle distinction between confirmation bias and myside bias.

Confirmation bias skews the way we process new information. Myside bias skews the way we critique existing views.

The illusion of explanatory depth

Elizabeth Kolbert writing in The New Yorker:

“Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

[Steven] Sloman and [Philip] Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.”

We believe that we know more than we do. And trying to explain reveals our ignorance.

Or, as the quote often attributed to Einstein goes:

“If you can’t explain it to a six-year-old, you don’t understand it yourself.”

Rational bias

Nate Silver writing in The Signal and the Noise:

“When you have your name attached to a prediction your incentives may change. For instance, if you work for a poorly known firm, it may be quite rational for you to make some wild forecasts that will draw big attention when they happen to be right, even if they aren’t going to be right very often. Firms like Goldman Sachs, on the other hand, might be more conservative in order to stay within the consensus.

Indeed, this exact property has been identified in the Blue Chip forecasts: one study terms the phenomenon “rational bias”. The less reputation you have, the less you have to lose by taking a big risk when you make a prediction. Even if you know the forecast is dodgy, it might be rational for you to go after the big score. Conversely, if you have already established a good reputation, you might be reluctant to step too far out of line even when you think the data demands it.”

The greater your reputation, the more conservative you are.

Known unknowns

During a press conference in 2002, a reporter questioned Donald Rumsfeld, the then US Secretary of Defense, about the presence of weapons of mass destruction in Iraq.

Rumsfeld’s response included the famous line:

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns; there are things we do not know we don’t know.”

Unknown unknowns are a major problem when one is seeking to understand and assess a situation. You don’t know the critical piece of information, and you don’t know that you need to know it.

Nate Silver discusses this idea in The Signal and the Noise and seeks to clarify the term’s use:

“The concept of the unknown unknown is sometimes misunderstood. It’s common to see the term employed in formulations like this, to refer to a fairly specific (but hard-to-predict) threat:

Nigeria is a good bet for a crisis in the not-too-distant future—an unknown unknown that poses the most profound implications for US and global security [emphasis added].

This particular prophecy about the terrorist threat posed by Nigeria was rather prescient (it was written in 2006, three years before the Nigerian national Umar Farouk Abdulmutallab tried to detonate explosives hidden in his underwear while aboard a flight from Amsterdam to Detroit). However, it got the semantics wrong. Anytime you are able to enumerate a dangerous or unpredictable element, you are expressing a known unknown. To articulate what you don’t know is a mark of progress.

Few things, as we have found, fall squarely into the binary categories of the predictable and the unpredictable. Even if you don’t know how to predict something with 100 percent certainty, you may be able to come up with an estimate or a forecast of the threat. It may be a sharp estimate or a crude one, an accurate forecast or an inaccurate one, a smart one or a dumb one. But at least you are alert to the problem and you can usually get somewhere: we don’t know exactly how much of a terrorist threat Nigeria may pose to us, for instance, but it is probably a bigger threat than Luxembourg.”

Generalists and specialists

The following extract is from a paper by the University of Michigan psychologist Karl Weick, titled “Theory Building as Disciplined Imagination” (Academy of Management Review, 1989).

“Generalists, people with moderately strong attachments to many ideas, should be hard to interrupt, and once interrupted, should have weaker, shorter negative reactions since they have alternative paths to realize their plans. Specialists, people with stronger attachments to fewer ideas, should be easier to interrupt, and once interrupted, should have stronger, more sustained negative reactions because they have fewer alternative pathways to realize their plans. Generalists should be the upbeat, positive people in the profession while specialists should be their grouchy, negative counterparts.”


This reminds me of Tim Brown and IDEO’s articulation of T-shaped people.

The concept defines two axes of knowledge: breadth and depth.

  • Horizontal people have broad but shallow knowledge. They know a little about a lot. In Weick’s terms, these are generalists.
  • Vertical people have narrow but deep knowledge. They know a lot about a little. In Weick’s terms, these are specialists.

Weick’s insight is that because specialists have invested a lot of time in a narrow field, they form strong attachments to their beliefs and react negatively when presented with new information that contradicts them.

Conversely, generalists have invested their time and effort across a number of fields and thus are more open to new, contradictory information.

When recruiting, Tim Brown looks for evidence of T-shaped people, that is, people with expertise in one field as well as a deep curiosity about other fields.

Specialising helps them understand their field more deeply.

Generalising keeps them open to new information, even if it doesn’t support their beliefs.

Being a generalist makes you better at being a specialist.

It stops you from getting too attached.

It stops you from overvaluing the confirmatory and undervaluing the contradictory.

It helps you avoid confirmation bias.

It opens you up to new information.

Karl Popper took the idea of generalising one step further.

He teaches us to not only be open to new ideas but to seek them out.
To actively pursue information that disproves our current beliefs.

This, he argues, is the fastest way to get to an empirical truth.

Socrates agreed.

Bayes’s Theorem

The following extract is taken from Nate Silver’s book “The Signal and the Noise”.

“Thomas Bayes’s paper, ‘An Essay toward Solving a Problem in the Doctrine of Chances’, was not published until after his death, when it was brought to the Royal Society’s attention in 1763 by a friend of his named Richard Price. It concerned how we formulate probabilistic beliefs about the world when we encounter new data.”
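For reference, the theorem that eventually took Bayes’s name can be written compactly. In the notation below (mine, not Silver’s), H is a hypothesis and E is the new evidence we encounter:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

Read it as: the updated (posterior) belief in H after seeing E is the prior belief P(H), reweighted by how strongly the evidence would be expected if H were true, P(E | H), relative to how likely that evidence is overall, P(E).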
