The problem of induction

Nassim Nicholas Taleb writing in his book The Black Swan:

“The überphilosopher Bertrand Russell presents a particularly toxic variant of my surprise jolt in his illustration of what people in his line of business call the Problem of Induction or Problem of Inductive Knowledge.

How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

How can we know the future, given knowledge of the past; or, more generally, how can we figure out the properties of the (infinite) unknown based on the (finite) known? Think of the feeding again: what can our turkey learn about what is in store for it tomorrow from the events of yesterday? A lot, perhaps, but certainly a little less than it thinks, and it is just that “little less” that may make all the difference.

Let us go one step further and consider induction’s most worrisome aspect: learning backward. Consider that the turkey’s experience may have, rather than no value, a negative value. It learned from observation, as we are all advised to do (hey, after all, this is what is believed to be the scientific method). Its confidence increased as the number of friendly feedings grew, and it felt increasingly safe even though the slaughter was more and more imminent. Consider that the feeling of safety reached its maximum when the risk was at the highest! But the problem is even more general than that; it strikes at the nature of empirical knowledge itself. Something has worked in the past, until – well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.”

The past cannot always predict the future.

No wonder economic predictions usually fail.

Tournament effect

Nassim Nicholas Taleb writing in his book The Black Swan:

Let me start with the economist Sherwin Rosen. In the early 1980s, he wrote papers about “the economics of superstars.” In one of the papers he conveyed his sense of outrage that a basketball player could earn $1.2 million a year, or a television celebrity could make $2 million. To get an idea of how this concentration is increasing – i.e., of how we are moving away from Mediocristan – consider that television celebrities and sports stars (even in Europe) get contracts today, only two decades later, worth in the hundreds of millions of dollars! The extreme is (so far) about 20 times higher than it was two decades ago!

According to Rosen, this inequality comes from a tournament effect: someone who is marginally “better” can easily win the entire pot, leaving the others with nothing. Using an argument from Chapter 3, people prefer to pay $10.99 for a recording featuring Horowitz to $9.99 for a struggling pianist. Would you rather read Kundera for $13.99 or some unknown author for $1? So it looks like a tournament, where the winner grabs the whole thing – and he does not have to win by much.

Theory of complexity

In his book The Black Swan, the essayist Nassim Nicholas Taleb provides a functional definition of complex domains:

“A complex domain is characterised by the following: there is a great degree of interdependence between its elements, both temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops.”

Nate Silver expands on this brief introduction with a more illustrative description in his book The Signal and the Noise:

“The theory of complexity that the late physicist Per Bak and others developed is different from chaos theory, although the two are often lumped together. Instead, the theory suggests that very simple things can behave in strange and mysterious ways when they interact with one another.

Bak’s favourite example was that of a sandpile on the beach. If you drop another grain of sand onto the pile (…) it can actually do one of three things. Depending on the shape and size of the pile, it might stay more or less where it lands, or it might cascade gently down the small hill towards the bottom of the pile. Or it might do something else: if the pile is too steep, it could destabilise the entire system and trigger a sand avalanche.”

Just imagine the number of different ways that the sandpile could be configured. And just imagine the number of ways the falling grain of sand could hit the pile. Despite the sandpile being such a simple object, the number of possible interactions between its constituent parts is innumerable. And each potential scenario would have a different result.

But of course a pile of sand containing thousands of irregular grains is complex. A simpler example would be the initial break in a game of pool: 16 spheres on a flat surface. But still, how many times would you have to break before every ball landed in exactly the same positions?

These are complex systems.
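Per Bak’s sandpile can be made concrete with a few lines of code. Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile model in Python; the grid size, the toppling threshold of four grains, and the random drop sites are illustrative choices rather than details taken from either book, but the behaviour it produces is the one Silver describes: most grains settle quietly, while a few trigger avalanches that rearrange large parts of the pile.

```python
import random

SIZE = 20        # illustrative grid size (assumption, not from the books)
THRESHOLD = 4    # a cell topples once it holds four grains (standard BTW rule)

grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain(grid):
    """Drop one grain at a random cell and return the size of the avalanche it causes."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    avalanche = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue
        # Topple: hand one grain to each neighbour; grains falling off the edge are lost.
        grid[r][c] -= THRESHOLD
        avalanche += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1
                if grid[nr][nc] >= THRESHOLD:
                    unstable.append((nr, nc))
    return avalanche

sizes = [drop_grain(grid) for _ in range(50_000)]
print("drops causing no avalanche:", sizes.count(0))
print("largest avalanche:", max(sizes), "topplings")
```

Run for long enough, the pile settles into a state where the next grain usually does nothing but occasionally sets off a cascade whose size could not have been guessed in advance, which is precisely the point of the example.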

Whilst Silver is quick to distinguish between complexity and chaos, it’s worth noting that Tim Harford is also keen to make a distinction. In his book Adapt he separates the concepts of complex systems and tightly coupled systems.

To put it simply, complex systems have many possible, hard-to-predict scenarios, some of which will destabilise the entire system and some of which won’t. In a tightly coupled system, once something does go wrong, the damage spreads quickly and is hard to stop.

Wealth effect

Nate Silver describes the wealth effect in The Signal and the Noise:

“Nonhousehold wealth – meaning the sum total of things like savings, stocks, pensions, cash, and equity in small businesses – declined by 14% for the median family between 2001 and 2007. When the collapse of the housing bubble wiped essentially all their housing equity off the books, middle-class Americans found they were considerably worse off than they had been a few years earlier.

The decline in consumer spending that resulted as consumers came to take a more realistic view of their finances – what economists call a “wealth effect” – is variously estimated at between 1.5% and 3.5% of GDP per year, potentially enough to turn average growth into a recession.”

When people believe themselves to be wealthier, they tend to spend more money.

This rule of thumb seems to hold true even when wealth is tied up in assets.

For example, although a rise in the value of your property doesn’t provide you with more cash right now, it does seem to cause you to spend more.

Conversely, although a fall in the value of your stock portfolio doesn’t reduce the amount of money available to you right now, it does seem to cause you to spend less.

 

Law of large numbers

From Priceless by American author, columnist and skeptic William Poundstone:

“This says that flipping a fair coin a large number of times will give you a percentage of heads close to 50. That is all you can ask of a fair coin. You can’t predict the outcome of a small number of tosses. However, Tversky and Kahneman noted, people want to believe just that: they suppose that flipping a coin 10 times will yield five heads and five tails, or something close to it. In reality, lopsided outcomes (like eight heads and two tails) are more common than people believe. Tversky and Kahneman surveyed some mathematical psychologists at a meeting and found that even the experts were subject to this error. The article’s most memorable line displays a playful wit rarely encountered in scientific papers: ‘People’s intuitions about random sampling appear to satisfy the law of small numbers, which asserts that the law of large numbers applies to small numbers as well.’”

Put simply, the more times an experiment is run, the closer the average result will be to the expected value.
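A quick coin-flip simulation illustrates both halves of Poundstone’s point: the running proportion of heads closes in on 50% as the number of flips grows, while small batches of 10 flips produce lopsided results surprisingly often. This is only an illustrative sketch in Python; the exact figures will vary from run to run.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Law of large numbers: the proportion of heads approaches 0.5 as the flip count grows.
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: {heads / n:.4f} heads")

# "Law of small numbers": in batches of 10 flips, lopsided outcomes are common.
batches = 10_000
lopsided = sum(
    abs(sum(random.random() < 0.5 for _ in range(10)) - 5) >= 3  # 8-2 or worse
    for _ in range(batches)
)
print(f"batches of 10 at least as lopsided as 8-2: {lopsided / batches:.1%}")
```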

Coherent arbitrariness

From Priceless, a book studying the hidden psychology of value, by William Poundstone:

“Skippy peanut butter recently redesigned its plastic jar. “The jar used to have a smooth bottom,” explained Frank Luby, a price consultant with Simon-Kucher & Partners in Cambridge, Massachusetts. “It now has an indentation, which takes a couple of ounces of peanut butter out of the product.” The old jar contained 18 ounces; the new has 16.3. The reason, of course, is so that Skippy can charge the same price.

That dimple at the bottom of the peanut butter jar has much to do with a new theory of pricing, one known in psychology literature as coherent arbitrariness. This says that consumers really don’t know what anything should cost. They wander the supermarket aisle in a half conscious daze, judging prices from cues, helpful and otherwise. Coherent arbitrariness is above all a theory of relativity. Buyers are mainly sensitive to relative differences, not absolute prices. The new Skippy jar essentially amounts to a 10 per cent increase in the price of peanut butter. Had they just raised the price 10 per cent (to $3.39, say), shoppers would have noticed and some would have changed brands. According to the theory, the same shopper would be perfectly happy to pay $3.39 for Skippy, just as long as she doesn’t know there’s been an increase.”
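The 10 per cent figure in the excerpt is straightforward arithmetic: the shelf price stays the same while the jar shrinks from 18 to 16.3 ounces, so the price per ounce rises by roughly a tenth. The short Python check below assumes a $3.08 shelf price (implied by the “$3.39, say” figure and used only to make the per-ounce numbers concrete):

```python
old_ounces, new_ounces = 18.0, 16.3  # jar sizes from the excerpt
price = 3.08                         # assumed shelf price, implied by the "$3.39, say" figure

# Same shelf price, smaller jar: the effective per-ounce price goes up.
old_per_oz = price / old_ounces
new_per_oz = price / new_ounces
increase = new_per_oz / old_per_oz - 1

print(f"old: ${old_per_oz:.3f}/oz, new: ${new_per_oz:.3f}/oz")
print(f"effective price increase: {increase:.1%}")  # roughly 10%, matching the excerpt
```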

Okun’s Law

The following extract is taken from The Signal and the Noise by the American statistician Nate Silver. In it he describes the economic relationship between job growth and GDP growth, as first proposed by Arthur Melvin Okun in 1962.

“The American and global economy is always evolving, and the relationships between different economic variables can change over the course of time.

Historically, for instance, there has been a reasonably strong correlation between GDP growth and job growth. Economists refer to this as Okun’s Law. During the Long Boom of 1947 through 1999, the rate of job growth had normally been about half the rate of GDP growth, so if GDP increased by 4% during a year, the number of jobs would increase by about 2%.

The relationship still exists – more growth is certainly better for jobseekers. But its dynamics seem to have changed. After each of the last couple of recessions, considerably fewer jobs were created than would have been expected during the Long Boom years. In the year after the stimulus package was passed in 2009, for instance, GDP was growing fast enough to create about two million jobs according to Okun’s law. Instead, an additional 3.5 million jobs were lost during the period.”
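The rule of thumb in the excerpt is simply that job growth runs at about half the rate of GDP growth. A minimal sketch of that relationship in Python (the 0.5 ratio and the 4%-to-2% example come straight from the excerpt; the function name is an assumption of mine):

```python
def expected_job_growth(gdp_growth_pct, ratio=0.5):
    """Long Boom rule of thumb from the excerpt: job growth is about half of GDP growth."""
    return ratio * gdp_growth_pct

# The excerpt's worked example: 4% GDP growth implies roughly 2% job growth.
print(expected_job_growth(4.0))  # -> 2.0
```

Silver’s point is that the ratio itself has drifted since the Long Boom, so the old rule now overestimates how many jobs a given amount of growth will create.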

Tight coupling

The following excerpt is taken from ‘Adapt: Why Success Always Starts with Failure’ by Tim Harford.

“The defining characteristic of a tightly coupled process is that once it starts, it’s difficult or impossible to stop: a domino-toppling display is not especially complex, but it is tightly coupled. So is a loaf of bread rising in the oven. Harvard University, on the other hand, is not especially tightly coupled, but it is complex. A change in US student visa policy; or a new government scheme to fund research; or the appearance of a fashionable book in economics, or physics, or anthropology; or an internecine academic row – could have unpredictable consequences for Harvard and trigger a range of unexpected responses, but none will spiral out of control quickly enough to destroy the university altogether.”

In complex systems, there are many different ways for things to go wrong. In tightly coupled systems the consequences of something going wrong proliferate throughout the system quickly.

Put simply, tightly coupled systems are susceptible to the domino effect.

Philip Tetlock and the limits of expertise

In his book ‘The Signal and the Noise’, the American statistician and writer Nate Silver references an interesting study conducted by Philip Tetlock.

“The forecasting models published by political scientists in advance of the 2000 presidential election predicted a landslide 11-point victory for Al Gore. George W. Bush won instead. Rather than being an anomalous result, failures like these have been fairly common in political prediction. A long-term study by Philip E. Tetlock of the University of Pennsylvania found that when political scientists claimed that a political outcome had absolutely no chance of occurring, it nevertheless happened about 15 percent of the time.”

Tim Harford expands upon this brief introduction in his book ‘Adapt: Why Success Always Starts with Failure’:
