The three different types of error

Tim Harford writing in his book Adapt:

James Reason, the scholar of catastrophe who uses Nick Leeson and Barings Bank as a case study to help engineers prevent accidents, is careful to distinguish between three different types of error. The most straightforward are slips, when through clumsiness or lack of attention you do something you simply didn’t mean to do. In 2005, a young Japanese trader tried to sell one share at a price of ¥600,000 and instead sold 600,000 shares at the bargain price of ¥1. Traders call these slips ‘fat finger errors’ and this one cost £200 million.

Then there are violations, which involve someone deliberately doing the wrong thing. Bewildering accounting tricks like those employed at Enron, or the cruder fraud of Bernard Madoff, are violations, and the incentives for them are much greater in finance than in industry.

Most insidious are mistakes. Mistakes are things you do on purpose, but with unintended consequences, because your mental model of the world is wrong. When the supervisors at Piper Alpha switched on a dismantled pump, they made a mistake in this sense. Switching on the pump was what they intended, and they followed all the correct procedures. The problem was that their assumption about the pump, which was that it was all in one piece, was mistaken.

Groupthink

Tim Harford writing in his book Adapt:

Irving Janis’s classic analysis of the Bay of Pigs and other foreign policy fiascoes, Victims of Groupthink, explains that a strong team – a ‘kind of family’ – can quickly fall into the habit of reinforcing each other’s prejudices out of simple team spirit and a desire to bolster the group.

This reminds me of the concept of echo chambers, where a person’s beliefs are strengthened when they are exposed only to opinions that align with their own.

To break free of groupthink and echo chambers we need to seek out contradictory, contrarian or disconfirmatory opinions. We need to try to falsify our current position in order to progress towards a truth.

Harford continues:

Janis details the way in which John F. Kennedy fooled himself into thinking that he was gathering a range of views and critical comments. All the while his team of advisors were unconsciously giving each other a false sense of infallibility. Later, during the Cuban Missile Crisis, Kennedy was far more aggressive about demanding alternative options, exhaustively exploring risks, and breaking up his advisory groups to ensure that they didn’t become too comfortable.

Theory of complexity

In his book The Black Swan, the essayist Nassim Nicholas Taleb provides a functional definition of complex domains:

“A complex domain is characterised by the following: there is a great degree of interdependence between its elements, both temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops.”

Nate Silver expands on this brief introduction with a more illustrative description in his book The Signal and the Noise:

“The theory of complexity that the late physicist Per Bak and others developed is different from chaos theory, although the two are often lumped together. Instead, the theory suggests that very simple things can behave in strange and mysterious ways when they interact with one another.

Bak’s favourite example was that of a sandpile on the beach. If you drop another grain of sand onto the pile (…) it can actually do one of three things. Depending on the shape and size of the pile, it might stay more or less where it lands, or it might cascade gently down the small hill towards the bottom of the pile. Or it might do something else: if the pile is too steep, it could destabilise the entire system and trigger a sand avalanche.”

Just imagine the number of different ways that the sandpile could be configured. And just imagine the number of ways the falling grain of sand could hit the pile. Despite being such a simple object, a sandpile admits an innumerable number of possible interactions between its constituent parts. And each potential scenario would have a different result.

But of course a pile of sand containing thousands of irregular grains is complex. A simpler example would be the initial break in a game of pool: sixteen spheres on a flat surface. But still, how many times would you have to break before every ball landed in exactly the same position?

These are complex systems.
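Bak’s sandpile intuition has a well-known computational counterpart, the Bak–Tang–Wiesenfeld model. The sketch below is a minimal, illustrative version (the grid size and toppling threshold are arbitrary choices of mine, not anything from the books): grains land on a grid, and any cell holding four grains topples, passing one grain to each neighbour. Most drops do nothing; occasionally one sets off a long cascade.

```python
import random

SIZE = 20        # grid is SIZE x SIZE -- an arbitrary choice
THRESHOLD = 4    # a cell "topples" once it holds this many grains

grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain(grid):
    """Drop one grain at a random cell, then relax the pile.
    Returns the number of topplings (the avalanche size)."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    avalanche = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue  # already relaxed by an earlier topple
        grid[r][c] -= THRESHOLD
        avalanche += 1
        unstable.append((r, c))  # the cell itself may still be unstable
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                grid[nr][nc] += 1
                unstable.append((nr, nc))
            # grains pushed over the edge simply leave the pile
    return avalanche

sizes = [drop_grain(grid) for _ in range(20_000)]
print("largest avalanche:", max(sizes), "across", len(sizes), "drops")
```

Plotting a histogram of `sizes` shows the signature Bak was after: the vast majority of avalanches are tiny, a few are enormous, and there is no telling in advance which grain will trigger which.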

Whilst Silver is quick to distinguish between complexity and chaos, it’s worth noting that Tim Harford is also keen to make a distinction. In his book Adapt he separates the concepts of complex systems and tightly coupled systems.

To put it simply, complex systems have many possible, hard-to-predict scenarios: some will destabilise the entire system, some won’t. In a tightly coupled system, a failure always threatens to destabilise the whole.

Tight Coupling

The following excerpt is taken from ‘Adapt: Why Success Always Starts with Failure’ by Tim Harford.

“The defining characteristic of a tightly coupled process is that once it starts, it’s difficult or impossible to stop. A domino toppling display is not especially complex, but it is tightly coupled. So is a loaf of bread rising in the oven. Harvard University, on the other hand, is not especially tightly coupled, but is complex. A change in US student visa policy; or a new government scheme to fund research; or the appearance of a fashionable book in economics, or physics, or anthropology; or an internecine academic row – could have unpredictable consequences for Harvard and trigger a range of unexpected responses, but none will spiral out of control quickly enough to destroy the university altogether.”

In complex systems, there are many different ways for things to go wrong. In tightly coupled systems the consequences of something going wrong proliferate throughout the system quickly.

Put simply, tightly coupled systems are susceptible to the domino effect.
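The domino effect is easy to simulate. In the toy sketch below (the component count and coupling probabilities are invented purely for illustration), a chain of components fails in sequence: `coupling` is the chance that each failure drags down the next component. As coupling approaches 1, as in a tightly coupled system, a single failure runs through the whole chain.

```python
import random

def cascade_length(n_components, coupling, rng):
    """Fail component 0, then let the failure propagate down the chain.
    `coupling` is the probability that each failed component also takes
    down the next one (1.0 means perfectly tight coupling)."""
    failed = 1
    while failed < n_components and rng.random() < coupling:
        failed += 1
    return failed

rng = random.Random(42)  # seeded for reproducibility
for coupling in (0.2, 0.9, 1.0):
    runs = [cascade_length(100, coupling, rng) for _ in range(10_000)]
    print(f"coupling={coupling}: average cascade {sum(runs) / len(runs):.1f}")
```

In Harford’s terms, the domino display sits near a coupling of 1, while Harvard sits near 0: individual shocks are absorbed before they can spread far.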

Risk compensation

The following excerpt is taken from ‘Adapt: Why Success Always Starts with Failure’ by Tim Harford.

“There were two ways in which these credit default swaps led to trouble. The first is simply that having insured some of their gambles, the banks felt confident in raising the stakes. Regulators approved; so did the credit rating agencies responsible for evaluating these risks; so did most banks’ shareholders. John Lanchester, a chronicler of the crisis, quips, ‘It’s as if people used the invention of seatbelts as an opportunity to take up drunk-driving.’ Quite so – and in fact there is evidence that seat belts and airbags do indeed encourage drivers to behave more dangerously. Psychologists call this risk compensation. The entire point of the CDS was to create a margin of safety that would let banks take more risks. As with safety belts and dangerous drivers, innocent bystanders were among the casualties.”

I realise the extract is a little opaque without its surrounding context.

So I’ll try to boil its central concept down.

When the level of risk decreases, people compensate by increasing the size of the gamble.
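One way to make that concrete is the risk-homeostasis idea: people behave as if they have a target level of expected harm, so when a safety measure shrinks the harm of each incident, they scale their exposure back up until the expected harm is where it was before. The function and numbers below are my own invention for illustration, not anything from Harford.

```python
def compensated_exposure(exposure, harm_per_incident, safety_factor):
    """Toy risk-homeostasis model: return the new exposure an actor
    chooses so that expected harm stays at its old level after a
    safety measure scales per-incident harm by `safety_factor`."""
    target_harm = exposure * harm_per_incident    # the level they tolerated before
    new_harm = harm_per_incident * safety_factor  # each incident now hurts less
    return target_harm / new_harm

# If seatbelts (hypothetically) halve the harm of each crash...
print(compensated_exposure(exposure=10, harm_per_incident=1.0, safety_factor=0.5))
# → 20.0: in this toy model the driver doubles their risky driving
```

This is exactly Lanchester’s seatbelt quip in arithmetic form: the safety margin is spent on bigger gambles rather than banked as safety.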

Asch Conformity Experiments

In the 1950s, the psychologist Solomon Asch conducted a series of experiments studying if and how individuals yield to or defy the beliefs and opinions of a majority group.

Tim Harford outlines this experiment in his book ‘Adapt: Why Success Always Starts with Failure’.

“The classic Asch experiment sat several young men around a table and showed them a pair of cards, one with a single line, and one with three lines of obviously different lengths, labelled A, B and C. The experimenter asked subjects to say which of the three lines was the same length as the single line on the other card. This was a trivially easy task, but there was a twist: all but one of the people sitting around the table were actors recruited by Asch. As they went around the table, each one called out the same answer – a wrong answer. By the time Asch turned to the real experimental subject, the poor man would be baffled. Frequently, he would fall in with the group, and later interviews revealed that this was often because he genuinely believed his eyes were deceiving him. As few as three actors were enough to create this effect.”


William Poundstone offers some more precise findings in his book Priceless:

“Overall, subjects gave a wrong answer 32 per cent of the time. Seventy-four per cent gave the wrong answer at least once, and a sizeable minority caved in to peer pressure three-quarters of the time. That’s amazing when you consider how simple the exercise was. In a control group, without accomplices, virtually everyone gave the right answer all the time.”

A further experiment, however, sheds light on how this peer pressure can be released. Harford continues:

“Less famous but just as important is Asch’s follow-up experiment, in which one of the actors gave a different answer from the rest. Immediately, the pressure to conform was released. Experimental subjects who gave the wrong answer when outnumbered ten to one happily dissented and gave the right answer when outnumbered nine to two. Remarkably, it didn’t even matter if the fellow dissenter gave the right answer himself. As long as the answer was different from the group, that was sufficient to free Asch’s poor subjects from their socially imposed cognitive straitjackets.”

The experiment shows that people often feel a pressure to conform to the wider group, even when the majority is clearly misguided.

It is not difficult to see how an environment devoid of opposition views can create an echo chamber of escalating and self-reinforcing beliefs.

Just one opinion that opposes the majority view is enough to release the peer pressure. One dissenting opinion seems to give others permission to table their true beliefs.

The experiment teaches us to encourage dissenting views, even if they are misguided. Dissenting views create an environment in which others are more comfortable being forthcoming with their own ideas.

The limits of expertise

In his book ‘The Signal and the Noise’, the American statistician and writer Nate Silver references an interesting study conducted by Philip Tetlock.

“The forecasting models published by political scientists in advance of the 2000 presidential election predicted a landslide 11-point victory for Al Gore. George W. Bush won instead. Rather than being an anomalous result, failures like these have been fairly common in political prediction. A long-term study by Philip E. Tetlock of the University of Pennsylvania found that when political scientists claimed that a political outcome had absolutely no chance of occurring, it nevertheless happened about 15 percent of the time.”
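Tetlock’s finding is essentially a statement about calibration: how often do events actually happen compared with the probability the forecaster stated? A calibration check takes only a few lines of code. The data below is invented to echo the quoted 15 per cent figure; it is not Tetlock’s data.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, outcome) pairs by stated probability
    and return the observed frequency of the event for each group."""
    buckets = defaultdict(list)
    for stated_p, happened in forecasts:
        buckets[stated_p].append(happened)
    return {p: sum(outcomes) / len(outcomes)
            for p, outcomes in sorted(buckets.items())}

# Invented data: 100 "impossible" calls of which 15 happened anyway,
# and 100 calls of 90% of which 70 happened.
forecasts = ([(0.0, True)] * 15 + [(0.0, False)] * 85 +
             [(0.9, True)] * 70 + [(0.9, False)] * 30)

print(calibration_table(forecasts))  # → {0.0: 0.15, 0.9: 0.7}
```

A well-calibrated forecaster would see the observed frequencies sit close to the stated probabilities; on Tetlock’s evidence, the experts’ “no chance” events did not.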

Tim Harford expands upon this brief introduction in his book ‘Adapt: Why Success Always Starts with Failure’:

“Even deep expertise is not enough to solve today’s complex problems.

Perhaps the best illustration of this comes from an extraordinary two-decade investigation into the limits of expertise, begun in 1984 by a young psychologist called Philip Tetlock. He was the most junior member of a committee of the National Academy of Sciences charged with working out what the Soviet response might be to the Reagan administration’s hawkish stance in the Cold War. Would Reagan call the bluff of a bully or was he about to provoke a deadly reaction? Tetlock canvassed every expert he could find. He was struck by the fact that, again and again, the most influential thinkers on the Cold War flatly contradicted one another. We are so used to talking heads disagreeing that perhaps this doesn’t seem surprising. But when we realise that the leading experts cannot agree on the most basic level about the key problem of the age, we begin to understand that this kind of expertise is far less useful than we might hope.

Tetlock didn’t leave it at that. He worried away at this question of expert judgement for twenty years. He rounded up nearly three hundred experts – by which he meant people whose job it was to comment or advise on political and economic trends. They were a formidable bunch: political scientists, economists, lawyers and diplomats. There were spooks and think-tankers, journalists and academics. Over half of them had PhDs; almost all had post-graduate degrees. And Tetlock’s method for evaluating the quality of their expert judgement was to pin the experts down: he asked them to make specific, quantifiable forecasts – answering 27,450 of his questions between them – and then waited to see whether their forecasts came true. They rarely did. The experts failed, and their failure to forecast the future is a symptom of their failure to understand fully the complexities of the present.”

In highly complex circumstances, there is a limit to the value of expertise. A certain amount quickly improves one’s success rate when predicting the future, but the rate of improvement soon plateaus. Before long, big increases in expertise yield only small gains in predictive success.

For wildly complex situations, the most successful predictors incorporate ideas from different disciplines, pursue multiple approaches at the same time, are willing to acknowledge mistakes, and rely more on observation than on theory.