The two types of reputation: capability and character

Ian Leslie, writing in the New Statesman:

The Reputation Game is written by two people from the PR business, David Waller and Rupert Younger. They introduce a useful distinction between two types of reputation: capability and character. The first refers to competence in a specific task, such as cooking a meal, providing mortgages, or making aircraft engines. The second refers to moral or social qualities. Someone can have a great reputation for competence, while at the same time being regarded as slippery or unpleasant. Uber is good at what it does, but you wouldn’t invite it home to meet your mother.

Capability reputations are sticky: they take a long time to wash away. An author who writes great novels early in his career can produce many mediocre ones before people start to question if he is any good (naming no names, Salman Rushdie). Character reputations are more flammable, especially in a world where social media can instantly detonate bad news. A strong reputation for competence defends you against character problems, but only for so long, as Uber is finding out. When your character reputation is destroyed, competence becomes immaterial.

Black Swans

The Black Swan is a book by the author and essayist Nassim Nicholas Taleb.

It’s a far-reaching work that calls into question the reliability of existing knowledge and the validity of using it to forecast change.

The book’s subject matter is so broad, and its implications so varied, that no summary could ever really do it justice.

As such, I shall only attempt to distill its most central theme. The opening few pages offer a good place to start:

“Before the discovery of Australia, people in the whole world were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence. The sighting of the first black swan might have been an interesting surprise for a few ornithologists (and others extremely concerned with the colouring of birds), but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black bird.”

Many white swan sightings led to a general theory that all swans were white. Every subsequent sighting confirmed and reinforced this belief. This continued for thousands of years, with each new sighting adding another data point in support of the theory. The theory grew stronger and belief in it trended towards the absolute.

And then a black swan was spotted.

That one sighting disproved a theory built on millennia of confirmatory evidence.

The value of one disconfirmatory sighting was greater than the value of a million confirmatory sightings.

The implication is significant:

“Black Swan logic makes what you don’t know far more relevant than what you do know.”

Clearly we should embrace the fact that we don’t know everything. We should try to uncover the unknown unknowns. We should hunt for information that falsifies our beliefs and be wary of that which confirms them.

In other words, we should try to find the ‘black swans’.

Taleb defines these ‘black swans’ with the following three criteria:

“First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

Clearly, despite our best efforts to seek them out, these knowledge-changing ‘black swans’ are inherently difficult to uncover or predict.

This is known as the problem of induction. Confirmatory evidence lulls us into a false sense of security. It builds an unjustified sense of surety. We believe that what has happened before will continue to happen in the future. The more something happens in the past, the more we believe it will continue to happen. The turkey who gets fed every morning becomes increasingly confident that this will continue to occur. All evidence points in the same direction. And it does, right up until it doesn’t.
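The turkey’s misplaced confidence can be sketched in a few lines of Python. This is purely illustrative: the confidence estimator used here (Laplace’s rule of succession) is my own choice for the sketch, not something Taleb specifies.

```python
# Illustrative sketch (not from the book): Taleb's inductive turkey.
# Each confirmatory feeding nudges the turkey's confidence that feeding
# will continue. The estimator (Laplace's rule of succession) is an
# assumption made for illustration.

def confidence(days_fed):
    """Confidence that tomorrow brings food, given days_fed confirmations."""
    return (days_fed + 1) / (days_fed + 2)

for day in (1, 10, 100, 1000):
    print(f"after day {day:4d}: confidence = {confidence(day):.4f}")

# Confidence trends towards certainty with every confirmatory data point --
# yet a single disconfirming day (the one that matters most to the turkey)
# invalidates the theory all at once.
```

The point of the sketch is that the estimate only ever moves in one direction while the evidence stays confirmatory; nothing in the data warns of the discontinuity.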

This is why even experts fail to forecast with any real degree of accuracy. They construct their predictions from a history of stable, consistent and well-aligned data, oblivious to the huge disruptive event lurking just around the corner.

For example, how many experts correctly foresaw the 1987 stock market crash? How many predictions failed to account for such a seismic event?

And when a black swan does arrive, the experts post-rationalise it. They claim it was predictable and expected. They fall for the narrative fallacy. They reduce a chaotic, tightly coupled and complex set of events down into a simple and digestible, but woefully inadequate, explanation.

To avoid black swan traps we need to rethink how we assess information, construct opinions, and make predictions. No small task. But we can start with a basic principle of critical thinking: know that you don’t know.

Kaizen and the five whys

The following is taken from “Stop problem solving” by Gareth Kay.

“As part of its effort to reinvigorate itself, Toyota introduced the approach of kaizen (simply meaning ‘change for better’). Overall, this was about ensuring continuous improvement but one of its key tenets was the Five Whys technique. Taiichi Ohno, the architect of the Toyota Production System in the 1950s, encouraged his team to “observe the production floor without preconception. Ask ‘why’ five times about every matter … by repeating why five times, the nature of the problem as well as its solution becomes clear.” He goes on to offer the example of a robot stopping. By asking why five times, the problem to be solved becomes clearer: no filter on the oil pump, rather than an overloaded circuit to which initial analysis would point. The ability to ask why until you can ask why no more is an incredibly important skill we forget far too often. When we do this, we begin to find the real, root problem we need to solve, rather than the symptom that is far too frequently the result of the typical problem-solving approach.”
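Ohno’s robot example can be sketched as a simple why-chain. Note that the excerpt names only the first symptom (an overloaded circuit) and the root cause (no filter on the oil pump); the intermediate causes below are hypothetical fillers added for illustration.

```python
# A minimal sketch of the Five Whys chain for Ohno's stopped robot.
# Only the first and last links come from the excerpt; the middle links
# are hypothetical, invented to complete the illustration.

causes = {
    "robot stopped": "circuit overloaded",
    "circuit overloaded": "bearing insufficiently lubricated",          # assumed
    "bearing insufficiently lubricated": "oil pump not circulating",    # assumed
    "oil pump not circulating": "pump intake clogged with shavings",    # assumed
    "pump intake clogged with shavings": "no filter on the oil pump",
}

def five_whys(symptom, causes, depth=5):
    """Follow the why-chain up to `depth` times; return the deepest cause found."""
    current = symptom
    for _ in range(depth):
        deeper = causes.get(current)
        if deeper is None:
            break  # we can ask why no more
        print(f"Why {current!r}? Because: {deeper}")
        current = deeper
    return current

root = five_whys("robot stopped", causes)
print("Root cause:", root)  # -> Root cause: no filter on the oil pump
```

The design point is that each “why” replaces the current symptom with its cause; stopping at the first answer would have us fixing the circuit rather than fitting a filter.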