The two types of reputation: capability and character

Ian Leslie writing in the New Statesman:

The Reputation Game is written by two people from the PR business, David Waller and Rupert Younger. They introduce a useful distinction between two types of reputation: capability and character. The first refers to competence in a specific task, such as cooking a meal, providing mortgages, or making aircraft engines. The second refers to moral or social qualities. Someone can have a great reputation for competence, while at the same time being regarded as slippery or unpleasant. Uber is good at what it does, but you wouldn’t invite it home to meet your mother.

Capability reputations are sticky: they take a long time to wash away. An author who writes great novels early in his career can produce many mediocre ones before people start to question if he is any good (naming no names, Salman Rushdie). Character reputations are more flammable, especially in a world where social media can instantly detonate bad news. A strong reputation for competence defends you against character problems, but only for so long, as Uber is finding out. When your character reputation is destroyed, competence becomes immaterial.

Black Swans

The Black Swan is a book by the author and essayist Nassim Nicholas Taleb.

It’s a far-reaching work that calls into question the strength of existing information and the validity of its use in forecasting changes.

The book’s subject matter is so broad, and its implications so varied, that no summary could ever really do it justice.

As such, I shall only attempt to distill its most central theme. The opening few pages offer a good place to start:

“Before the discovery of Australia, people in the whole world were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence. The sighting of the first black swan might have been an interesting surprise for a few ornithologists (and others extremely concerned with the colouring of birds), but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black bird.”

Many white swan sightings led to a general theory that all swans were white. Every subsequent sighting confirmed and reinforced this belief. This continued for thousands of years, with each new sighting adding a new data point in support of the theory. The theory grew stronger and belief in it trended towards the absolute.

And then a black swan was spotted.

That one sighting disproved a theory built on millennia of confirmatory evidence.

The value of one disconfirmatory sighting was greater than the value of a million confirmatory sightings.

The implication is significant:

“Black Swan logic makes what you don’t know far more relevant than what you do know.”

Clearly we should embrace the fact that we don’t know everything. We should try to uncover the unknown unknowns. We should hunt for information which falsifies our beliefs and be wary of that which confirms them.

In other words, we should try to find the ‘black swans’.

Taleb defines these ‘black swans’ with the following three criteria:

“First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

Clearly, despite our best efforts to seek them out, knowledge-changing ‘black swans’ are inherently extremely difficult to uncover or predict.

This is known as the problem of induction. Confirmatory evidence lulls us into a false sense of security. It builds an unjustified sense of surety. We believe that what has happened before will continue to happen in the future. The more something happens in the past, the more we believe it will continue to happen. The turkey who gets fed every morning becomes increasingly confident that this will continue to occur. All evidence points in the same direction. And it does, right up until it doesn’t.

This is why even experts fail to forecast with any real degree of accuracy. They construct their predictions from a history of stable, consistent and well-aligned data, completely oblivious to a huge disruptive event lurking just around the corner.

For example, how many experts correctly foresaw the 1987 stock market crash? How many forecasts failed to account for such a seismic event?

And when one comes, the experts post-rationalise it. They claim it was predictable and expected. They fall for the narrative fallacy. They reduce a chaotic, tightly coupled and complex set of events down into a simple and digestible, but woefully inadequate, explanation.

To avoid black swan traps we need to rethink how we assess information, construct opinions, and make predictions. No small task. But we can start with a basic principle of critical thinking: know that you don’t know.

The Bogusky technique

Faris Yakob, writing for WARC:

“When Alex Bogusky was creative director of the agency that bears his name, he insisted on being read the press release before seeing the creative work. If the press release was uninspiring, he would refuse to look at the work. Bogusky understood that the role of advertising is to make things famous.

In the ad-saturated environment of the noughties, the decade of which he was crowned creative director, the best way to ensure that was to make advertising generate its own PR.”

This reminds me of a similar technique employed by Amazon.

“Before Amazon developers write a single line of code, they have to write the hypothetical product’s press release and FAQ announcement.

Amazon uses this “working backwards” approach because it forces the team to get the most difficult discussions out of the way early (…). They need to fully understand what the product’s value proposition will be and how it will be pitched to customers. If the team can’t come up with a compelling press release, the product probably isn’t worth making.

It also helps with more rapid iteration and keeps the team on track, Jassy explained.

Jassy’s AWS team isn’t the only one that uses this atypical product approach: It’s institutionalized throughout all of Amazon, according to Brad Stone’s book The Everything Store”.

Sometimes bad ideas can be dressed up to produce passable work. But the underlying idea is still broken.

Starting with the PR or press release separates the idea from the execution. It forces you to assess the former without the distraction of the latter.

If you can’t excite your audience by communicating your idea in simple language, go back to the drawing board.

The three types of awareness

Brian Brydon writing on BBDO’s Comms Planning Medium channel:

Recall (a.k.a., unaided awareness, spontaneous awareness):

The percentage of your audience that can name your brand when prompted with the brand category, with no limit to the number of brands they can name (e.g., Toyota Prius in the eco-friendly cars category).

Recognition (a.k.a., aided awareness):

The percentage of your audience that says they know your brand after being prompted with an explicit brand cue (i.e., brand name or logo).

Top of Mind Awareness:

The percentage of your audience that lists your brand first when prompted with the brand category.

I think that, as usual, our industry has overcomplicated awareness. As you can see, each of the three metrics has multiple names. I prefer to keep it simple. I only ever use one name for each: unaided awareness, aided awareness and top of mind awareness. I propose that you do the same.
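Keeping to those three names, the definitions above reduce to simple proportions. Here is a minimal sketch of how they might be computed, using invented survey responses purely for illustration:

```python
# Hypothetical survey data for one brand category. Each respondent gives
# an ordered list of brands they named unaided, plus whether they
# recognised the brand when shown its name or logo. All values invented.
respondents = [
    {"unaided_mentions": ["Toyota Prius", "Nissan Leaf"], "recognised": True},
    {"unaided_mentions": ["Tesla Model 3"], "recognised": True},
    {"unaided_mentions": [], "recognised": True},
    {"unaided_mentions": ["Toyota Prius"], "recognised": True},
    {"unaided_mentions": [], "recognised": False},
]

brand = "Toyota Prius"
n = len(respondents)

# Unaided awareness: named the brand at all, with no brand cue given.
unaided = sum(brand in r["unaided_mentions"] for r in respondents) / n

# Top of mind awareness: named the brand FIRST in the unaided list.
top_of_mind = sum(
    bool(r["unaided_mentions"]) and r["unaided_mentions"][0] == brand
    for r in respondents
) / n

# Aided awareness: said they knew the brand after an explicit brand cue.
aided = sum(r["recognised"] for r in respondents) / n

print(f"unaided: {unaided:.0%}, top of mind: {top_of_mind:.0%}, aided: {aided:.0%}")
# → unaided: 40%, top of mind: 40%, aided: 80%
```

Note that top of mind awareness is always a subset of unaided awareness, which is in turn (in practice) a subset of aided awareness, so the three numbers should never decrease as the prompt gets stronger.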

Anchoring and adjustment

William Poundstone writing in his book Priceless:

Daniel Kahneman and Amos Tversky “used one piece of apparatus, a carnival-style wheel of fortune marked with numbers up to 100. A group of university students watched as the wheel was spun to select a random number. You can play along – imagine that the wheel is spinning right now and the number is… 65. Now answer this two-part question:

(a) Is the percentage of African nations in the United Nations higher or lower than 65 (the number that just came up on the wheel)?

(b) What is the percentage of African nations in the United Nations?

Like many experiments, and some wheels of fortune, this one was rigged. The wheel was designed to produce one of only two numbers, 10 and 65. This rigging was done only to simplify analysis of the results. In any event, Tversky and Kahneman found that the allegedly random number affected the answers to the second question. The effect was huge.

When the wheel stopped on 10, the average estimate of the proportion of African nations in the UN was 25 percent. But when the wheel of fortune number was 65, the average guess was 45 percent. The latter estimate was almost twice the first. The only difference was that the subjects had been exposed to a different ‘random’ number that they knew to be meaningless.

Tversky and Kahneman used the term ‘anchoring and adjustment’ for this. In their now-classic 1974 Science article, ‘Judgement Under Uncertainty: Heuristics and Biases,’ they theorised that an initial value (the ‘anchor’) serves as a mental benchmark or starting point for estimating an unknown quantity. Here, the wheel of fortune number was the anchor. The first part of the question had the subjects compare the anchor to the quantity to be estimated. Tversky believed that the subjects then mentally adjusted the anchor upward or downward to arrive at their answers to the second part of the question. This adjustment was usually inadequate. The answer ended up being closer to the anchor than it should be. To someone inspecting only the final outcomes, it’s as if the anchor exerts a magnetic attraction, pulling estimates closer to itself.

By the way, how did your answer compare to the 65-group’s average of 45 percent? In case you’re wondering, the correct fraction of African UN member nations is currently 23 percent.”

Nassim Nicholas Taleb summarises this experiment and provides some further examples in his book The Black Swan:

“Kahneman and Tversky had their subjects spin a wheel of fortune. The subjects first looked at the number on the wheel, which they knew was random, then they were asked to estimate the number of African countries in the United Nations. Those who had a low number on the wheel estimated a low number of African nations; those with a high number produced a high estimate.

Similarly, ask someone to provide you with the last four digits of his Social Security number. Then ask him to estimate the number of dentists in Manhattan. You will find that by making him aware of the four-digit number, you will elicit an estimate that is correlated with it.

We use reference points in our heads … and start building beliefs around them because less mental effort is needed to compare an idea to a reference point than to evaluate it in the absolute.”

Geographic profiling and distance decay

Alec Wilkinson writing for the New Yorker:

By reading meaning into the geography of victims and their killers, Hargrove is unwittingly invoking a discipline called geographic profiling, which is exemplified in the work of Kim Rossmo, a former policeman who is now a professor in the School of Criminal Justice at Texas State University. In 1991, Rossmo was on a train in Japan when he came up with an equation that can be used to predict where a serial killer lives, based on factors such as where the crimes were committed and where the bodies were found. As a New York City homicide detective told me, “Serial killers tend to stick to a killing field. They’re hunting for prey in a concentrated area, which can be defined and examined.” Usually, the hunting ground will be far enough from their homes to conceal where they live, but not so far that the landscape is unfamiliar. The farther criminals travel, the less likely they are to act, a phenomenon that criminologists call distance decay.
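The core intuition behind distance decay can be sketched in a few lines. The following is a deliberately simplified illustration, not Rossmo’s actual equation: it assumes a short “buffer zone” immediately around home where offending is unlikely, then an exponential fall-off with distance. The parameter values are invented for demonstration.

```python
import math

# Simplified distance-decay weighting (illustrative only; NOT Rossmo's
# formula). Within the buffer zone around home the weight is zero --
# offenders avoid acting where they might be recognised -- and beyond it
# the likelihood of acting decays exponentially with distance.
def decay_weight(distance_km, buffer_km=1.0, decay_rate=0.5):
    if distance_km < buffer_km:
        return 0.0
    return math.exp(-decay_rate * (distance_km - buffer_km))

for d in [0.5, 1.0, 3.0, 10.0]:
    print(f"{d:5.1f} km -> weight {decay_weight(d):.3f}")
```

Geographic profiling inverts this logic: given the locations where crimes occurred, it scores each candidate home location by how well the observed distances fit a decay curve like this one.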

The innovative and the incremental

Morgan Housel writing for Collaborative Fund:

Things that are instantly adored are usually just slight variations over existing products. We love them because they’re familiar. The most innovative products – the ones that truly change the world – are almost never understood at first, even by really smart people.

It happened with the telephone. Alexander Graham Bell tried to sell his invention to Western Union, which quickly replied:

This ‘telephone’ has too many shortcomings to be seriously considered as a practical form of communication. The device is inherently of no value to us. What use could this company make of an electrical toy?

It happened with the car. Twenty years before Henry Ford convinced the world he was onto something, Congress published a memo, warning:

Horseless carriages propelled by gasoline might attain speeds of 14 or even 20 miles per hour. The menace to our people of vehicles of this type hurtling through our streets and along our roads and poisoning the atmosphere would call for prompt legislative action. The cost of producing gasoline is far beyond the financial capacity of private industry… In addition the development of this new power may displace the use of horses, which would wreck our agriculture.

It happened with the index fund – easily the most important financial innovation of the last half-century. John Bogle launched the first index fund in 1975. No one paid much attention to it for the next two decades. It started to gain popularity, an inch at a time, in the 1990s. Then, three decades after inception, the idea spread like wildfire.

A worthwhile reminder for an industry that conflates the innovative and the incremental.

Confirmation bias

Nassim Nicholas Taleb, writing in The Black Swan:

“Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias.

The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented the subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule. Once confident with their answers, the subjects would formulate the rule. … The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it because in order to do so they had to offer a series in descending order (which the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up.”

Peter Cathcart Wason’s 2-4-6 problem demonstrated that we have a tendency to seek out evidence which confirms our ideas rather than that which falsifies them.
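The structure of the task is simple enough to sketch in code. A minimal simulation, with the subject’s over-specific hypothesis invented for illustration, shows why confirmatory tests teach you nothing:

```python
# A sketch of Wason's 2-4-6 task. The experimenter's hidden rule is
# simply "three numbers in ascending order".
def hidden_rule(a, b, c):
    return a < b < c

# A subject who believes the rule is "even numbers increasing by 2"
# (a hypothetical over-specific guess) and only proposes triples
# that would CONFIRM that hypothesis:
confirming_tests = [(2, 4, 6), (10, 12, 14), (100, 102, 104)]
print([hidden_rule(*t) for t in confirming_tests])  # → [True, True, True]

# Every "yes" is consistent with BOTH rules, so the subject learns
# nothing. Only a triple the subject's hypothesis predicts should fail
# can separate the two rules:
print(hidden_rule(1, 2, 3))  # → True: falsifies "even, increasing by 2"
print(hidden_rule(6, 4, 2))  # → False: descending order breaks the rule
```

The disconfirming tests are the informative ones: a single “yes” to (1, 2, 3) rules out the subject’s hypothesis, while a thousand confirmations of even triples never would.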

Alvin Toffler echoed this sentiment, and borrowed a fair amount from Leon Festinger’s concept of cognitive dissonance, ten years later.

Nate Silver writing in The Signal and The Noise:

“Alvin Toffler, writing in the book Future Shock in 1970, predicted some of the consequences of what he called “information overload”. He thought our defence mechanism would be to simplify the world in ways that confirmed our biases, even as the world itself was growing more diverse and more complex.”

Silver later sums up the bias succinctly:

“The instinctual shortcut that we take when we have ‘too much information’ is to engage with it selectively, picking out the parts we like and ignoring the remainder.”

Extreme aversion

William Poundstone writing in his book Priceless:

“Extending the work of Huber and Puto, a 1992 paper by [Amos] Tversky and Itamar Simonson laid down two commandments of manipulative retail. One is extreme aversion. They showed through surveys (involving Minolta cameras, Cross pens, microwave ovens, tyres, computers, and kitchen roll) that when consumers are uncertain, they shy away from the most expensive item offered or the least expensive; the highest quality or the lowest quality; the biggest or the smallest. Most favour something in the middle. Ergo, the way to sell a lot of £500 shoes is to display some £800 shoes next to them.”

Products that don’t sell affect those that do.

Zipf’s law

Nassim Nicholas Taleb writing in his book The Black Swan:

“During the 1940s, a Harvard linguist, George Zipf, examined the properties of language and came up with an empirical regularity now known as Zipf’s law, which, of course, is not a law (and if it were, it would not be Zipf’s). It is just another way to think about the process of inequality. The mechanisms he described were as follows: The more you use a word, the less effortful you will find it to use that word again, so you borrow words from your private dictionary in proportion to their past use. This explains why out of the sixty thousand main words in English, only a few hundred constitute the bulk of what is used in writings, and even fewer appear regularly in conversation.”
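The regularity itself says that the frequency of the word of rank r is roughly proportional to 1/r. Under that assumption alone, a short sketch shows how a small core of words comes to dominate a sixty-thousand-word vocabulary:

```python
# Zipf's law sketch: assume the word of rank r appears with frequency
# proportional to 1/r, across a vocabulary of 60,000 words. The figures
# below follow from that idealised assumption, not from a real corpus.
N = 60_000
weights = [1 / r for r in range(1, N + 1)]
total = sum(weights)

def share_of_top(k):
    """Fraction of all word usage accounted for by the k highest-ranked words."""
    return sum(weights[:k]) / total

print(f"top   100 words: {share_of_top(100):.0%} of all usage")
print(f"top 1,000 words: {share_of_top(1000):.0%} of all usage")
```

Because the sum of 1/r grows only logarithmically, the first hundred ranks capture nearly half of all usage under this model, which is exactly the “few hundred words constitute the bulk of what is used” pattern Taleb describes.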