The COG model of human motivation

Matt Willifer writing for APG:

COG has used neuroscience to identify the six – and only six – fundamental human motivations. That is, the real emotional truths that get people to actually do something. If your brand can powerfully play to one of these then you’re in a good place.

  • Security: care, trust, closeness, security, warmth
  • Enjoyment: relaxation, fun, openness, pleasure
  • Excitement: vitality, fun, curiosity, creativity, change
  • Adventure: freedom, courage, rebellion, discovery, risk
  • Autonomy: pride, success, power, superiority, recognition
  • Discipline: precision, order, logic, reason

The pratfall effect

Richard Shotton writing on Mumbrella:

The power of flaws was first discovered in 1966 by Harvard University psychologist Elliot Aronson. In his experiment Aronson recorded an actor answering a series of quiz questions. In one strand of the experiment, the actor – armed with the right responses – answers 92% of the questions correctly. After the quiz, the actor then pretends to spill a cup of coffee over himself (a small blunder, or pratfall).

The recording was played to a large sample of students, who were then asked how likeable the contestant was. However, Aronson split the students into cells and played them different versions: one with the spillage included and one without. The students found the clumsy contestant more likeable.

Aronson called this insight the pratfall effect.

The two types of reputation: capability and character

Ian Leslie writing in the New Statesman:

The Reputation Game is written by two people from the PR business, David Waller and Rupert Younger. They introduce a useful distinction between two types of reputation: capability and character. The first refers to competence in a specific task, such as cooking a meal, providing mortgages, or making aircraft engines. The second refers to moral or social qualities. Someone can have a great reputation for competence, while at the same time being regarded as slippery or unpleasant. Uber is good at what it does, but you wouldn’t invite it home to meet your mother.

Capability reputations are sticky: they take a long time to wash away. An author who writes great novels early in his career can produce many mediocre ones before people start to question if he is any good (naming no names, Salman Rushdie). Character reputations are more flammable, especially in a world where social media can instantly detonate bad news. A strong reputation for competence defends you against character problems, but only for so long, as Uber is finding out. When your character reputation is destroyed, competence becomes immaterial.

Anchoring and adjustment

William Poundstone writing in his book Priceless:

Daniel Kahneman and Amos Tversky “used one piece of apparatus, a carnival-style wheel of fortune marked with numbers up to 100. A group of university students watched as the wheel was spun to select a random number. You can play along – imagine that the wheel is spinning right now and the number is… 65. Now answer this two-part question:

(a) Is the percentage of African nations in the United Nations higher or lower than 65 (the number that just came up on the wheel)?

(b) What is the percentage of African nations in the United Nations?

Like many experiments, and some wheels of fortune, this one was rigged. The wheel was designed to produce one of only two numbers, 10 and 65. This rigging was done only to simplify analysis of the results. In any event, Tversky and Kahneman found that the allegedly random number affected the answers to the second question. The effect was huge.

When the wheel stopped on 10, the average estimate of the proportion of African nations in the UN was 25 percent. But when the wheel of fortune number was 65, the average guess was 45 percent. The latter estimate was almost twice the first. The only difference was that the subjects had been exposed to a different ‘random’ number that they knew to be meaningless.

Tversky and Kahneman used the term ‘anchoring and adjustment’ for this. In their now classic 1974 Science article, ‘Judgment Under Uncertainty: Heuristics and Biases,’ they theorised that an initial value (the ‘anchor’) serves as a mental benchmark or starting point for estimating an unknown quantity. Here, the wheel of fortune number was the anchor. The first part of the question had the subjects compare the anchor to the quantity to be estimated. Tversky believed that the subjects then mentally adjusted the anchor upward or downward to arrive at their answers to the second part of the question. This adjustment was usually inadequate. The answer ended up being closer to the anchor than it should be. To someone inspecting only the final outcomes, it’s as if the anchor exerts a magnetic attraction, pulling estimates closer to itself.

By the way, how did your answer compare to the 65-group’s average of 45 percent? In case you’re wondering, the correct fraction of African UN member nations is currently 23 percent.”

Nassim Nicholas Taleb summarises this experiment and provides some further examples in his book The Black Swan:

“Kahneman and Tversky had their subjects spin a wheel of fortune. The subjects first looked at the number on the wheel, which they knew was random, then they were asked to estimate the number of African countries in the United Nations. Those who had a low number on the wheel estimated a low number of African nations; those with a high number produced a high estimate.

Similarly, ask someone to provide you with the last four digits of his Social Security number. Then ask him to estimate the number of dentists in Manhattan. You will find that by making him aware of the four-digit number, you will elicit an estimate that is correlated with it.

We use reference points in our heads … and start building beliefs around them because less mental effort is needed to compare an idea to a reference point than to evaluate it in the absolute.”
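The “insufficient adjustment” mechanism is easy to caricature in code. The sketch below is a toy model, not the study’s method: the unanchored prior, its spread, and the anchor weight are all invented numbers, chosen only to show how any non-zero pull towards a meaningless anchor separates the two groups’ averages.

```python
import random

# Toy model of anchoring: assume each subject blends a noisy unanchored
# belief with the anchor. PRIOR_MEAN, PRIOR_SD and ANCHOR_WEIGHT are
# illustrative assumptions, not figures from Tversky and Kahneman's study.
PRIOR_MEAN = 35      # hypothetical average unanchored guess, in percent
PRIOR_SD = 10        # hypothetical spread of individual beliefs
ANCHOR_WEIGHT = 0.4  # hypothetical pull of the 'meaningless' number

def anchored_estimate(anchor):
    """Adjust away from the anchor, but not far enough."""
    prior = random.gauss(PRIOR_MEAN, PRIOR_SD)
    return (1 - ANCHOR_WEIGHT) * prior + ANCHOR_WEIGHT * anchor

for anchor in (10, 65):
    estimates = [anchored_estimate(anchor) for _ in range(10_000)]
    mean = sum(estimates) / len(estimates)
    print(f"anchor {anchor:>2}: mean estimate {mean:.0f}%")
```

With these made-up parameters the group means land near 25 percent and 47 percent, the same direction and rough size of split as the 25/45 result Poundstone reports: any weight on the anchor at all drags the two groups apart.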

Confirmation bias

Nassim Nicholas Taleb, writing in The Black Swan:

“Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias.

The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented the subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule. Once confident with their answers, the subjects would formulate the rule. … The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it, because in order to do so they had to offer a series in descending order (which the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up.”

Peter Cathcart Wason’s 2-4-6 problem demonstrated that we have a tendency to seek out evidence which confirms our ideas rather than that which falsifies them.
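The 2-4-6 task is simple enough to reproduce directly. A minimal sketch follows; the subject’s hypothesis and the test triples are my own illustrations, not Wason’s.

```python
def hidden_rule(a, b, c):
    """Wason's actual rule: any numbers in ascending order, nothing more."""
    return a < b < c

# A typical subject hypothesises "numbers increasing by two" and tests
# only triples the hypothesis predicts will pass -- pure confirmation.
confirming_tests = [(8, 10, 12), (1, 3, 5), (20, 22, 24)]

# A falsifying strategy also tests triples the hypothesis predicts will FAIL.
falsifying_tests = [(1, 2, 10), (3, 2, 1)]

for triple in confirming_tests + falsifying_tests:
    answer = "yes" if hidden_rule(*triple) else "no"
    print(triple, "->", answer)

# The confirming triples all come back "yes", which is consistent with the
# wrong hypothesis. (1, 2, 10) -> "yes" refutes "increasing by two", and
# (3, 2, 1) -> "no" reveals that descending order is what breaks the rule.
```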

Alvin Toffler echoed this sentiment, and borrowed a fair amount from Leon Festinger’s concept of cognitive dissonance, 10 years later.

Nate Silver writing in The Signal and The Noise:

“Alvin Toffler, writing in the book Future Shock in 1970, predicted some of the consequences of what he called “information overload”. He thought our defence mechanism would be to simplify the world in ways that confirmed our biases, even as the world itself was growing more diverse and more complex.”

Silver later sums up the bias succinctly:

“The instinctual shortcut that we take when we have ‘too much information’ is to engage with it selectively, picking out the parts we like and ignoring the remainder.”

Extreme aversion

William Poundstone writing in his book Priceless:

“Extending the work of Huber and Puto, a 1992 paper by [Amos] Tversky and Itamar Simonson laid down two commandments of manipulative retail. One is extreme aversion. They showed through surveys (involving Minolta cameras, Cross pens, microwave ovens, tyres, computers, and kitchen roll) that when consumers are uncertain, they shy away from the most expensive item offered or the least expensive; the highest quality or the lowest quality; the biggest or the smallest. Most favour something in the middle. Ergo, the way to sell a lot of £500 shoes is to display some £800 shoes next to them.”

Products that don’t sell affect those that do.
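The shoe example can be caricatured in a few lines, assuming the uncertain shopper simply avoids both extremes and takes the middle-priced option. The prices are invented for illustration.

```python
def compromise_choice(prices):
    """Toy model of extreme aversion: shun the cheapest and the dearest,
    take the middle of the sorted lineup."""
    ranked = sorted(prices)
    return ranked[len(ranked) // 2]

shelf = [200, 350, 500]
print(compromise_choice(shelf))             # 350: the 500 pair looks extreme

shelf_with_decoy = shelf + [800]
print(compromise_choice(shelf_with_decoy))  # 500: the 800 decoy makes 500 'middle'
```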

Priming

William Poundstone writing in his book Priceless:

“‘Priming’ is a fairly new term for phenomena that have long been part of the world’s store of knowledge, not necessarily of the scientific kind. Have you ever bought a car and suddenly noticed that ‘everyone’ on the motorway, practically, is driving that model? Have you ever learned a new word (or heard of an obscure sea mammal or an ethnic dance) and then encountered it several times in the space of a few days? You come across it in the news, you overhear it mentioned on the bus and on the radio, and the old issue of National Geographic you’re thumbing through falls open to an article on it…

This is priming (fortified with a few low-grade coincidences). When you skim the newspaper, half-listen to TV, or drive on the motorway, you ignore most of what’s going on around you. Only a few things command attention. Paradoxically, it is unconscious processes that choose which stimuli to pass on to full consciousness. Prior exposure to something (priming) lowers the threshold of attention, so that that something is more likely to be noticed. The upshot is that you have probably encountered your ‘new’ word or car many times before. It’s just that now you’re noticing.”

Groupthink

Tim Harford writing in his book Adapt:

Irving Janis’s classic analysis of the Bay of Pigs and other foreign policy fiascoes, Victims of Groupthink, explains that a strong team – a ‘kind of family’ – can quickly fall into the habit of reinforcing each other’s prejudices out of simple team spirit and a desire to bolster the group.

This reminds me of the concept of echo chambers, where a person’s beliefs are strengthened when they are exposed only to opinions that align with their own.

To break free of groupthink and echo chambers we need to seek out contradictory, contrarian or disconfirmatory opinions. We need to try to falsify our current position in order to progress towards a truth.

Harford continues:

Janis details the way in which John F. Kennedy fooled himself into thinking that he was gathering a range of views and critical comments. All the while his team of advisors were unconsciously giving each other a false sense of infallibility. Later, during the Cuban Missile Crisis, Kennedy was far more aggressive about demanding alternative options, exhaustively exploring risks, and breaking up his advisory groups to ensure that they didn’t become too comfortable.

Rational bias

Nate Silver writing in The Signal and The Noise:

“When you have your name attached to a prediction your incentives may change. For instance, if you work for a poorly known firm, it may be quite rational for you to make some wild forecasts that will draw big attention when they happen to be right, even if they aren’t going to be right very often. Firms like Goldman Sachs, on the other hand, might be more conservative in order to stay within the consensus.

Indeed, this exact property has been identified in the blue chip forecasts: one study terms the phenomenon “rational bias”. The less reputation you have, the less you have to lose by taking a big risk when you make a prediction. Even if you know the forecast is dodgy, it might be rational for you to go after the big score. Conversely, if you have already established a good reputation, you might be reluctant to step too far out of line even when you think the data demands it.”
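Silver’s incentive argument can be restated as a toy expected-payoff calculation. All the numbers below are invented; they just encode “big upside, little downside” for the unknown firm and the reverse for the famous one.

```python
P_RIGHT = 0.2  # hypothetical chance the wild forecast comes true

# (gain if the bold call is right, loss if it is wrong) -- invented payoffs
payoffs = {
    "little-known firm": (100, -5),   # huge attention vs. little to lose
    "Goldman-sized firm": (10, -50),  # modest upside vs. reputational damage
}

for firm, (gain, loss) in payoffs.items():
    expected = P_RIGHT * gain + (1 - P_RIGHT) * loss
    print(f"{firm}: expected payoff of the bold forecast = {expected:+.0f}")

# little-known firm: +16, Goldman-sized firm: -38. The identical dodgy
# forecast is rational for one forecaster and irrational for the other.
```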

The greater your reputation, the more conservative you are.