“‘Priming’ is a fairly new term for phenomena that have long been part of the world’s store of knowledge, not necessarily of the scientific kind. Have you ever bought a car and suddenly noticed that ‘everyone’ on the motorway, practically, is driving that model? Have you ever learned a new word (or heard of an obscure sea mammal or an ethnic dance) and then encountered it several times in the space of a few days? You come across it in the news, you overhear it mentioned on the bus and on the radio, and the old issue of National Geographic you’re thumbing through falls open to an article on it…
This is priming (fortified with a few low-grade coincidences). When you skim the newspaper, half-listen to TV, or drive on the motorway, you ignore most of what’s going on around you. Only a few things command attention. Paradoxically, it is unconscious processes that choose which stimuli to pass on to full consciousness. Prior exposure to something (priming) lowers the threshold of attention, so that that something is more likely to be noticed. The upshot is that you have probably encountered your ‘new’ word or car many times before. It’s just that now you’re noticing.”
“The überphilosopher Bertrand Russell presents a particularly toxic variant of my surprise jolt in his illustration of what people in his line of business call the Problem of Induction or Problem of Inductive Knowledge.
How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.
Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
How can we know the future, given knowledge of the past; or, more generally, how can we figure out the properties of the (infinite) unknown based on the (finite) known? Think of the feeding again: what can our turkey learn about what is in store for it tomorrow from the events of yesterday? A lot, perhaps, but certainly a little less than it thinks, and it is just that “little less” that may make all the difference.
Let us go one step further and consider induction’s most worrisome aspect: learning backward. Consider that the turkey’s experience may have, rather than no value, a negative value. It’s learned from observation, as we are all advised to do (hey, after all, this is what is believed to be the scientific method). Its confidence increased as the number of friendly feedings grew, and it felt increasingly safe even though the slaughter was more and more imminent. Consider that the feeling of safety reached its maximum when the risk was at the highest! But the problem is even more general than that; it strikes at the nature of empirical knowledge itself. Something has worked in the past, until – well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.”
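To make the turkey’s trap concrete, here is a minimal, purely illustrative sketch (my own, not Taleb’s) of a naive inductive learner that estimates the chance of being fed tomorrow from nothing but its feeding history, using Laplace’s rule of succession; the function name and the numbers are invented for illustration.

```python
# Purely illustrative sketch (not from Taleb): a naive inductive learner
# estimates "will I be fed tomorrow?" from nothing but its feeding history,
# here via Laplace's rule of succession: (feedings + 1) / (days + 2).

def naive_confidence(days_fed: int) -> float:
    """Estimated probability of another friendly feeding tomorrow."""
    return (days_fed + 1) / (days_fed + 2)

# Fed every day without exception, the turkey's confidence only grows.
for day in (1, 10, 100, 1000):
    print(f"day {day:>4}: confidence = {naive_confidence(day):.3f}")

# day    1: confidence = 0.667
# day   10: confidence = 0.917
# day  100: confidence = 0.990
# day 1000: confidence = 0.999  <- highest on the eve of Thanksgiving
# Nothing in the observed data encodes the possibility of the break,
# which is exactly the "little less than it thinks" in the quote above.
```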
The past cannot always predict the future.
No wonder economic predictions usually fail.
Let me start with the economist Sherwin Rosen. In the early 1980s, he wrote papers about “the economics of superstars.” In one of the papers he conveyed his sense of outrage that a basketball player could earn $1.2 million a year, or a television celebrity could make $2 million. To get an idea of how this concentration is increasing – i.e., of how we are moving away from Mediocristan – consider that television celebrities and sports stars (even in Europe) get contracts today, only two decades later, worth in the hundreds of millions of dollars! The extreme is (so far) about twenty times higher than it was two decades ago!
According to Rosen, this inequality comes from a tournament effect: someone who is marginally “better” can easily win the entire pot, leaving the others with nothing. Using an argument from Chapter 3: people prefer to pay $10.99 for a recording featuring Horowitz to $9.99 for a struggling pianist. Would you rather read Kundera for $13.99 or some unknown author for $1? So it looks like a tournament, where the winner grabs the whole thing – and he does not have to win by much.
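To see how little “better” it takes under winner-take-all rules, here is a toy comparison (my own illustration, not Rosen’s model): a hypothetical $100 pot split either in proportion to quality or as a tournament in which the marginally better performer takes everything.

```python
# Toy illustration (not Rosen's model): a 1% quality edge under
# proportional pay versus a winner-take-all tournament, hypothetical $100 pot.

POT = 100.0
quality = {"star": 1.01, "runner_up": 1.00}  # marginally different performers

# Proportional world: earnings scale with quality.
total_q = sum(quality.values())
proportional = {name: round(POT * q / total_q, 2) for name, q in quality.items()}

# Tournament world: the slightly better performer grabs the whole pot.
winner = max(quality, key=quality.get)
tournament = {name: (POT if name == winner else 0.0) for name in quality}

print("proportional:", proportional)  # {'star': 50.25, 'runner_up': 49.75}
print("tournament:  ", tournament)    # {'star': 100.0, 'runner_up': 0.0}
```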
“That’s a shame because there are two ways to position a brand: about and versus. In the ‘about’ approach we promote the features and, occasionally, the benefits of our brand to target customers. Positioning here is all about the company (us) and the customer (them).
In the other approach, the less common ‘versus’ approach, we still focus on what the customer wants that we can deliver. However, to communicate the message more strongly, we pick out a specific competitor and position our brand against them as overtly and aggressively as possible. The point of the ‘versus’ positioning is not simply to aggressively slight your rivals; it’s more nuanced than that.
The versus position is one in which we make it clear what we stand for to customers by highlighting the differences between ourselves and others.”
James Reason, the scholar of catastrophe who uses Nick Leeson and Barings Bank as a case study to help engineers prevent accidents, is careful to distinguish between three different types of error. The most straightforward are slips, when through clumsiness or lack of attention you do something you simply didn’t mean to do. In 2005, a young Japanese trader tried to sell one share at a price of ¥600,000 and instead sold 600,000 shares at the bargain price of ¥1. Traders call these slips ‘fat finger errors’ and this one cost £200 million.
Then there are violations, which involve someone deliberately doing the wrong thing. Bewildering accounting tricks like those employed at Enron, or the cruder fraud of Bernard Madoff, are violations, and the incentives for them are much greater in finance than in industry.
Most insidious are mistakes. Mistakes are things you do on purpose, but with unintended consequences, because your mental model of the world is wrong. When the supervisors at Piper Alpha switched on a dismantled pump, they made a mistake in this sense. Switching on the pump was what they intended, and they followed all the correct procedures. The problem was that their assumption about the pump, which was that it was all in one piece, was mistaken.
Irving Janis’s classic analysis of the Bay of Pigs and other foreign policy fiascoes, Victims of Groupthink, explains that a strong team – a ‘kind of family’ – can quickly fall into the habit of reinforcing each other’s prejudices out of simple team spirit and a desire to bolster the group.
This reminds me of the concept of Echo Chambers, where a person’s beliefs are strengthened when they are exposed only to opinions that align with their own.
To break free of GroupThink and Echo Chambers we need to seek out contradictory, contrarian or disconfirmatory opinions. We need to try and falsify our current position in order to progress towards a truth.
Janis details the way in which John F. Kennedy fooled himself into thinking that he was gathering a range of views and critical comments. All the while his team of advisors were unconsciously giving each other a false sense of infallibility. Later, during the Cuban Missile Crisis, Kennedy was far more aggressive about demanding alternative options, exhaustively exploring risks, and breaking up his advisory groups to ensure that they didn’t become too comfortable.
Beth Singler writing for Aeon:
My stomach sank the moment the young man stood up. I’d observed him from afar during the coffee breaks, and I knew the word ‘Theologian’ was scrawled on the delegate badge pinned to his lapel, as if he’d been a last-minute addition to the conference. He cleared his throat and asked the panel on stage how they’d solve the problem of selecting which moral codes we ought to program into artificially intelligent machines (AI). ‘For example, masturbation is against my religious beliefs,’ he said. ‘So I wonder how we’d go about choosing which of our morals are important?’
The audience of philosophers, technologists, ‘transhumanists’ and AI fans erupted into laughter. Many of them were well-acquainted with the so-called ‘alignment problem’, the knotty philosophical question of how we should bring the goals and objectives of our AI creations into harmony with human values. But the notion that religion might have something to add to the debate seemed risible to them. ‘Obviously we don’t want the AI to be a terrorist,’ a panellist later remarked. Whatever we get our AI to align with, it should be ‘nothing religious’.
Divya Abhat writing for The Atlantic:
“Allison Sheridan couldn’t care less about music. Songs of love and heartbreak don’t bring her to tears, complex classical compositions don’t amaze her, peppy beats don’t make her want to dance. For Sheridan, a retired engineer, now a podcaster, who owns 12 vinyl records and hasn’t programmed the radio stations in her car, “music sits in an odd spot halfway between boring and distracting.”
Despite coming from a tremendously musical family, Sheridan is part of the roughly 3 to 5 percent of the world’s population that has an apathy toward music. It’s what’s referred to as specific musical anhedonia—different from general anhedonia, which is the inability to feel any kind of pleasure and which is often associated with depression. In fact, there’s nothing inherently wrong with musical anhedonics; their indifference to music isn’t a source of depression or suffering of any kind, although Sheridan notes, “The only suffering is being mocked by other people, because they don’t understand it. Everybody loves music, right?”
Previous research shows that the vast majority of people who enjoy music show an increase in heart rate or skin conductance—where a person’s skin temporarily becomes a conductor of electricity in response to something they find stimulating. Musical anhedonics, however, show no such physiological change to music.”
Nassim Nicholas Taleb writing in The Black Swan:
“We like stories, we like to summarise, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in the section … is what I call the narrative fallacy. The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.”
“The narrative fallacy addresses our very limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.”
The world is complex. Even simple scenarios are difficult to predict. Few experts understand it. And yet we string cherry-picked facts together to form nice, neat stories. Stories with beginnings, middles and ends. Causes and effects. We construct cohesion out of complexity. We aim for simple and achieve simplistic.
We shouldn’t stop. But we should understand the limits of our understanding.
Elizabeth Kolbert, writing in The New Yorker:
“Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments.”
“[Hugo] Mercier and [Dan] Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.
A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.
In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.”
For me, there’s a subtle distinction between confirmation bias and myside bias.
Confirmation bias skews the way we process new information. Myside bias skews the way we critique existing views.