Musical anhedonia

Divya Abhat writing for The Atlantic:

“Allison Sheridan couldn’t care less about music. Songs of love and heartbreak don’t bring her to tears, complex classical compositions don’t amaze her, peppy beats don’t make her want to dance. For Sheridan, a retired engineer, now a podcaster, who owns 12 vinyl records and hasn’t programmed the radio stations in her car, “music sits in an odd spot halfway between boring and distracting.”

Despite coming from a tremendously musical family, Sheridan is part of the roughly 3 to 5 percent of the world’s population that has an apathy toward music. It’s what’s referred to as specific musical anhedonia—different from general anhedonia, which is the inability to feel any kind of pleasure and which is often associated with depression. In fact, there’s nothing inherently wrong with musical anhedonics; their indifference to music isn’t a source of depression or suffering of any kind, although Sheridan notes, “The only suffering is being mocked by other people, because they don’t understand it. Everybody loves music, right?”

Previous research shows that the vast majority of people who enjoy music show an increase in heart rate or skin conductance—where a person’s skin temporarily becomes a better conductor of electricity in response to something they find stimulating. Musical anhedonics, however, show no such physiological response to music.”

The Narrative Fallacy

Nassim Nicholas Taleb writing in The Black Swan:

“We like stories, we like to summarise, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in the section … is what I call the narrative fallacy. The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.”

Later:

“The narrative fallacy addresses our very limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.”

The world is complex. Even simple scenarios are difficult to predict. Few experts understand it. And yet we string cherry-picked facts together to form nice, neat stories. Stories with beginnings, middles, and ends. Causes and effects. We construct cohesion out of complexity. We aim for simple and achieve simplistic.

We shouldn’t stop. But we should understand the limits of our understanding.


Myside bias

Elizabeth Kolbert, writing in The New Yorker:

“Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments.”

Later:

“[Hugo] Mercier and [Dan] Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.”

For me, there’s a subtle distinction between confirmation bias and myside bias.

Confirmation bias skews the way we process new information. Myside bias skews the way we critique existing views.

The illusion of explanatory depth

Elizabeth Kolbert writing in The New Yorker:

“Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

[Steven] Sloman and [Philip] Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.”

We believe that we know more than we do. And trying to explain reveals our ignorance.

Or as Einstein said:

“If you can’t explain it to a six-year-old, you don’t understand it yourself.”


Rational bias

Nate Silver writing in The Signal and the Noise:

“When you have your name attached to a prediction, your incentives may change. For instance, if you work for a little-known firm, it may be quite rational for you to make some wild forecasts that will draw big attention when they happen to be right, even if they aren’t going to be right very often. Firms like Goldman Sachs, on the other hand, might be more conservative in order to stay within the consensus.

Indeed, this exact property has been identified in the Blue Chip forecasts: one study terms the phenomenon “rational bias.” The less reputation you have, the less you have to lose by taking a big risk when you make a prediction. Even if you know the forecast is dodgy, it might be rational for you to go after the big score. Conversely, if you have already established a good reputation, you might be reluctant to step too far out of line even when you think the data demands it.”

The greater your reputation, the more conservative you are.
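
The asymmetry is easy to see as a back-of-the-envelope expected-value calculation. Here is a minimal sketch in Python; every probability and payoff below is invented for illustration, not taken from Silver’s book or the study he cites:

```python
# Toy expected-value model of "rational bias". All numbers are
# illustrative; none come from Silver's book or the Blue Chip study.

def expected_payoff(p_right, gain_if_right, loss_if_wrong):
    """Expected reputational payoff of making a bold, contrarian forecast."""
    return p_right * gain_if_right - (1 - p_right) * loss_if_wrong

P_BOLD_RIGHT = 0.10  # bold forecasts rarely pan out

# Little-known firm: a hit makes its name, a miss is quickly forgotten.
unknown = expected_payoff(P_BOLD_RIGHT, gain_if_right=100, loss_if_wrong=1)

# Established firm: modest upside, but a public miss is costly.
established = expected_payoff(P_BOLD_RIGHT, gain_if_right=10, loss_if_wrong=20)

print(f"Little-known firm: {unknown:+.1f}")     # +9.1 -> the gamble pays
print(f"Established firm:  {established:+.1f}") # -17.0 -> stay with consensus
```

Under these made-up numbers, the unknown firm’s wild forecast has positive expected value and the incumbent’s doesn’t, which is the whole incentive asymmetry in two lines of arithmetic.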

Bertillonage

Matt Wolfe writing for New Republic:

“Ever since the old practice of marking convicts by mutilating their ears or branding their skin had been abolished, police had lacked a reliable system to determine if a person had a criminal record. Some larger departments assembled photographic rogues’ galleries, but photos were imprecise: People could look alike, or alter their appearance. And matching a suspect to his rap sheet might require leafing, one by one, through hundreds or thousands of pictures. Police chiefs began offering cash bonuses to any officer who successfully recognized a felon.

A solution was engineered in 1881 by Alphonse Bertillon, a sickly, 26-year-old record keeper for the Parisian police. The black sheep in a family of social science luminaries—his father had co-founded the School of Anthropology in Paris—Bertillon worked in the basement at police headquarters, tasked with transcribing physical descriptions of criminals, most of them incomplete or ambiguous. Frustrated, he sought to apply to his work some of his family’s scientific precision. Using a pair of calipers, he studied the physical attributes of prison inmates and found eleven measures unlikely to alter with age or a change in weight, such as sitting height, arm span, and the length and breadth of the head. He concluded that, while two people might share one measurement, the odds of sharing eleven were infinitesimally small. These figures—human flesh rendered into numbers—were placed on a single index card. This system, known as Bertillonage, was quickly put into wide use. An officer could now locate a suspect’s file in minutes, verifying his identity and hitching him, eternally, to his criminal record.”

Statistics and true crime. What’s not to like?
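
Bertillon’s argument is just the multiplication rule for independent events. A quick sketch, with assumed numbers (say each measurement matches a random stranger one time in twenty, and the eleven measurements are treated as independent; both assumptions are mine, not Bertillon’s actual figures):

```python
# The multiplication rule behind Bertillonage. The 1-in-20 match
# probability and the independence assumption are illustrative,
# not Bertillon's actual figures.

p_single_match = 1 / 20  # chance a stranger shares any one measurement
measurements = 11

p_full_match = p_single_match ** measurements
print(f"Odds of a stranger matching all {measurements} measurements: "
      f"1 in {1 / p_full_match:,.0f}")
# -> 1 in 204,800,000,000,000 under these assumptions
```

In reality the measurements are not fully independent (tall people tend to have long arm spans), so the true odds are less extreme than the naive product suggests, but the core of the argument survives.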

Most advanced yet acceptable

Derek Thompson writing in The Atlantic:

“[Raymond] Loewy had an uncanny sense of how to make things fashionable. He believed that consumers are torn between two opposing forces: neophilia, a curiosity about new things; and neophobia, a fear of anything too new. As a result, they gravitate to products that are bold, but instantly comprehensible. Loewy called his grand theory “Most Advanced Yet Acceptable”—MAYA. He said to sell something surprising, make it familiar; and to sell something familiar, make it surprising.”

I love that last line.

Mere-exposure effect

Derek Thompson writing for The Atlantic:

“In the 1960s, the psychologist Robert Zajonc conducted a series of experiments where he showed subjects nonsense words, random shapes, and Chinese-like characters and asked them which they preferred. In study after study, people reliably gravitated toward the words and shapes they’d seen the most. Their preference was for familiarity.

This discovery was known as the “mere-exposure effect,” and it is one of the sturdiest findings in modern psychology. Across hundreds of studies and meta-studies, subjects around the world prefer familiar shapes, landscapes, consumer goods, songs, and human voices. People are even partial to the familiar version of the thing they should know best in the world: their own face. Because you and I are used to seeing our countenance in a mirror, studies show, we often prefer this reflection over the face we see in photographs. The preference for familiarity is so universal that some think it must be written into our genetic code. The evolutionary explanation for the mere-exposure effect would be simple: If you recognized an animal or plant, that meant it hadn’t killed you, at least not yet.”

We like what we know.

Dissociative amnesia

Matt Wolfe writing in the New Republic:

“The first psychologist to provide a reliable account of a man who had misplaced his identity was William James. In his Principles of Psychology, James narrates the case of Ansel Bourne, a 60-year-old carpenter from Greene, Rhode Island. On January 17, 1887, Bourne boarded a horse-drawn streetcar bound for his sister’s house. He never arrived. Two months later, a man named A.J. Brown awoke in a panic. Brown had arrived in Norristown, Pennsylvania, six weeks before, rented a small shop, and hung out his shingle. He sold candy and toys, made weekly trips to Philadelphia to replenish his stock, and attended a Methodist church on Sundays. Yet now his bed looked unfamiliar. Waking his landlord, Brown demanded to know where he was and how he got there. Brown declared that his name was not A.J. Brown—of whom he knew nothing—but Ansel Bourne. The baffled landlord telegraphed a man in Providence who Brown said was his nephew. The nephew hurried to the scene and confirmed, to general perplexity, that Brown was Bourne. A despondent Bourne claimed to lack any memory of the previous eight weeks. The last thing he recalled was the streetcar.

James labeled the case a “spontaneous hypnotic trance.” Today, it would be called a fugue. The word fugue comes from the Latin fugere, meaning “to flee.” A person in a fugue state suffers a kind of involuntary erasure of individuality. Often, people in fugues use pseudonyms and construct fictitious personal histories. They act mostly normally, though for inexplicable reasons, they generally abstain from sex. Some fugues are peripatetic, causing people to travel long distances. In one study, fugue sufferers migrated a mean distance of 1,200 miles. They are oblivious to their condition until someone tells them, at which point a cognitive crisis usually ensues. Fugues depart as mysteriously as they arrive. Some resolve after a few hours or days; others endure for months or years. Afterward, patients find themselves restored, gradually. Their old identities return, intact, though they remember nothing of their mesmeric episode.

The Diagnostic and Statistical Manual of Mental Disorders classifies fugues as a catastrophic form of dissociative amnesia. Sometimes, amnesia can act as a kind of circuit breaker, cutting off the input of traumatic thoughts to spare the brain further suffering. A fugue is thought to be an exaggerated version of this impulse. Dissociative amnesia is common in combat veterans, survivors of natural disasters, and victims of prolonged physical abuse, particularly abuse experienced as children. But unlike other amnesias, a fugue occludes not just the memory of events, but of the person who endured them.”

The sterile-insect technique

Michael Specter writing in The New Yorker:

“Scientists have been trying to use the tools of genetics to control pests almost since the day, in 1953, when James Watson and Francis Crick described how the language of life is written in four chemical letters—adenine, cytosine, guanine, and thymine. In 1958, the American entomologists Edward F. Knipling and Raymond C. Bushland proposed a novel approach to eliminating the screwworm (Cochliomyia hominivorax), the only insect known to eat the live flesh of warm-blooded animals. The screwworm has infested cattle for centuries, and it can kill a cow in less than two weeks. Employing radiation, which served as a crude but effective form of birth control, Knipling and Bushland sterilized millions of male screwworms. They released them to mate with females, who would then lay sterile eggs. Known as the sterile-insect technique, it has been used widely ever since. Two years later, Knipling published an article, in the Journal of Economic Entomology, in which he suggested that it would be possible to use the same approach to force malarial mosquitoes and other pests to destroy themselves. Such a proposal would have required the release of billions of sterile mosquitoes, which, at the time, was not possible.”
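
Knipling’s suggestion reduces to a simple recurrence. What follows is a minimal sketch of the standard textbook model of sterile-insect release (my reconstruction, not something from Specter’s article): if S sterile males are released each generation into a fertile population of size N, a female mates with a fertile male with probability N / (N + S), and sustained releases drive the population toward zero.

```python
# A minimal sketch of the classic sterile-insect recurrence:
#   N[t+1] = N[t] * growth_rate * N[t] / (N[t] + S)
# The growth rate and release numbers are illustrative.

GROWTH_RATE = 5                # offspring multiplier per generation
STERILE_RELEASED = 9_000_000   # sterile males released each generation
population = 1_000_000         # initial fertile population

for generation in range(1, 6):
    fertile_fraction = population / (population + STERILE_RELEASED)
    population = population * GROWTH_RATE * fertile_fraction
    print(f"Generation {generation}: {population:,.0f} fertile insects")
# The population collapses within a few generations.
```

In this model, suppression only works when releases outpace reproduction (with a five-fold growth rate, S has to exceed roughly four times the fertile population); below that threshold the population absorbs the releases and recovers.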