The Fermi paradox

Derek Thompson writing for The Atlantic:

Enrico Fermi was an architect of the atomic bomb, a father of radioactivity research, and a Nobel Prize–winning scientist who contributed to breakthroughs in quantum mechanics and theoretical physics. But in the popular imagination, his name is most commonly associated with one simple, three-word question, originally meant as a throwaway joke to amuse a group of scientists discussing UFOs at the Los Alamos lab in 1950: Where is everybody?

Fermi wasn’t the first person to ask a variant of this question about alien intelligence. But he owns it. The query is known around the world as the Fermi paradox. It’s typically summarized like this: If the universe is unfathomably large, the probability of intelligent alien life seems almost certain. But since the universe is also 14 billion years old, it would seem to afford plenty of time for these beings to make themselves known to humanity. So, well, where is everybody?

Hedgehogs and foxes

There is a fragment of wisdom that is often attributed to the ancient Greek poet Archilochus:

“The fox knows many things, but the hedgehog knows one big thing.”

Writing in his essay on Leo Tolstoy, The Hedgehog and the Fox, Isaiah Berlin expands Archilochus’ passage into a broad concept of two types of thinkers:

“There exists a great chasm between those, on one side, who relate everything to a single central vision, one system, less or more coherent or articulate, in terms of which they understand, think and feel – a single, universal, organising principle in terms of which alone all that they are and say has significance – and, on the other side, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way, for some psychological or physiological cause, related to no moral or aesthetic principle.”

In The Signal and the Noise, the statistician Nate Silver expanded on this again, and gave some illuminating examples:

“Unless you are a fan of Tolstoy—or of flowery prose—you’ll have no particular reason to read Berlin’s essay. But the basic idea is that writers and thinkers can be divided into two broad categories:

  • Hedgehogs are type A personalities who believe in Big Ideas—in governing principles about the world that behave as though they were physical laws and undergird virtually every interaction in society. Think Karl Marx and class struggle, or Sigmund Freud and the unconscious. Or Malcolm Gladwell and the “tipping point.”
  • Foxes, on the other hand, are scrappy creatures who believe in a plethora of little ideas and in taking a multitude of approaches toward a problem. They tend to be more tolerant of nuance, uncertainty, complexity, and dissenting opinion. If hedgehogs are hunters, always looking out for the big kill, then foxes are gatherers.”

Hedgehogs believe in a single, neat, unified approach. Foxes believe in many, messy, disparate approaches.

Or as Karl Weick might put it, hedgehogs and foxes are specialists and generalists.

The four revolutions of consciousness

Tom Chatfield writing for The Guardian:

For the philosopher of technology Luciano Floridi, there have been four recent revolutions in human consciousness. First, Copernicus and Galileo demonstrated that the Earth was not the unique, unmoving center of our universe. Like the other planets, it orbited the sun; and these planets in turn were orbited by their satellites, indifferent to human claims of exceptionalism. Second, Darwin showed us humanity not as the fixed pinnacle of a hierarchical creation, but as one among countless lifeforms produced by blind selection. Third, Freud suggested that we are far from transparent even to ourselves – that our self-knowledge is at best tenuous and provisional.

Each of these revolutions is in a sense a demotion: a revision downwards of our place in the order of things. We are neither the lords of creation nor even masters of our own minds. What’s next to lose? The fourth revolution, Floridi suggests, is one in which we must surrender our claim to be the universe’s sole site of analysis and insight. Our creations approach or exceed our capabilities in areas long believed to be uniquely human: deduction, recall, reasoning, pattern recognition, the processing of language, the modelling and prediction of the world.

Beautiful. Humbling.

The two types of reputation: capability and character

Ian Leslie writing in the New Statesman:

The Reputation Game is written by two people from the PR business, David Waller and Rupert Younger. They introduce a useful distinction between two types of reputation: capability and character. The first refers to competence in a specific task, such as cooking a meal, providing mortgages, or making aircraft engines. The second refers to moral or social qualities. Someone can have a great reputation for competence, while at the same time being regarded as slippery or unpleasant. Uber is good at what it does, but you wouldn’t invite it home to meet your mother.

Capability reputations are sticky: they take a long time to wash away. An author who writes great novels early in his career can produce many mediocre ones before people start to question if he is any good (naming no names, Salman Rushdie). Character reputations are more flammable, especially in a world where social media can instantly detonate bad news. A strong reputation for competence defends you against character problems, but only for so long, as Uber is finding out. When your character reputation is destroyed, competence becomes immaterial.

Black Swans

The Black Swan is a book by the author and essayist Nassim Nicholas Taleb.

It’s a far-reaching piece that calls into question the reliability of what we know and the validity of using it to forecast change.

The book’s subject matter is so broad, and its implications so varied, that no summary could ever really do it justice.

As such, I shall only attempt to distill its most central theme. The opening few pages offer a good place to start:

“Before the discovery of Australia, people in the whole world were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence. The sighting of the first black swan might have been an interesting surprise for a few ornithologists (and others extremely concerned with the colouring of birds), but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black bird.”

Many white swan sightings led to a general theory that all swans were white. Every subsequent sighting confirmed and reinforced this belief. This continued for thousands of years, with each new sighting adding a new data point in support of the theory. The theory grew stronger and belief in it trended towards the absolute.

And then a black swan was spotted.

That one sighting disproved a theory built on millennia of confirmatory evidence.

The value of one disconfirmatory sighting was greater than the value of a million confirmatory sightings.
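This asymmetry can be put in programmatic terms. Below is a toy sketch in Python (the swan sightings and the `claim_survives` helper are invented for illustration, not from Taleb’s book): a universal claim is never proven by confirming observations, but a single counterexample disproves it.

```python
# Toy illustration of the asymmetry between confirmation and falsification.
# A universal claim ("all swans are white") is consistent with any number
# of confirming observations, yet one counterexample invalidates it outright.

def claim_survives(sightings):
    """The claim 'all swans are white' survives only while every
    recorded sighting confirms it."""
    return all(colour == "white" for colour in sightings)

# A million confirmatory sightings leave the claim standing...
sightings = ["white"] * 1_000_000
print(claim_survives(sightings))  # True: consistent with the claim, not proof of it

# ...but a single black swan is enough to falsify it.
sightings.append("black")
print(claim_survives(sightings))  # False: the claim is invalidated
```

Note that the first result only means the claim has not yet been falsified, which is exactly Taleb’s point: the disconfirming observation carries all the logical weight.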

The implication is significant:

“Black Swan logic makes what you don’t know far more relevant than what you do know.”

Clearly, we should embrace the fact that we don’t know everything. We should try to uncover the unknown unknowns. We should hunt for information which falsifies our beliefs and be wary of that which confirms them.

In other words, we should try to find the ‘black swans’.

Taleb defines these ‘black swans’ with the following three criteria:

“First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

Clearly, despite our best efforts to seek them out, knowledge-changing ‘black swans’ are, by their nature, extremely difficult to uncover or predict.

This is known as the problem of induction. Confirmatory evidence lulls us into a false sense of security. It builds an unjustified sense of surety. We believe that what has happened before will continue to happen in the future. The more something happens in the past, the more we believe it will continue to happen. The turkey who gets fed every morning becomes increasingly confident that this will continue to occur. All evidence points in the same direction. And it does, right up until it doesn’t.

This is why even experts fail to forecast with any real accuracy. They construct their predictions from a history of stable, consistent and well-aligned data, oblivious to the huge disruptive event lurking just around the corner.

For example, how many experts correctly foresaw the 1987 stock market crash? How many predictions failed to account for such a seismic event?

And when one comes, the experts post-rationalise it. They claim it was predictable and expected. They fall for the narrative fallacy. They reduce a chaotic, tightly coupled and complex set of events down into a simple and digestible, but woefully inadequate, explanation.

To avoid black swan traps we need to rethink how we assess information, construct opinions, and make predictions. No small task. But we can start with a basic principle of critical thinking: know that you don’t know.

Kaizen and the five whys

The following is taken from “Stop problem solving” by Gareth Kay.

“As part of its effort to reinvigorate itself, Toyota introduced the approach of kaizen (simply meaning ‘change for better’). Overall, this was about ensuring continuous improvement but one of its key tenets was the Five Whys technique. Taiichi Ohno, the architect of the Toyota Production System in the 1950s, encouraged his team to “observe the production floor without preconception. Ask ‘why’ five times about every matter … by repeating why five times, the nature of the problem as well as its solution becomes clear.” He goes on to offer the example of a robot stopping. By asking why five times, the problem to be solved becomes clearer: no filter on the oil pump, rather than an overloaded circuit to which initial analysis would point. The ability to ask why until you can ask why no more is an incredibly important skill we forget far too often. When we do this, we begin to find the real, root problem we need to solve, rather than the symptom that is far too frequently the result of the typical problem-solving approach.”
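Ohno’s robot example can be laid out as a simple chain, where each answer becomes the subject of the next ‘why’. A minimal sketch in Python follows; note that the intermediate answers are paraphrased from common retellings of the example, since the excerpt above names only the first link (the overloaded circuit) and the last (no filter on the oil pump).

```python
# A sketch of Ohno's robot example as a five-whys chain.
# Each answer becomes the subject of the next question; the final
# answer is the root cause. Intermediate steps are paraphrased from
# common retellings and are not quoted from Kay's article.

five_whys = [
    ("Why did the robot stop?", "The circuit overloaded and a fuse blew."),
    ("Why did the circuit overload?", "The bearings were insufficiently lubricated."),
    ("Why were the bearings insufficiently lubricated?", "The oil pump was not circulating enough oil."),
    ("Why was the pump not circulating enough oil?", "Its intake was clogged with metal shavings."),
    ("Why was the intake clogged?", "There was no filter on the oil pump."),
]

# Walk the chain: the first answer is the symptom, the last the root cause.
for depth, (question, answer) in enumerate(five_whys, start=1):
    print(f"{depth}. {question} -> {answer}")

root_cause = five_whys[-1][1]
print("Root cause:", root_cause)
```

The point of the structure is that fixing anything above the last link (replacing the fuse, say) only treats a symptom; the chain terminates at the change worth making.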