The limits of expertise

In his book ‘The Signal and the Noise’, the American statistician and writer Nate Silver references an interesting study conducted by Philip Tetlock.

“The forecasting models published by political scientists in advance of the 2000 presidential election predicted a landslide 11-point victory for Al Gore. George W. Bush won instead. Rather than being an anomalous result, failures like these have been fairly common in political prediction. A long-term study by Philip E. Tetlock of the University of Pennsylvania found that when political scientists claimed that a political outcome had absolutely no chance of occurring, it nevertheless happened about 15 percent of the time.”

Tim Harford expands upon this brief introduction in his book ‘Adapt: Why Success Always Starts with Failure’:

“Even deep expertise is not enough to solve today’s complex problems.

Perhaps the best illustration of this comes from an extraordinary two-decade investigation into the limits of expertise, begun in 1984 by a young psychologist called Philip Tetlock. He was the most junior member of a committee of the National Academy of Sciences charged with working out what the Soviet response might be to the Reagan administration’s hawkish stance in the Cold War. Would Reagan call the bluff of a bully or was he about to provoke a deadly reaction? Tetlock canvassed every expert he could find. He was struck by the fact that, again and again, the most influential thinkers on the Cold War flatly contradicted one another. We are so used to talking heads disagreeing that perhaps this doesn’t seem surprising. But when we realise that the leading experts cannot agree on the most basic level about the key problem of the age, we begin to understand that this kind of expertise is far less useful than we might hope.

Tetlock didn’t leave it at that. He worried away at this question of expert judgment for twenty years. He rounded up nearly three hundred experts – by which he meant people whose job it was to comment or advise on political and economic trends. They were a formidable bunch: political scientists, economists, lawyers and diplomats. There were spooks and think-tankers, journalists and academics. Over half of them had PhDs; almost all had post-graduate degrees. And Tetlock’s method for evaluating the quality of their expert judgement was to pin the experts down: he asked them to make specific, quantifiable forecasts – answering 27,450 of his questions between them – and then waited to see whether their forecasts came true. They rarely did. The experts failed, and their failure to forecast the future is a symptom of their failure to understand fully the complexities of the present.”

In highly complex circumstances, there is a limit to the value of expertise. A certain amount quickly improves one’s success rate when predicting the future. However, the rate of improvement soon begins to plateau. Before long, even big increases in expertise yield only small gains in the accuracy of one’s predictions.

For wildly complex situations, the most successful predictors incorporate ideas from different disciplines, pursue multiple approaches at the same time, are willing to acknowledge mistakes, and rely more on observation than theory.

Hey. I’m Alex Murrell. I'm a Planner at Epoch Design in Bristol, where I help deliver highly creative, innovative and effective pack, in-store and online communications for some of the world’s biggest FMCG brands. Want to know more? You can find me on Twitter or LinkedIn.
