The forecasting fallacy

Introduction

Marketers are prone to a prediction.

You’ll find them in the annual tirade of trend decks. In the PowerPoint projections of self-proclaimed prophets. In the feeds of forecasters and futurists. They crop up on every conference stage. They make their mark on every marketing magazine. And they work their way into every white paper.

To understand the extent of our forecasting fascination, I analysed the websites of three management consultancies, looking for predictions with time frames ranging from 2025 to 2050. Whilst a single prediction may be published multiple times, the size of the numbers still shocked me. Deloitte’s site makes 6,904 predictions. McKinsey & Company’s makes 4,296. And Boston Consulting Group’s, 3,679.

In total, these three companies’ websites include just shy of 15,000 predictions stretching out over the next 30 years.

But it doesn’t stop there.

My analysis finished in the year 2050 not because the predictions came to an end but because my enthusiasm did.

Search the sites and you’ll find forecasts stretching all the way to the year 2100. We’re still finding our feet in this century but some, it seems, already understand the next.

I believe the vast majority of these to be not forecasts but fantasies. Snake oil dressed up as science. Fiction masquerading as fact.

This article assesses how predictions have performed in five fields. It argues that poor projections have propagated throughout our society and proliferated throughout our industry. It argues that our fixation with forecasts is fundamentally flawed.

So instead of focussing on the future, let’s take a moment to look at the predictions of the past. Let’s see how our projections panned out.

We can’t predict recessions

The Economist’s “The World in 2020”, published in late 2019, brings together experts from business, politics and science to fill 150 pages with projections for the year ahead.

Editor Daniel Franklin summarised the issue’s predictions on 2020’s economic outlook: 

“Banks, especially in Europe, will battle with negative interest rates. America will flirt with recession—but don’t be surprised if disaster fails to strike, and markets revive.”

Just over two months later COVID-19 struck, the world went into lockdown and we fell into one of the largest recessions on record.

Perhaps this critique is unfair. The Economist wasn’t to know that we were on the precipice of a pandemic. So let’s review our success rate during more stable times.

Over to the Financial Times:

“In the 2001 issue of the International Journal of Forecasting, an economist from the International Monetary Fund, Prakash Loungani, published a survey of the accuracy of economic forecasts throughout the 1990s. He reached two conclusions. The first was that forecasts are all much the same. There was little to choose between those produced by the IMF and the World Bank, and those from private sector forecasters. The second conclusion was that the predictive record of economists was terrible. Loungani wrote: ‘The record of failure to predict recessions is virtually unblemished.’”

It’s hard to overstate the severity of Loungani’s findings. His analysis revealed that economists had failed to predict 148 of the past 150 recessions. To put it another way, the experts only saw 1.33% of recessions coming.

Others have pushed their analysis even further.

Andrew Brigden, Chief Economist at Fathom Consulting, analysed the International Monetary Fund’s predictions across 30 years and 194 countries. The research found that only 4 of the 469 downturns had been predicted by the spring of the preceding year. Brigden’s success rate of 0.85% is remarkably consistent with Loungani’s. Brigden writes:

“Since 1988, the IMF has never forecast a developed economy recession with a lead of anything more than a few months.”

These two studies, and countless others, paint a pretty damning picture of our ability to spot recessions on the horizon. 

It’s clear that our track record of predicting recessions is pretty patchy. But that doesn’t stop us from making more. As a slowdown turns into a downturn, economists rush to reassure by predicting when more stable times will return. But how do they fare? 

That’s the field that we’ll focus on next.

We can’t predict GDP

On 15 September 2008 Lehman Brothers filed for bankruptcy.

It was the largest bankruptcy filing in U.S. history, yet the government refused to bail out the bank. Financial stress quickly escalated into a global emergency.

From its New York epicentre, the effects rippled around the world. International trade fell off a cliff. So did industrial production. Unemployment soared and consumer confidence collapsed.

7 months later, on 22 April 2009, the IMF published its World Economic Outlook:

“Even with determined steps to return the financial sector to health and continued use of macroeconomic policy levers to support aggregate demand, global activity is projected to contract by 1.3% in 2009. (…) Growth is projected to reemerge in 2010, but at 1.9% it would be sluggish relative to past recoveries.”

These figures did not fare well.

Global GDP did contract in 2009, but by 0.7%, around half as severe as the forecast. In 2010, growth wasn’t sluggish but soaring. The global economy grew by a whopping 5.1%, more than two and a half times the 1.9% predicted.

An analysis of the IMF’s predictions by the Brookings Institution went even further:

“(The IMF) got the numbers for China and India wrong. The numbers for 2010 were way off-target: The U.S. economy ended up growing by 3% instead of the forecasted zero, Germany’s economy by 3.5% instead of shrinking by one and Japan by 4% instead of -0.5%.”

But it isn’t just the IMF. Take The World Bank.

On 1 January 2010, the World Bank published their Global Economic Prospects report. With around nine months’ more data than the IMF had, you’d expect their GDP predictions to be much more accurate. But they still missed the mark.

They predicted global GDP to grow by 2.7%, but in reality it increased by 3.8%: 1.1 percentage points out. In China and India, they were 1.3 points out. And in Japan they were 2.7 points wide of the mark.

Clearly our GDP predictions are imprecise and imperfect. But that doesn’t stop us from making more. As society starts to stabilise, economists turn their attention to predicting more universal measures. But how do they fare? 

That’s the field that we’ll focus on next.

We can’t predict interest rates

On 14 July 2015, two economics professors, Maurice Obstfeld and Linda Tesar, published an article on the White House website espousing the importance of interest rates:

“The level of long-term interest rates is of central importance in the macroeconomy. It matters to borrowers looking to start a business or buy a home; lenders evaluating the risk and rewards of extending credit; savers preparing for college or retirement; and policymakers crafting the government’s budget.”

With interest rates being so important to so many, it’s no surprise that an entire industry of professional predictors exists to monitor the rate’s past and forecast its future.

The Wall Street Journal surveyed a panel of 50 such specialists and asked them to predict the interest rate 8 months into the future.

From a starting interest rate of 3.2%, the professional predictions ranged from a high of 3.8% to a low of 2.5%. The average estimate was 3.4%.

In reality, nobody came close. Six months in, the interest rate had fallen below the predictions’ lower bound. And it kept falling. By the end of the prediction timeframe the rate was closing in on 2%. None of the predictions had come within half a percentage point of reality.

These may seem like fine margins, but half a percentage point represents about a sixth of the initial 3.2% rate. That’s like having 50 estate agents value a $1.2m property and none of them coming within $200,000.

And this isn’t a one-off.

The Obstfeld and Tesar article presents the results of similar studies conducted in five different years.

In every single one, the forecasts failed. In 2006, the rate was predicted to be 6%; in reality it was closer to 5%. In 2010, it was predicted to be 6%; it was actually closer to 4%. In 2005, it was predicted to be 5%; it was closer to 2%.

The article concludes:

“The decline (in interest rates) has come largely as a surprise. Financial markets and professional forecasters alike consistently failed to predict the secular shift, focusing too much on cyclical factors.”

It seems that interest rate predictions are prone to flounder and fold. But that doesn’t stop us from making more. Despite our failures at forecasting one economy, some turn their attention to predicting the relationship between two. But how do they fare? 

That’s the field that we’ll focus on next.

We can’t predict exchange rates

If predicting the ups and downs of one economy is hard, forecasting the relationship between two is doubly difficult.

Fortunately, financial institutions make assessing their success straightforward.

At the start of each year, many banks make a prediction for the end-of-year dollar-to-euro exchange rate. In one study, Gerd Gigerenzer, director emeritus of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, compiled the exchange rate predictions made between 2000 and 2010 by 22 international banks, including Barclays, Citigroup, JPMorgan Chase and Bank of America Merrill Lynch.

Discussing the Gigerenzer study in his book Range, David Epstein provides some searing detail on where the forecasts went wrong:

“In six of the ten years, the true exchange rate fell outside the entire range of all twenty-two bank forecasts. (…) Major bank forecasts missed every single change of [exchange rate] direction in the decade Gigerenzer analysed.”

Gigerenzer’s own conclusion was even clearer:

“Forecasts of dollar-to-euro exchange rates are worthless.”

Thirty years earlier, Richard Meese and Kenneth Rogoff, of the University of California, Berkeley and the Federal Reserve respectively, pitted three different exchange rate prediction methods against a “random walk”.

For context, imagine a graph with the exchange rate on the y-axis and time on the x-axis. A rudimentary “random walk” would start from the present rate and, at set intervals along the time axis, either add or subtract a pre-defined percentage with equal probability. The graph’s line would rise and fall at random, diverging from and converging back towards the original rate over time.
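To make that concrete, here is a minimal Python sketch of such a walk. The starting rate, step size and number of intervals are illustrative assumptions, not the parameters Meese and Rogoff used:

```python
import random

def random_walk(start_rate, step_pct, n_steps, seed=42):
    """Simulate the rudimentary random walk described above: at each interval,
    the rate moves up or down by a fixed percentage with equal probability."""
    random.seed(seed)
    rate = start_rate
    path = [rate]
    for _ in range(n_steps):
        direction = 1 if random.random() < 0.5 else -1  # coin flip
        rate *= 1 + direction * step_pct / 100
        path.append(rate)
    return path

# Illustrative parameters only: a 1.10 dollar-to-euro starting rate,
# 1% steps, twelve intervals.
print([round(r, 4) for r in random_walk(1.10, 1.0, 12)])
```

Each run produces a different path; the point is only that the benchmark encodes no economic knowledge at all.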

You’d imagine that all three predictive models would outperform the aimless fluctuations of the random walk. You’d think that at least one would surpass it significantly.

But you’d be wrong.

Meese and Rogoff’s paper came to a startling conclusion:

“The random walk model performs no worse than estimated univariate time series models, an unconstrained vector autoregression or our candidate structural models in forecasting three major bilateral rates (dollar/mark, dollar/pound, and dollar/yen).”

From Gigerenzer to Meese to Rogoff, it seems exchange rate predictions are bets that perform no better than blind chance. But that doesn’t stop us from making more. Marketers make it their job to understand the trends in their industry. But how do they fare? 

That’s the field that we’ll focus on next.

We can’t predict media spend

GroupM has some serious clout. They own three of the five biggest media agencies in the world and manage somewhere north of $50 billion in media spend each year. It’s fair to say they know their stuff.

However, when Samuel Scott analysed the group’s 2019 UK ad spend forecast, published in November 2018, he found some inaccuracies. In summarising GroupM’s predictions, Scott wrote:

“Total advertising spend will increase 4.8% to £20.9bn in 2019. Digital advertising spend will grow 8.6% to £12.8bn. TV advertising spend in the country will grow 1% to £4.36bn. OOH will grow 2.7% to £964m. Advertising in national print publications will fall 9.4% to £764m. Radio spots will grow 7% to £535m. Cinema advertising will grow 1% to £187m.”

With just a month left before the end of the year, GroupM updated their figures.

By comparing the forecasts made before the year began with those made right before it ended, we can identify an approximate error rate for the initial predictions.

The total UK ad spend prediction was 3.0 percentage points out. The cinema figure missed the mark by a massive 8 points. The digital forecast was 6.3 points too low and the radio forecast was 6.8 points too high. The projections for OOH and TV were 5 and 3.3 points wide of the mark respectively. National print was the most accurate prediction. But even then, it was 2.4 points shy of reality.
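A minimal Python sketch of that comparison, using the digital and radio growth rates from the forecast above. The revised figures are back-calculated from the gaps reported here, so treat them as illustrative rather than as GroupM’s actual year-end numbers:

```python
# Forecast annual growth rates (%) from GroupM's November 2018 report.
initial_forecast = {"digital": 8.6, "radio": 7.0}

# Growth rates (%) implied by the gaps above (8.6 + 6.3 and 7.0 - 6.8).
# Illustrative back-calculations, not GroupM's published update.
revised_forecast = {"digital": 14.9, "radio": 0.2}

for channel, initial in initial_forecast.items():
    revised = revised_forecast[channel]
    gap = revised - initial
    print(f"{channel}: forecast {initial}%, revised {revised}%, "
          f"off by {gap:+.1f} percentage points")
```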

Again, 2.4 percentage points can seem like a fine margin. But as Scott says:

 “Even one percentage point can mean a difference of millions of pounds.”

Scott gave GroupM’s predictions a score of zero out of seven.

But they weren’t the only ones.

In September 2018, MoffettNathanson Research published their 2019 US ad spend forecast. The company then published a set of revised figures in June 2019. So how did they get on?

The firm’s most wayward prediction was for magazine ad spend. They forecast a 7% decline when in reality it was more than double that, at 15.9%. Their closest prediction was TV, which was 0.4 percentage points out. On average, across seven predictions, the company’s forecasts were 2.2 percentage points off the mark. An improvement over GroupM, but still pretty poor.

Again, Scott gave a score of zero.

According to Scott’s analysis, it seems our industry’s media spend soothsaying is not particularly solid.

We can’t predict much at all

So there you have it. We can’t predict recessions, GDP, interest rates, exchange rates or media spend.

But it doesn’t end there.

I could have looked at the predictions of company earnings. In How to Predict the Unpredictable, William Poundstone cites a Federal Reserve Board study that found analysts’ average expectations for S&P 500 earnings were too high in nineteen out of twenty-one years between 1979 and 1999.

I could have looked at the predictions of TV pundits. In his book The Signal and the Noise, Nate Silver analysed nearly 1000 predictions made by panellists on the public affairs program The McLaughlin Group. Silver found 338 to be mostly or completely false and the exact same number, 338, to be mostly or completely true.

I could have looked at the predictions of stock traders. In Thinking, Fast and Slow, Daniel Kahneman analysed twenty-five wealth managers over an eight-year period. Kahneman calculated the average correlation coefficient between each manager’s performance in any two years to be 0.01. In short, a manager’s ability to predict a stock’s future in one year had effectively zero correlation with the same manager’s ability to predict it in another year.
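To give a feel for what a correlation coefficient of roughly zero looks like, here is a minimal Python sketch. The returns are randomly generated stand-ins, not Kahneman’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up annual returns for 25 wealth managers in two separate years.
# If skill persisted, year-one performance would predict year-two performance;
# here the two years are drawn independently, so it doesn't.
year_one = rng.normal(loc=0.05, scale=0.10, size=25)
year_two = rng.normal(loc=0.05, scale=0.10, size=25)

# Pearson correlation between the two years' results.
correlation = np.corrcoef(year_one, year_two)[0, 1]
print(f"Year-to-year correlation: {correlation:.2f}")  # hovers around zero
```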

The truth is, in almost every field you look at, our predictive performance is pretty poor.

Still don’t believe me? Take a look at the research of political scientist Philip Tetlock.

In 1984, the National Research Council held a meeting to discuss American-Soviet relations. Philip Tetlock, the committee’s youngest member, was struck by how frequently his more experienced colleagues made perfectly contradictory policy predictions. In that moment, Tetlock decided to put expert forecasting to the test. 

The Atlantic provides a succinct summary:

“[Tetlock] collected forecasts from 284 highly educated experts who averaged more than 12 years of experience in their specialties. To ensure that the predictions were concrete, experts had to give specific probabilities of future events. Tetlock had to collect enough predictions that he could separate lucky and unlucky streaks from true skill. The project lasted 20 years, and comprised 82,361 probability estimates about the future.”

Tetlock collected his bank of quantifiable, time-limited forecasts from a panel of experts drawn from political science, economics, law, diplomacy, journalism and academia. Then he waited two decades to see whether their predictions came true.

Nate Silver describes Tetlock’s findings:

"Tetlock’s conclusion was damning. The experts in his survey, regardless of their occupation, experience or subfield, had done barely any better than random chance (…). They were grossly overconfident and terrible at calculating probabilities: about 15% of the events that they claimed had no chance of occurring in fact happened, while about 25% of those that they said were absolutely sure, things in fact failed to occur. It didn’t matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgement was equally bad across the board."

I could continue. But once again my enthusiasm has come to an end before the predictions have.

Conclusion

Viewed through the lens of Tetlock, it becomes clear that the 15,000 predictions with which I began this article are not forecasts but fantasies.

The projections look precise. They sound scientific. But these forecasts are nothing more than delusions with decimal places. Snake oil dressed up as statistics. Fiction masquerading as fact. They provide a feeling of certainty but they deliver anything but.

In his 1998 book The Fortune Sellers, the business writer William A. Sherden quantified our consensual hallucination: 

“Each year the prediction industry showers us with $200 billion in (mostly erroneous) information. The forecasting track records for all types of experts are universally poor, whether we consider scientifically oriented professionals, such as economists, demographers, meteorologists, and seismologists, or psychic and astrological forecasters whose names are household words.” 

The comparison between professional predictors and fortune tellers is apt.

From tarot cards to tea leaves, palmistry to pyromancy, clear visions of cloudy futures have always been sold to susceptible audiences. 

Today, marketers are one such audience.

It’s time we opened our eyes.

Let’s stop clamouring over the ten-year trend decks. Let’s stop counting on the constant conjecture of consultants. Let’s stop trying to guess the future and start trying to build it.

We do not travel through time on a predefined path. The future is not fixed but the result of the actions we take in the present. If you want to be successful over the next 10 years, start building a competitive advantage over the next 10 months. If you want to win tomorrow, start tilting the table today. If you believe things should change in the future, put pen to paper in the present.

The future is uncertain. You cannot predict it. But you can create it.

Or as Cindy Gallop likes to say:

“In order to predict the future, you have to invent it.”

Notes

  • This article was featured in episode 455 of Neil Perkin’s Only Dead Fish newsletter. Thank you Neil!

  • This article was also featured in the 19th September 2020 edition of The Browser’s Top Of The Week newsletter.

  • This piece made the front page of Y Combinator’s Hacker News.
