Forecasting

US election shock: You’ll forget the models were wrong within a few weeks.

If there's one thing I've consistently argued on this blog, it's that predictions are usually a waste of time and money. Instead, test your assumptions. Don't just “make assumptions explicit.” Look for how you might be wrong, because then you can do something about it.

So how did that play out, the morning after the US Presidential election? Leave aside your horror or elation. This isn't a partisan point. No matter what your politics or feelings about the result, there's a pattern of bad decisions and misjudgment here. And everyone will also forget that pattern within weeks.

Reuters:

With hours to go before Americans vote, Democrat Hillary Clinton has about a 90 percent chance of defeating Republican Donald Trump in the race for the White House, according to the final Reuters/Ipsos States of the Nation project.

The Huffington Post put Clinton's chances at 98%. (98%!)

The HuffPost presidential forecast model gives Democrat Hillary Clinton a 98.2 percent chance of winning the presidency. Republican Donald Trump has essentially no path to an Electoral College victory.

Huffpo also rather sneeringly attacked Nate Silver's 538 for estimating Clinton's chances at a mere 65%.

While I love following the prediction markets for this year’s election, the most popular and widely quoted website out there, fivethirtyeight.com, has something tragically wrong with its presidential prediction model. With the same information, 538 is currently predicting a 65 percent chance of a Clinton victory

As for the NY Times, their final prediction was:

“Hillary Clinton has an 85% chance to win”

It's easy to criticize in hindsight. But why do people keep doing this? Why do naive people keep believing this kind of faux-technocratic nonsense? It leads to damaging self-delusion, not just in politics but in business and markets.

Elaborate models and data are no defense against wishful thinking. "Big data" does not protect you against many kinds of error. Monte Carlo simulations can be foolish. How could anyone put a 98% chance on an election that was within the margin of error in the polls, especially after the shock results of Brexit, the Greek referendum, and many others?
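To see how a 98% number can be manufactured, here is a minimal Monte Carlo sketch. Every number in it is invented for illustration (a 3-point lead in five decisive states, a 3-point standard error per state poll); the point is what happens when a model treats state polling errors as independent versus allowing one shared, nationwide miss:

```python
# Toy Monte Carlo: a 3-point lead in five decisive states, each poll
# carrying a 3-point standard error. Hypothetical numbers throughout.
import random

def win_probability(shared_error_sd, n_trials=100_000):
    """Chance the 3-point leader carries at least 3 of the 5 states."""
    wins = 0
    for _ in range(n_trials):
        shared = random.gauss(0, shared_error_sd)  # miss common to all states
        states_won = sum(
            1 for _ in range(5)
            if 3.0 + shared + random.gauss(0, 3.0) > 0
        )
        if states_won >= 3:
            wins += 1
    return wins / n_trials

random.seed(1)
print(win_probability(0.0))  # independent errors: ~0.97, near-certainty
print(win_probability(3.0))  # one shared 3-point error: ~0.8
```

The gap between those two outputs is a modeling assumption, not data, which is how 538's 65 percent and HuffPost's 98 percent could be built from "the same information." Nobody should have put near-certainty on a race sitting inside the polls' margin of error.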

But they did. Financial markets were bamboozled, for example. Again.

Reuters: Wall Street Elite stunned by Trump triumph.

We need a better way to do this. Instead of models, you need an antimodel, which is what I am developing.

November 9, 2016 | Assumptions, Confirmation bias, Forecasting, Politics

Forecasting your way to Foolishness

I argued in the last post that forecasts are much less useful for monetary policy than people think. This is of course anathema, even unthinkable, to many people. The most fashionable current monetary framework, inflation targeting (and variants like nominal GDP targeting), relies entirely on forecasting the economy one to two years ahead. Hundreds of people are employed in central banks to produce such projections. The process has the surface appearance of rigor, seriousness, and technical knowledge. Monetary policy only has an impact with a lag, and those lags are famously long and variable. So, the argument goes, forecasts are essential.

This is almost universally accepted, but dead wrong. People overemphasize the relatively consistent lag and underemphasize the "variable" element. It is not just that economic forecasts are notoriously inaccurate and unreliable. Our understanding of the transmission process from policy instruments to the real economy is also alarmingly vague, as the debates over the impact of QE showed.

That is an argument for caution, not for technocratic overconfidence that we can predict inflation or GDP to a decimal point or two, two years out. A less overconfident central bank is less likely to make serious policy errors. The development of precise models and projections, however, tends to make people highly overconfident.
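A toy simulation shows why variable lags call for caution. This is my own sketch with invented numbers, not any central bank's model: inflation here responds to the policy rate set several quarters earlier, but the lag itself jumps around, and the policymaker reacts to today's inflation gap with a chosen aggressiveness (the gain):

```python
# Toy economy: inflation responds to the policy rate set `lag` quarters
# ago, and the lag itself varies. All numbers invented for illustration.
import random

def avg_squared_gap(gain, true_lags, quarters=80, target=2.0):
    """Average squared inflation gap under a feedback rule of given gain."""
    inflation = [target] * 10                    # history buffers
    rates = [0.0] * 10
    total = 0.0
    for _ in range(quarters):
        lag = random.choice(true_lags)           # the lag actually varies
        shock = random.gauss(0, 0.5)
        new_inf = inflation[-1] + shock - 0.3 * rates[-lag]
        inflation.append(new_inf)
        rates.append(gain * (new_inf - target))  # react to today's gap
        total += (new_inf - target) ** 2
    return total / quarters

random.seed(7)
for gain in (0.3, 1.0, 2.5):
    print(gain, round(avg_squared_gap(gain, true_lags=[2, 4, 8]), 2))
# The aggressive rule typically fares worst: hard corrections arriving at
# the wrong time amplify the cycle instead of damping it.
```

A low gain is not timidity here; it is the sensible response to not knowing when your own actions will bite.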

Standard academic thinking about monetary policy, with its targets and policy rules, is in fact a generation behind the rest of society. Most of business abandoned formal, rigorous planning based on forecasts and targets in the 1980s and 1990s, as Henry Mintzberg showed. Corporations fired most of their economic forecasters and planners. Such formal methods had turned out to be mostly disastrous in practice: they made it more likely that people would ignore crucial new data, not less.

In fact, smarter central bankers tend to acknowledge the limits of projections. As they see it, the real value of a projection is that it imposes consistency on the central bank's outlook, not that it confidently predicts the future. It is a way of adding up current data from different sectors of the economy into a unified picture.

But that could be done simply by using outside commercial forecasts, or international forecasts from bodies like the IMF or OECD. Central bank forecasts often perform very slightly better than individual outside forecasts, but hardly commensurately with the staff resources and attention devoted to them. Averaging different forecasts is usually more accurate than any single forecast anyway.
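That last claim is a standard result (forecast combination), and easy to demonstrate. A minimal sketch, under the simplifying assumption that each forecaster sees the truth plus her own independent error; in reality forecast errors are correlated, so the improvement is smaller, but it rarely vanishes:

```python
# Sketch: the average of several forecasts beats each individual forecast
# when forecasters' errors are (even partly) independent of one another.
import random

random.seed(42)
n_events, n_forecasters = 2000, 7
indiv_sq = [0.0] * n_forecasters
avg_sq = 0.0

for _ in range(n_events):
    truth = random.gauss(0, 1)
    forecasts = [truth + random.gauss(0, 1) for _ in range(n_forecasters)]
    for i, f in enumerate(forecasts):
        indiv_sq[i] += (f - truth) ** 2
    mean_forecast = sum(forecasts) / n_forecasters
    avg_sq += (mean_forecast - truth) ** 2

print([round(e / n_events, 2) for e in indiv_sq])  # each forecaster: ~1.0
print(round(avg_sq / n_events, 2))                 # the average: ~1/7 ≈ 0.14
```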

Central banks shouldn’t be banned from looking at outside forecasts. They should just be forced to pay much less attention to forecasting and projections in general.

In any case, consistency is overrated in practice. Setting interest rates is not like proving a mathematical theorem. Imposing consistency is often a way to ignore trade-offs, puzzles, or genuine disagreement.

Forecasts are often more of a distraction than an aid. Central banks actually make decisions in a very different way in practice, as Lindblom argued decades ago. They mostly make successive limited comparisons, because in practice anything else is too hard and unreliable. No central bank makes decisions automatically, based on the forecast or a policy rule alone. They get into trouble when they rely too much on their consistent models and think too little about flaws and unexpected developments. In other words, reliance on elaborate forecasts is a sign of ineptitude, not practical skill.

That also means markets misunderstand practical central bank policy when they believe the models matter as much as the staff economists who produce them claim, or when they trust the official accounts of all the meetings that go into the forecast round. The way things actually happen is often different to the official description or organization chart, and often even different to what people tell themselves they are doing.

If you can’t reliably predict, you need ways to control your exposure and adapt. “First, do no harm” is the best rule for monetary policy, not elaborate technical theater.

 

September 28, 2016 | Central Banks, Economics, Federal Reserve, Forecasting, Monetary Policy

Let’s ban forecasts in central banks

People should learn from their mistakes, or so we all agree. Yet that mostly doesn't happen. Instead, we get disturbing "serenity" and denial, and we had a prime example of it this week. So it is crucial we develop ways to make learning from mistakes more likely. I'd ban forecasts altogether in central banks if it would make officials pay more attention to what surprises them.

The most powerful institutions in the world economy can’t predict very well. But at least they could learn to adjust to the unexpected.

The Governor of the Bank of England, Mark Carney, testified before Parliament this week to skeptical MPs. The Bank, along with the IMF, Treasury, and other economists, predicted near-disaster if the UK voted for Brexit. So far, however, the UK economy is surprising everyone with its resilience.

So did Carney make a mistake? According to the Telegraph,

If Brexiteers on the Commons Treasury Committee were hoping for some kind of repentance, or at least a show of humility, they were to be sorely disappointed. Mr Carney was having none of it. At no stage had the Bank overstepped the mark or issued unduly alarmist warnings about the consequences of leaving, he insisted. He was “absolutely serene” about it all.

This is manifestly false and it did not go down well, at least with that particular opinion writer.

Arrogant denial is, I suppose, part of the central banker’s stock in trade. If a central bank admits to mistakes, then its authority and mystique is diminished accordingly.

I usually have a lot of regard for Carney, and worked at the Bank of England in the 1990s. But this response makes no sense. Central banking likes to think of itself as a technical trade, with dynamic stochastic general equilibrium models and optimum control theories. Yet the core of it has increasingly come down to judging subjective qualities like credibility, confidence, and expectations.

Economic techniques are really no use at all for this. Credibility is not a technical matter of commitment, time consistency, and determination, as economists have tended to think since Kydland & Prescott. It is much more a matter of whether people believe you are aware of the situation and can balance things appropriately, rather than binding yourself irrevocably to a preexisting strategy or denying mistakes. It is as much a matter of character and honesty as of persistence.

The most frequent question hedge funds used to ask me about the Fed or other central banks was "do they see x?" What happens if you are surprised? Will you ignore or deny it and make a huge mistake? Markets want to know that central banks are alert, not stuck in a rut. They want to know that officials are actively testing their views, not pretending to be omniscient. People want to know that officials aren't too wrapped up in a model or theory, or hiding under their desks instead of engaging with the real world.

It might seem as if denial is a good idea, at least in the short term. But it is the single most durable and deadly mistake in policymaking over the centuries. The great historian Barbara Tuchman called it “wooden-headedness,” or persistence in error.

The Bank of England, like other monetary authorities, issues copious Inflation Reports and projections and assessments. But it is what they don't know, or where they are most likely to miss something, that matters most. Perhaps the British press is being too harsh on Carney. Yet central banks across the world have hardly distinguished themselves in the last decade.

We need far fewer predictions in public policy, and far more examination of existing policy and how to adjust it in response to feedback. Forget about intentions and forecasts. Tell us what you didn’t expect and didn’t see, and what you’re going to do about it as a result. Instead of feedforward, we need feedback policy, as Herbert Simon suggested about decision-making.  We need to adapt, not predict. That means admitting when things don’t turn out the way you expected.
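Simon's distinction can be made concrete with a toy control loop. In the sketch below (my own illustration, with invented numbers), a disturbance drifts unpredictably; the feedforward policy acts on a forecast made in advance, while the feedback policy adjusts from the errors it actually observes:

```python
# Toy contrast between feedforward (act on a forecast) and feedback
# (adapt to observed error) control. All numbers invented for illustration.
import random

random.seed(3)
steps = 1000
disturbance = 0.0
correction = 0.0            # the feedback policy's running adjustment
ff_sq = fb_sq = 0.0

for _ in range(steps):
    disturbance += random.gauss(0, 0.1)   # drift nobody forecast
    ff_error = disturbance - 0.0          # the advance forecast said "no drift"
    fb_error = disturbance - correction   # error left after current adjustment
    correction += 0.5 * fb_error          # adapt to what was actually observed
    ff_sq += ff_error ** 2
    fb_sq += fb_error ** 2

print(round(ff_sq / steps, 3))  # large, and grows with the unforecast drift
print(round(fb_sq / steps, 3))  # small: mistakes are corrected as they appear
```

The feedback policy never predicts anything; it just refuses to let an error persist. That is the posture central banks should want.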

September 10, 2016 | Adaptation, Central Banks, Communication, Decisions, Economics, Forecasting, Time inconsistency

“Everyone was Wrong”

"From the New Yorker to FiveThirtyEight, outlets across the spectrum failed to grasp the Trump phenomenon." – Politico

 

It’s the morning after Super Tuesday, when Trump “overwhelmed his GOP rivals“.

The most comprehensive losers (after Rubio) were the media pundits and columnists, with their decades of experience and supposed ability to spot developing trends. And the political reporters, with their primary sources and late-night bar conversations with campaigns. And the statisticians with models predicting politics. And anyone in business or markets or diplomacy or politics who was naive enough to believe confident predictions from any of the experts.

Politico notes how the journalistic eminences at the New Yorker and the Atlantic got it wrong over the last year.

But so did the quantitative people.

Those two mandarins weren’t alone in dismissing Trump’s chances. Washington Post blogger Chris Cillizza wrote in July that “Donald Trump is not going to be the Republican presidential nominee in 2016.” And numbers guru Nate Silver told readers as recently as November to “stop freaking out” about Trump’s poll numbers.

Of course it’s all too easy to spot mistaken predictions after the fact. But the same pattern has been arising after just about every big event in recent years. People make overconfident predictions, based on expertise, or primary sources, or big data, and often wishful thinking about what they want to see happen. They project an insidery air of secret confidences or confessions from the campaigns. Or disinterested quantitative rigor.

Then they mostly go horribly wrong. Maybe one or two get it right through sheer luck – and then get it even more wrong the next time. Predictions may work temporarily, so long as nothing unexpected happens and nothing changes in any substantive way. But that means the forecasts turn out to be worthless just when you need them most.

The point? You remember the old quote (allegedly from Einstein) defining insanity: repeating the same thing over and over and expecting a different result.

Markets and business and political systems are too complex to predict. That means a different strategy is needed. But instead there are immense pressures in media, markets, and business to keep doing the same things that don't work. Over and over and over again.

So recognize and understand the pressures. And get around them. Use them to your advantage. Don’t be naive.

 

March 2, 2016 | Adaptation, Expertise, Forecasting, Politics, Quants and Models

“And no one saw it coming.” Again. And again.

Peggy Noonan, writing today about the state of the US GOP primary race:

But really, what a year. Nobody, not the most sophisticated expert watching politics up close all his life, knew or knows what’s going to happen. Does it go to the convention? Do Mr. Trump’s roarers turn out? Does he change history?

And no one saw it coming.

But the press and TV and political and economic research firms will drown you in speculation and commentary and confident predictions. That's yet another reason to distrust them, as I keep arguing. Instead, look for leverage and resilience. Don't get locked into a convenient narrative. It's what you can do to change your own thinking and position that counts.

December 18, 2015 | Assumptions, Expertise, Forecasting, Politics

“Nearly everything that was expected to happen in the 2016 presidential race hasn’t”

Another data point on the value of political and economic predictions: Fred Barnes in the Weekly Standard.

Nearly everything that was expected to happen in the 2016 presidential race hasn’t, and many things that weren’t expected have. The rise of Donald Trump—even that he would run—was not predicted. Nor was the fall of Scott Walker or the weakness of Jeb Bush’s candidacy. Polls have proved to be unreliable indicators of where the Republican and Democratic campaigns are headed. Hillary Clinton’s coronation as Democratic nominee, we were told, was a sure thing. Now she’s sliding toward underdog status.

 

September 30, 2015 | Forecasting, Politics

Don't put faith in predictions, unless you want humiliation and failure (e.g. Corbyn)

Jeremy Corbyn just won the most sweeping party leadership victory in British political history, beating his nearest rival by a margin of 40%. Tom Clark, in the Guardian:

At the beginning let it be said that we commentators and media pundits deserve the first slice of humble pie. None of us saw this coming. Spending too much time, perhaps, talking to professional politicians and to each other, we missed the anger and the desperation for change which we now know was pulsating through the broader Labour community out in the country after May’s unexpected outright Conservative win.

They talked to all the obvious people. They spent hours with multiple primary sources. They failed completely to see the most decisive win ever coming.

Or take this from Andrew Marr, in the Spectator:

This is the Corbyn summer. From the perspective of a short holiday, my overwhelming feeling is one of despair at my own semi-trade — the political commentariat, the natterati, the salaried yacketting classes. Who among us, really, predicted that Jeremy Corbyn would romp ahead like this? Where were the post-election columns pointing out that David Cameron’s victory would lead to a resurgent quasi-Marxist left?

And that’s just the beginning: how many of the well-connected, sophisticated, numerate political writers expected Labour to be slaughtered in the general election? Not me, that’s for sure. Going further back, how many people in 1992 told us John Major was an election winner? That Parris, I vaguely recall, but anyone else?

What do we take from this? At least they have the integrity and honesty to admit they were wrong, which is surprisingly rare. People mostly make excuses or change the subject.

What it shows (again) is that the whole idea of prediction in complex fields like politics or markets is broken, as I keep arguing. The betting odds when the contest opened were 200:1 against Corbyn.
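To make the number concrete, fractional odds convert to an implied probability as 1/(odds + 1); this is simple arithmetic, not a model:

```python
def implied_prob(odds_against):
    """Implied probability from fractional odds, e.g. 200 for 200:1 against."""
    return 1 / (odds_against + 1)

print(implied_prob(200))  # ~0.005: the market gave Corbyn about half a percent
```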

Of course, if you had put money on him you'd be rich, just as you'd be rich if you had found El Dorado. But the record shows that no one can actually do that consistently. El Dorado, the city of gold waiting for conquest, is a wonderful idea, apart from the minor inconvenience that it does not exist.

So you need a better way, an alternative to making overconfident predictions that blow up just when things are changing and you are most vulnerable. Instead of prediction, you need agility, and leverage, and resilience. And it’s overreliance on journalism and punditry and theory which stops you being agile.

People fall in love with their predictions, and it blinds them to what they can do to position themselves. They need to find blind spots, not prophecies.

September 13, 2015 | Current Events, Forecasting

Gullible forecasters and Greece

So how did the predictions from polls and pundits and consultants do on Greece? I've been arguing that people shouldn't put so much faith in prediction, but should look for potential errors and blind spots. The key is to look for leverage and resilience, not to get trapped into vain prophecy when you could be adapting to the situation instead.

But is that justified by the latest events?

Unsurprisingly, the latest predictions are yet another tale of epic illusion and incompetence, on the evidence Nate Silver examines here. The polls were bad:

Coming on the heels of the U.K. general election, the Israeli general election, the Scottish referendum and the U.S. midterms, Sunday’s Greek referendum looks like the latest in a series of bad outcomes for pre-election polls across the globe. While the last few polls before the vote showed “Oxi” (“no”) ahead by just 3 to 4 percentage points, it in fact took 61 percent of the vote to 39 percent for “yes,” a margin of more than 22 percentage points. It was a landslide: “Yes” didn’t win a single parliamentary constituency.
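A rough sanity check shows sampling noise cannot explain that miss. Assuming a typical poll sample of n = 1,000 (my assumption; Silver's piece doesn't give sample sizes):

```python
# Could sampling error alone explain the Greek polling miss?
# Assumes a hypothetical-but-typical sample size of n = 1000.
import math

n, p = 1000, 0.5
se_share = math.sqrt(p * (1 - p) / n)   # standard error of one vote share
moe_lead = 1.96 * 2 * se_share * 100    # 95% margin of error on the lead, in points

print(round(moe_lead, 1))  # ~6.2 points
# Polls showed a 3-4 point "no" lead; the result was a 22-point lead.
# A miss of ~18 points is roughly three times the sampling margin of
# error: systematic failure, not bad luck with who picked up the phone.
```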

The pundits and market conventional wisdom were even worse.

When I use the term “conventional wisdom” in this article, I mostly mean the opinions of political pundits and journalists. In the case of Greece, however, the failure also extended to betting and financial markets. One bookmaker, Paddy Power, was so convinced that “yes” would win that it pre-emptively paid out “yes” bets. Most banks and financial institutions expected a “yes” vote. Betting markets like Betfair continued to show “yes” favored even after the polls had turned back toward “no.”

Markets are full of people who want stupid, overconfident predictions, and even more people who will provide them to the gullible. There were plenty of instant experts on Greek politics, and plenty of journalists who had convinced themselves they knew what would happen. Did you believe them?

The answer? Not to be gullible. Instead of journalism and forecasting, which don't work, look for ways to be resilient and adaptable. There are all too many people who will believe the current conventional wisdom.

If you can avoid being naive, that’s 90% of the battle.