Another strike against prediction and following formal rules

I was talking in the last post about how inappropriate and outdated much of the economics discussion of monetary policy has become, especially the whole debate about policy rules, credibility and commitment. Nothing has been learned from the great crisis, and the field is mostly a dusty backwater, twenty or thirty years behind most of the rest of the economy.

Idealized planning, prediction and forecasting had its heyday in the 1960s and 1970s in Western business (and before that, the Soviet Union put its faith in planning). It turned out to be a disaster. Formal forecasting didn’t work, and most such economists were shown the door in the private sector in the 1990s. But the approach is still going strong in economic policy, as if the clock stopped twenty years ago.

Here’s a contrast. Take yesterday’s announcement that Amazon is buying Whole Foods. Amazon is, of course, wildly successful as a business. But Jeff Bezos does not try to forecast or predict. Instead, according to Farhad Manjoo in today’s NYT:

Yet if there’s one thing I’ve learned about Jeff Bezos, Amazon’s founder and chief executive, after years of watching Amazon, it’s that he doesn’t spend a lot of time predicting future possibilities. He is instead consumed with improving the present reality on the ground, especially as seen through a customer’s eyes. The other thing to know about Mr. Bezos is that he is a committed experimentalist. His main way of deciding what Amazon should do next is to try stuff out, see what works, and do more of that.

If you can’t reliably predict, then you have to think and act very differently.
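Bezos’s loop – try stuff out, see what works, do more of that – is what computer scientists call a bandit problem. Here’s a minimal epsilon-greedy sketch in Python (my analogy, with invented payoffs; I’m not claiming this is Amazon’s actual method):

```python
import random

def epsilon_greedy(true_payoffs, rounds=10000, eps=0.1, seed=42):
    """Sometimes try an option at random (explore); otherwise repeat the
    best-so-far option (exploit) -- 'do more of what works'."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    pulls = [0] * n          # how often each option has been tried
    means = [0.0] * n        # running average observed payoff per option
    for _ in range(rounds):
        if rng.random() < eps:
            i = rng.randrange(n)                        # explore
        else:
            i = max(range(n), key=lambda j: means[j])   # exploit the leader
        reward = true_payoffs[i] + rng.gauss(0, 1)      # noisy observed payoff
        pulls[i] += 1
        means[i] += (reward - means[i]) / pulls[i]      # update running mean
    return pulls

# Three hypothetical experiments with unknown true payoffs 1.0, 2.0, 3.0:
pulls = epsilon_greedy([1.0, 2.0, 3.0])
# Expect the best option to end up tried far more than the others.
```

Most of the effort goes into exploiting the current winner, but the algorithm keeps spending a little on exploration – which is exactly what saves it when the world changes.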

2017-06-17T14:59:15+00:00 June 17, 2017|Adaptation, Central Banks, Economics, Federal Reserve, Forecasting|

The right kind of excitement about AI

We are currently at peak excitement about Artificial Intelligence (AI), machine learning and data science. Can the new techniques fix the rotted state of Economics, with its baroque models that failed during the financial crisis? Some people suggest that the economics profession needs to step aside in favor of those who actually have real math skills. Maybe Artificial Intelligence can finally fix economic policy, and we can replace the Federal Reserve (and most of financial markets) with a black box or three?


It’s not that easy, unfortunately. There is a hype cycle to these things. Hopes soar (as expectations for AI did as far back as the late 1960s). Inflated early expectations get dashed. There is a trough of disillusionment, and people dismiss the whole thing as a fad which entranced the naive and foolish. But eventually the long-term results are substantive and real – and not always what you expected.


The developments in data science are very important, but it’s just as important to recognize the limits and the lags involved. The field will be damaged if people overhype it too much, or expect too much too soon.  That has often happened with artificial intelligence (see Nils Nilsson’s history of the field, The Quest for Artificial Intelligence.)

People usually get overconfident about new techniques. As one Stanford expert put it in the MIT Technology Review a few weeks ago,

For the most part, the AI achievements touted in the media aren’t evidence of great improvements in the field. The AI program from Google that won a Go contest last year was not a refined version of the one from IBM that beat the world’s chess champion in 1997; the car feature that beeps when you stray out of your lane works quite differently than the one that plans your route. Instead, the accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that’s not happening.


I think that if better AI models arrive in economics, they won’t come from machine learning itself – like pulling a kernel trick on a support vector machine – so much as from a side-effect: new kinds of data becoming cheap and easy to collect. It will be more granular, instantaneous data availability, not applying hidden Markov models or acyclic graphs. Nonlinear dynamics is still too hard, such models are inherently unpredictable, and the economy has a tendency to change when observed (Goodhart’s law, with a miaow from Heisenberg’s cat).
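On “inherently unpredictable”: even a one-line nonlinear model loses all forecasting power once a tiny measurement error compounds. The logistic map is the standard textbook illustration (the parameter below is the classic chaotic case, nothing to do with any economic calibration):

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # 'measurement error' of one part in a million
# Early on the two paths agree almost exactly; within a few dozen steps
# the tiny initial error has compounded and they bear no resemblance.
```

An error of one part in a million swamps the forecast within a few dozen steps. A continental economy is rather harder than one equation.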


None of that new data will help, either, if people don’t notice it because they are overcommitted to their existing views. Data notoriously does not change people’s minds as much as statisticians think.


AI has made huge strides on regular, repetitive tasks with millions or billions of repeatable events. I think it’s fascinating stuff. But general AI is as far away as ever. The field has a history of getting damaged when hopes run ahead of reality in the near term, which then produces an AI winter. Understanding a multi-trillion-dollar continental economy is much, much harder than machine translation or driving.


There’s also a much deeper reason why machine learning isn’t a panacea for understanding the economy. A modern economy is a complex dynamic system, and such systems are not predictable, even in principle. Used correctly, machine learning can help you adapt, or change your model. But it is perhaps much more likely to be misused, out of an overenthusiastic belief that it is a failsafe answer. Silicon Valley may be spending billions on machine learning research, with great success in some fields like machine translation. But there’s far less effort to spot gaps, missing assumptions and absent error-control loops – and that’s what interests me.
2017-05-31T15:12:24+00:00 May 31, 2017|AI and Computing, Forecasting, Quants and Models, Systems and Complexity|

US election shock: You’ll forget the models were wrong within a few weeks.

If there's one thing I've consistently argued on this blog, it's that predictions are usually a waste of time and money. Instead, test your assumptions. Don't just “make assumptions explicit.” Look for how you might be wrong, because then you can do something about it.

So how did that play out, the morning after the US Presidential election? Leave aside your horror or elation. This isn't a partisan point. No matter what your politics or feelings about the result, there's a pattern of bad decisions and misjudgment here. And everyone will also forget that pattern within weeks.


Start with Reuters, on the eve of the vote:

With hours to go before Americans vote, Democrat Hillary Clinton has about a 90 percent chance of defeating Republican Donald Trump in the race for the White House, according to the final Reuters/Ipsos States of the Nation project.

The Huffington Post put Clinton's chances at 98%. (98%!)

The HuffPost presidential forecast model gives Democrat Hillary Clinton a 98.2 percent chance of winning the presidency. Republican Donald Trump has essentially no path to an Electoral College victory.

Huffpo also rather sneeringly attacked Nate Silver's 538 for estimating Clinton's chances at a mere 65%.

While I love following the prediction markets for this year’s election, the most popular and widely quoted website out there, fivethirtyeight.com, has something tragically wrong with its presidential prediction model. With the same information, 538 is currently predicting a 65 percent chance of a Clinton victory

As for The NY Times, their final prediction was

“Hillary Clinton has an 85% chance to win”

It's easy to criticize in hindsight. But why do people keep doing this? Why do naive people keep believing this kind of faux-technocratic nonsense? It just leads people to damaging self-delusion, not just in politics but in business and markets.

Elaborate models and data are no defense against wishful thinking. “Big data” does not protect you against many kinds of error. Monte Carlo simulations can be foolish. How could people possibly put a 98% chance on an election that was close to the margin of error in the polls, especially after the lessons of the shock results of Brexit, the Greek referendum and many others?
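A back-of-the-envelope check shows how indefensible 98% was. If you treat the observed poll lead as normally distributed around the true lead with a realistic polling error, the implied win probability is far more modest (the 3-point lead and 3-point error below are illustrative round numbers, not the actual 2016 figures):

```python
import math

def win_probability(lead, polling_error_sd):
    """P(true lead > 0), if the observed lead is normally distributed
    around the true lead with the given standard deviation."""
    return 0.5 * (1 + math.erf(lead / (polling_error_sd * math.sqrt(2))))

# A 3-point lead with a 3-point standard error is roughly an 84% shot, not 98%:
p = win_probability(3.0, 3.0)

# Getting to 98% requires assuming polling error is implausibly small:
sd_needed = 3.0 / 2.054   # 98% needs the lead to be ~2.05 sd's above zero
```

To get to 98% you have to assume a polling standard error of under 1.5 points on a 3-point lead – after Brexit and Greece, an act of faith, not statistics.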

But they did. Financial markets were bamboozled, for example. Again.

Reuters: Wall Street Elite stunned by Trump triumph.

We need a better way to do this. Instead of models, you need an antimodel, which is what I am developing.

2017-05-11T17:32:35+00:00 November 9, 2016|Assumptions, Confirmation bias, Forecasting, Politics|

Forecasting your way to Foolishness

I was arguing in the last post that forecasts are much less useful for monetary policy than people think. This is of course anathema and unthinkable to many people. The most fashionable current monetary framework, inflation targeting (and potential variants like nominal GDP targeting), is entirely reliant on forecasting the economy 1-2 years ahead. Hundreds of people are employed in central banks to do such projections. The process has the surface appearance of rigor, seriousness and technical knowledge. Monetary policy only has an impact with a lag, and those lags are famously long and variable. So, the argument goes, use of forecasts is essential.

This is almost universally accepted, but dead wrong. People overemphasize the relatively consistent lag and underemphasize the “variable” element. It is not just that economic forecasts are notoriously inaccurate and unreliable. Our understanding of the transmission process from policy instruments to the real economy is also alarmingly vague, as the debates over the impact of QE showed.

That is an argument for caution, rather than technocratic overconfidence that we can predict inflation or GDP to a decimal point or two, two years out. A less overconfident central bank is less likely to make serious policy errors. The development of precise models and projections, however, tends to make people highly overconfident.

Standard academic thinking about monetary policy, with its targets and  policy rules,  is in fact a generation behind the rest of society. Most of business abandoned formal, rigorous planning methods based on forecasts and targets in the 1980s and 1990s, as Henry Mintzberg showed.  Corporations fired most of their economic forecasters and planners. Such formal methods had turned out to be mostly disastrous in practice. It made it more likely that people would ignore crucial new data, not less.

In fact, smarter central bankers tend to acknowledge the limits of projections. As they see it, the real value of projections is imposing consistency on the central bank’s outlook, rather than confidently predicting the future. It is a way of adding up the current data from different sectors of the economy to produce a unified picture.

But that could be done by simply using outside commercial forecasts, or international forecasts from bodies like the IMF or OECD. Central bank forecasts often perform very slightly better than individual outside forecasts, but the edge is hardly commensurate with the staff resources and attention devoted to them. Averaging different forecasts is usually more accurate than any single forecast in any case.
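The averaging point is easy to demonstrate: independent forecast errors partly cancel, so the consensus beats the typical individual forecaster. A toy simulation (the error sizes are invented for illustration):

```python
import random

rng = random.Random(0)
truth = 2.0                      # the 'true' outcome, e.g. GDP growth in percent
n_forecasters, n_trials = 10, 1000

individual_errors, consensus_errors = [], []
for _ in range(n_trials):
    # Each forecaster = truth plus independent noise (sd of 1.5 points)
    forecasts = [truth + rng.gauss(0, 1.5) for _ in range(n_forecasters)]
    consensus = sum(forecasts) / n_forecasters
    individual_errors.append(sum(abs(f - truth) for f in forecasts) / n_forecasters)
    consensus_errors.append(abs(consensus - truth))

# The consensus error is roughly 1/sqrt(10) the size of the typical
# individual error, purely because independent noise cancels.
mean_individual = sum(individual_errors) / n_trials
mean_consensus = sum(consensus_errors) / n_trials
```

The caveat is that averaging only cancels independent noise. A bias shared by every forecaster – the herding this post is complaining about – survives the average intact.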

Central banks shouldn’t be banned from looking at outside forecasts. They should just be forced to pay much less attention to forecasting and projections in general.

In any case, consistency is overrated in practice. Setting interest rates is not like proving a mathematical theorem.  Imposing consistency is often a way to ignore trade-offs or puzzles or genuine disagreement.

Forecasts are often  more of a distraction than an aid. Central banks actually tend to make decisions in a very different way in practice, as Lindblom argued decades ago.  They mostly make successive limited comparisons, because in practice it is too hard and unreliable to do anything else. No central bank makes decisions in an automatic way based on the forecast or a policy rule alone. They get into trouble when they rely on their consistent models too much, and think too little about the flaws or unexpected developments. In other words, using elaborate forecasts is a sign of ineptitude, not practical skill.

That also means markets misunderstand practical central bank policy when they think the models are as important as the staff economists who produce them claim, or when they trust the official accounts of all the meetings that go into the forecast round. As often happens, the way things actually work differs from the official description and the organization chart, and often even from what people tell themselves they are doing.

If you can’t reliably predict, you need ways to control your exposure and adapt. “First, do no harm” is the best rule for monetary policy, not elaborate technical theater.


2016-09-28T12:41:58+00:00 September 28, 2016|Central Banks, Economics, Federal Reserve, Forecasting, Monetary Policy|

Let’s ban forecasts in central banks

People should learn from their mistakes, or so we usually all agree. Yet that mostly doesn’t happen. Instead, we get disturbing “serenity” and denial, and we had a prime example of it this week. So it is crucial we develop ways to make learning from mistakes more likely. I’d ban forecasts altogether in central banks if it would make officials pay more attention to what surprises them.

The most powerful institutions in the world economy can’t predict very well. But at least they could learn to adjust to the unexpected.

The Governor of the Bank of England, Mark Carney, testified before Parliament this week to skeptical MPs. The Bank, along with the IMF, Treasury, and other economists, predicted near-disaster if the UK voted for Brexit. So far, however, the UK economy is surprising everyone with its resilience.

So did Carney make a mistake? According to the Telegraph,

If Brexiteers on the Commons Treasury Committee were hoping for some kind of repentance, or at least a show of humility, they were to be sorely disappointed. Mr Carney was having none of it. At no stage had the Bank overstepped the mark or issued unduly alarmist warnings about the consequences of leaving, he insisted. He was “absolutely serene” about it all.

This is manifestly false and it did not go down well, at least with that particular opinion writer.

Arrogant denial is, I suppose, part of the central banker’s stock in trade. If a central bank admits to mistakes, then its authority and mystique is diminished accordingly.

I usually have a lot of regard for Carney, and worked at the Bank of England in the 1990s. But this response makes no sense. Central banking likes to think of itself as a technical trade, with dynamic stochastic general equilibrium models and optimum control theories. Yet the core of it has increasingly come down to judging subjective qualities like credibility, confidence, and expectations.

Economic techniques are really no use at all for this. Credibility is not a technical matter of commitment, time consistency and determination, as economists have tended to think since Kydland & Prescott. It is much more a matter of whether people consider you aware of the situation and able to balance things appropriately, not whether you bind yourself irrevocably to a preexisting strategy or deny mistakes. It is as much a matter of character and honesty as persistence.

The most frequent question hedge funds used to ask me about the Fed or other central banks was “do they see x?”  What happens if you are surprised? Will you ignore or deny it and make a huge mistake?  Markets want to know that central banks are alert, not stuck in a rut.  They want to know if officials are actively testing their views, not pretending to be omniscient. People want to know that officials aren’t too wrapped up in a model or theory or hiding under their desks instead of engaging with the real world.

It might seem as if denial is a good idea, at least in the short term. But it is the single most durable and deadly mistake in policymaking over the centuries. The great historian Barbara Tuchman called it “wooden-headedness,” or persistence in error.

The Bank of England, like other monetary authorities, issues copious Inflation Reports and projections and assessments. But it’s what they don’t know, or where they are most likely to miss something, which is most important. Perhaps the British press is being too harsh on Carney. Yet central banks across the world have hardly distinguished themselves in the last decade.

We need far fewer predictions in public policy, and far more examination of existing policy and how to adjust it in response to feedback. Forget about intentions and forecasts. Tell us what you didn’t expect and didn’t see, and what you’re going to do about it as a result. Instead of feedforward, we need feedback policy, as Herbert Simon suggested about decision-making.  We need to adapt, not predict. That means admitting when things don’t turn out the way you expected.
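The feedforward/feedback distinction comes straight from control theory. A toy sketch of why feedback wins when your model of the transmission mechanism is wrong (the gains and numbers are made up for illustration; this is the principle, not a policy model):

```python
TARGET = 2.0        # say, an inflation target in percent
TRUE_GAIN = 0.8     # how the economy actually responds to the instrument
MODEL_GAIN = 1.0    # how the policymaker *believes* it responds (wrong)

# Feedforward: set the instrument once, from the (wrong) model, and trust it.
u_ff = TARGET / MODEL_GAIN
y_ff = TRUE_GAIN * u_ff            # lands at 1.6, misses, and never corrects

# Feedback: observe the outcome each period and adjust the instrument.
u, y = 0.0, 0.0
for _ in range(25):
    u += 0.5 * (TARGET - y)        # adjust in proportion to the observed miss
    y = TRUE_GAIN * u              # the economy responds with its TRUE gain
# y has converged to the target despite the wrong model.
```

The feedforward policymaker misses by exactly the model error and never finds out. The feedback policymaker barely needs the model at all – only the observed miss, and an adjustment that is in the right direction and not too aggressive.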

2017-05-11T17:32:35+00:00 September 10, 2016|Adaptation, Central Banks, Communication, Decisions, Economics, Forecasting, Time inconsistency|

“Everyone was Wrong”

“From the New Yorker to FiveThirtyEight, outlets across the spectrum failed to grasp the Trump phenomenon.” – Politico


It’s the morning after Super Tuesday, when Trump “overwhelmed his GOP rivals“.

The most comprehensive losers (after Rubio) were media pundits and columnists, with their decades of experience and supposed ability to spot trends developing. And political reporters, with their primary sources and conversations with campaigns in late night bars. And statisticians with models predicting politics. And anyone in business or markets or diplomacy or politics who was naive enough to believe confident predictions from any of  the experts.

Politico notes how the journalistic eminences at the New Yorker and the Atlantic got it wrong over the last year.

But so did the quantitative people.

Those two mandarins weren’t alone in dismissing Trump’s chances. Washington Post blogger Chris Cillizza wrote in July that “Donald Trump is not going to be the Republican presidential nominee in 2016.” And numbers guru Nate Silver told readers as recently as November to “stop freaking out” about Trump’s poll numbers.

Of course it’s all too easy to spot mistaken predictions after the fact. But the same pattern has been arising after just about every big event in recent years. People make overconfident predictions, based on expertise, or primary sources, or big data, and often wishful thinking about what they want to see happen. They project an insidery air of secret confidences or confessions from the campaigns. Or disinterested quantitative rigor.

Then  they mostly go horribly wrong. Maybe one or two through sheer luck get it right – and then get it even more wrong the next time. Predictions may work temporarily so long as nothing unexpected happens or nothing changes in any substantive way. But that means the forecasts turn out to be worthless just when you need them most.

The point? You remember the old quote (allegedly from Einstein) defining insanity: repeating the same thing over and over and expecting a different result.

Markets and business and political systems are too complex to predict. That means a different strategy is needed. But instead  there are immense pressures to keep doing the same things which don’t work in media, and markets, and business. Over and over and over again.

So recognize and understand the pressures. And get around them. Use them to your advantage. Don’t be naive.


2017-05-11T17:32:40+00:00 March 2, 2016|Adaptation, Expertise, Forecasting, Politics, Quants and Models|

“And no one saw it coming.” Again. And again.

Peggy Noonan, writing today about the state of US GOP primary race:

But really, what a year. Nobody, not the most sophisticated expert watching politics up close all his life, knew or knows what’s going to happen. Does it go to the convention? Do Mr. Trump’s roarers turn out? Does he change history?

And no one saw it coming.

But the press and TV and political and economic research firms will drown you in speculation and commentary and confident predictions. That’s yet another reason to distrust them, as I keep arguing. Instead, look for leverage, and resilience. Don’t get locked into a convenient narrative. It’s what you can do to change your own thinking and position that counts.

2015-12-18T12:38:11+00:00 December 18, 2015|Assumptions, Expertise, Forecasting, Politics|

“Nearly everything that was expected to happen in the 2016 presidential race hasn’t”

Another data point on the value of political and economic predictions: Fred Barnes in the Weekly Standard.

Nearly everything that was expected to happen in the 2016 presidential race hasn’t, and many things that weren’t expected have. The rise of Donald Trump—even that he would run—was not predicted. Nor was the fall of Scott Walker or the weakness of Jeb Bush’s candidacy. Polls have proved to be unreliable indicators of where the Republican and Democratic campaigns are headed. Hillary Clinton’s coronation as Democratic nominee, we were told, was a sure thing. Now she’s sliding toward underdog status.


2017-05-11T17:32:41+00:00 September 30, 2015|Forecasting, Politics|

Don’t put faith in predictions, unless you want humiliation and failure (Eg Corbyn)

Jeremy Corbyn just won the most sweeping party leadership victory in British political history, beating his nearest rival by a margin of 40%. Tom Clark, in the Guardian:

At the beginning let it be said that we commentators and media pundits deserve the first slice of humble pie. None of us saw this coming. Spending too much time, perhaps, talking to professional politicians and to each other, we missed the anger and the desperation for change which we now know was pulsating through the broader Labour community out in the country after May’s unexpected outright Conservative win.

They talked to all the obvious people. They spent hours with multiple primary sources. They failed completely to see the most decisive win ever coming.

Or take this from Andrew Marr, in the Spectator:

This is the Corbyn summer. From the perspective of a short holiday, my overwhelming feeling is one of despair at my own semi-trade — the political commentariat, the natterati, the salaried yacketting classes. Who among us, really, predicted that Jeremy Corbyn would romp ahead like this? Where were the post-election columns pointing out that David Cameron’s victory would lead to a resurgent quasi-Marxist left?

And that’s just the beginning: how many of the well-connected, sophisticated, numerate political writers expected Labour to be slaughtered in the general election? Not me, that’s for sure. Going further back, how many people in 1992 told us John Major was an election winner? That Parris, I vaguely recall, but anyone else?

What do we take from this? At least they have the integrity and honesty to admit they were wrong, which is surprisingly rare. People mostly make excuses or change the subject.

What it shows (again) is that the whole idea of prediction in complex fields like politics or markets is broken, as I keep arguing. The betting odds when the contest opened were 200:1 against Corbyn.

Of course, if you had put money on him then you’d be rich, just as if you had found El Dorado you’d be rich. But the record shows no one can actually do that consistently. El Dorado, the city of gold waiting for conquest, is a wonderful idea, apart from the minor inconvenience that it does not exist.

So you need a better way, an alternative to making overconfident predictions that blow up just when things are changing and you are most vulnerable. Instead of prediction, you need agility, and leverage, and resilience. And it’s overreliance on journalism and punditry and theory which stops you being agile.

People fall in love with their predictions, and it blinds them to what they can do to position themselves. They need to find blind spots, not prophecies.

2017-05-11T17:32:41+00:00 September 13, 2015|Current Events, Forecasting|

Gullible forecasters and Greece

So how did the predictions from polls and pundits and consultants do on Greece so far? I’ve been arguing that people shouldn’t put so much faith in prediction, but look for potential errors and blind spots. The key is to look for leverage and resilience, not get trapped into vain prophecy when you can be doing something to adapt to the situation instead.

But is that justified by the latest events?

Unsurprisingly, the latest predictions are yet another tale of epic illusion and incompetence, on the evidence which Nate Silver examines here. The polls were bad:

Coming on the heels of the U.K. general election, the Israeli general election, the Scottish referendum and the U.S. midterms, Sunday’s Greek referendum looks like the latest in a series of bad outcomes for pre-election polls across the globe. While the last few polls before the vote showed “Oxi” (“no”) ahead by just 3 to 4 percentage points, it in fact took 61 percent of the vote to 39 percent for “yes,” a margin of more than 22 percentage points. It was a landslide: “Yes” didn’t win a single parliamentary constituency.

The pundits and market conventional wisdom were even worse.

When I use the term “conventional wisdom” in this article, I mostly mean the opinions of political pundits and journalists. In the case of Greece, however, the failure also extended to betting and financial markets. One bookmaker, Paddy Power, was so convinced that “yes” would win that it pre-emptively paid out “yes” bets. Most banks and financial institutions expected a “yes” vote. Betting markets like Betfair continued to show “yes” favored even after the polls had turned back toward “no.”

Markets are full of people who want stupid overconfident predictions, and even more people who will provide them to the gullible. There were plenty of instant experts on Greek politics. Plenty of journalists who had convinced themselves they knew what would happen. Did you believe them?

The answer? Not to be gullible. Instead of journalism and forecasting, which don’t work, look for ways to be resilient and adaptable. There are all too many people who will believe the current conventional wisdom.

If you can avoid being naive, that’s 90% of the battle.


2017-05-11T17:32:41+00:00 July 9, 2015|Assumptions, Crisis Management, Current Events, Europe, Forecasting|