Alucidate Blog

Political shocks: We need a new form of legitimacy, not just economic growth

For once, the results of the first round of the French election lined up with the polls, with Macron and Le Pen through to the final round.  As always, most of the media gets obsessed with the horse race aspects of such elections. But there is no disputing CNN’s conclusion:

The result upended traditional French politics: Neither candidate hails from the establishment parties that have dominated the country for decades.

The previous order is already overturned, in yet another election.

What is going on here? This latest election, like the Trump and Brexit results before it, is most often talked about in terms of nationalism (or populism) versus globalism, or “closed” versus “open.” But it is something much deeper than a dispute over jobs or trade or refugee policy. It is not a dispute over technique or efficiency, but over goals.

I think it’s better understood as a dispute over legitimacy: what the state is for. Electorates feel ignored and betrayed, on both the left and the right. The defenders of the current status quo are faltering.

Let’s take a longer-term perspective, instead of getting hung up on the latest headlines. The US constitutional scholar and former National Security Council official Philip Bobbitt wrote a history of the relationship between strategy and legal norms over the past 500 years, The Shield of Achilles: War, Peace, and the Course of History, in 2002. The nature of the state kept changing, he argued, sometimes in response to political upheaval, sometimes in response to military change such as universal conscription. He traces the development from princely states to monarchical states to ‘state-nations’ to ‘nation-states’ to the current ‘market-state.’

The trouble is that each transition between forms of state legitimacy happened through an epochal war. Conflict and legal norms are intertwined, he argues. Each of those disruptive wars was resolved with a peace settlement, such as the Peace of Westphalia or the Treaty of Versailles, which reset the norms of legitimacy for the next period.

Yet all constitutions also carry within themselves the seeds of future conflict. The 1789 US Constitution was pregnant with the 1861 civil war because it contained, in addition to a bill of rights, provisions for slavery and provincial autonomy. Similarly the international constitution created at Westphalia in 1648, no less than those created at Vienna in 1815 or Utrecht in 1713, set the terms for the conflict to come even while it settled the conflict just ended. (p xxiv)

Our own current system is the market-state. Its legitimacy, he said, is based on maximizing the opportunity of its people. The market-state is good at setting up markets, of course.  But:

unaided by the assurance that the political process will not be subordinated by the most powerful market actors, markets can become targets of the alienated and of those who are disenfranchised by any shift away from national or ethnic institutions.

In other words, every settled idea of political norms tends to wear out after five to ten decades, as the settlement of the previous great war recedes into history and political realities and military and strategic necessities change.  But politics gets stuck and the result is often a massive conflict, an epochal war, which shakes the international system to its core.


Since the 2008 crisis, it is obvious to many that the market-state is not delivering on its fundamental promise: maximizing opportunity. At the same time, its universalist notions of human rights (often developed, perhaps, as a rebuke to the Soviets during the Cold War) are redrawing the fundamental nature of democracy itself through massive demographic change and a fraying welfare state. No wonder we’re seeing increasing conflict.

I’m not a believer in deterministic cycles. However,  history can sensitize us to the fact that no set of institutions lasts forever, and so we need to adapt.

The answer to the current turmoil is not to go back to the previous system of nation-states, which itself arose on the ruins of empire. But neither is it to grimly defend a set of norms that made sense in 1945, or, in amended form, in 1970. Many of those norms are cherished principles, but on the historical record, digging in to defend them is likely to produce another epochal war – and the seeds of such extremism already seem to be flourishing.

We need to go forward instead. That requires a lot more creativity.

By | April 24, 2017|Adaptation, Current Events, Europe|Comments Off on Political shocks: We need a new form of legitimacy, not just economic growth

How Economics dies as a discipline

This caught my eye the other week in a review of a new book in the Guardian.   Something seems to have gone very wrong with how people are taught Economics.

The authors analysed 174 economics modules for seven Russell Group [i.e. top-tier UK] universities, making this the most comprehensive curriculum review I know of. Focusing on the exams that undergraduates were asked to prepare for, they found a heavy reliance on multiple choice. The vast bulk of the questions asked students either to describe a model or theory, or to show how economic events could be explained by them. Rarely were they asked to assess the models themselves. In essence, they were being tested on whether they had memorised the catechism and could recite it under invigilation.

Critical thinking is not necessary to win a top economics degree. Of the core economics papers, only 8% of marks awarded asked for any critical evaluation or independent judgment. At one university, the authors write, 97% of all compulsory modules “entailed no form of critical or independent thinking whatsoever”.

I doubt it’s any better in the US. Indeed, there has been hard evidence at times that Ph.D. programs at the top US graduate schools go out of their way to discourage creative questions, and focus narrowly on “puzzle-solving” – manipulating models with analytical brilliance – instead. People are increasingly not taught to think through assumptions or question the boundaries and limits of the models. That’s how you go extinct.



By | February 21, 2017|Assumptions, Economics|Comments Off on How Economics dies as a discipline

Are “data driven campaigns killing the Democratic Party?”

“Data driven campaigns are killing the Democratic Party,” argues this article on Politico.  It argues Democrats need to return to story-telling, and not put so much trust in analytics and modeling.

It is absolutely true that far too many people naively trust models and big data. But I don’t think Democrat defeats are the fault of the models. Instead, it’s a matter of recognizing what such models are good for. Analytics are much better at optimizing within a particular set of rules, of squeezing out inefficiencies and inconsistencies. They can maximize a given set of variables.

But models are much less good at telling you what isn’t in the model, or at recognizing new features in the environment. So they can optimize, but they can’t help you much with adaptation – recognizing new opportunities or threats. That is not to say that computers can’t do this, but it’s a much more difficult problem than running statistical tests on a set of data.

Storytelling may be a more effective means of communication than a scatterplot, to be sure. But figuring out how you need to change your story or recognize something you were not seeing is a very different matter. What if the Democratic Party needs to adapt or change the mix somehow? Just changing the communication technique doesn’t help with that. You also need to think about the message.

By | February 10, 2017|Communication|Comments Off on Are “data driven campaigns killing the Democratic Party?”

A Thousand Years since the election (or so it feels)

So this is what warp drive must feel like. Doesn’t it seem as if we’ve had enough drama and high-pitched emotion since the election to have punched a hole in normal reality? Emotions have been running so high that things are getting a little distorted in this stretch of the universe. People I know on both sides are saying shocking things, more extreme than I would ever have imagined them saying.


Time to calm things down. One of the danger signs of trouble is when politics becomes too moralized, a kind of substitute for religion. Public debate should not be a war waged over ultimate right and wrong. That didn’t work out too well in the Wars of Religion in 17th-century Europe, or in the clash of ideologies in the 20th century. Indeed, a desire to calm such disputes is the reason we have supposedly secular (“post-Westphalian”) nation-states and freedom of religion in the West today. It’s not because of deep respect for faith. Instead, it turned out that insisting on one correct belief had the unfortunate tendency to produce millions of corpses. People unfortunately have an inherent tendency to get pig-headed and extreme (and hypocritical) if they think they are guardians of all that is right and true.


Instead, a better question is: does a policy work?  Instead of saving the souls of the poor, does it at least feed them or educate them? If a plan doesn’t work, why not? If it works, are we becoming complacent?


In the same way, if you’re unhappy with Trump, what could Obama or Clinton have done differently? Why has the Davos order become so unpopular with many groups? If you’re happy with Trump, where’s he most likely to mess up?


That might be less emotionally satisfying than talking about noble principles. But it also means more attention to potential opportunities or practical problems you can fix. Success is not usually a matter of pushing harder for ultimate truth and light, or final victory for one side or the other. Instead, people fail to notice things, or wish them away. They have blind spots. If Clinton had talked a little less about “who we are” in a general way and visited Wisconsin more often, things might be different. So why did she and almost all her experts and pollsters and big-data modelers fail to see that? Solve that one and it will probably help the Democrats more than any number of protests.


More ability to notice unwelcome things is usually more important in making things work than being righteous. It’s usually complacency and hubris that trip people up, rather than having the correct universal moral rule or ideal policy or narrative.


If you’re upset about Trump, that means you need less comforting validation of what a fine and superior person you are and how dumb and terrible the other side is, and more ways to figure out what your side did wrong, and how to fix it. That’s not easy. Trump opponents think he is pure id, dark malevolent instinct (and maybe they are right!).


No doubt Trump will be just as blind, if not more so. He will blunder into problems he doesn’t see coming, and fail to listen to people with a different perspective. For Trump supporters, mad at the liberal media (and maybe they are right!): how do you actually restore trust in institutions instead of breaking them down? How do you make change more permanent, instead of following the media cycle?


Above all, getting policy right usually does not mean bull-headed persistence and triumph of the will. It means being able to see things from different viewpoints, so you can find blind spots or oversights or a need to adjust to circumstances, instead of confirmation that you are so marvelous and right. And that is more difficult than just about anything, as the extreme emotional pitch right now shows.
By | February 8, 2017|Confirmation bias, Current Events, Politics|Comments Off on A Thousand Years since the election (or so it feels)

The short interval before thinking as usual resumes

Surprises can be valuable. Even the Trumpites were surprised by the election victory. CNN, as I recall, quoted a “senior Trump campaign source” early on election night who said it would “take a miracle” to win. Meanwhile, liberals are in shock or despair. Everyone is a bit flummoxed. So what happens next?

The reaction to surprises is a very interesting and important thing to watch. There are two main paths people take:

1) Try to work out why you were surprised. What did you miss? What didn't you pay attention to? What could you do differently next time?

2) Struggle to reconcile events with your previous view, so you retain as much of the preexisting narrative or perspective as possible.


People generally go the second route, and so they learn little or nothing from events. As Weick and Sutcliffe put it in their book Managing the Unexpected,

The moral is that perceptions of the unexpected are fleeting. When people are interrupted, they tend to be candid about what happened for a short period of time, and then they get their stories straight in ways that justify their actions and protect their reputations. And when official stories get straightened out, learning stops… In that brief interval between surprise and successful normalizing lies one of your few opportunities to discover what you don't know.

In all the reactions to the election, all the pieces of analysis and journalism and commentary, decide for yourself whether they are going route 1 or route 2.


By | November 10, 2016|High-Reliability Organizations, Situation Awareness|0 Comments

US election shock: You’ll forget the models were wrong within a few weeks.

If there's one thing I've consistently argued on this blog, it's that predictions are usually a waste of time and money. Instead, test your assumptions. Don't just “make assumptions explicit.” Look for how you might be wrong, because then you can do something about it.

So how did that play out, the morning after the US Presidential election? Leave aside your horror or elation. This isn't a partisan point. No matter what your politics or feelings about the result, there's a pattern of bad decisions and misjudgment here. And everyone will also forget that pattern within weeks.


With hours to go before Americans vote, Democrat Hillary Clinton has about a 90 percent chance of defeating Republican Donald Trump in the race for the White House, according to the final Reuters/Ipsos States of the Nation project.

The Huffington Post put Clinton's chances at 98%. (98%!)

The HuffPost presidential forecast model gives Democrat Hillary Clinton a 98.2 percent chance of winning the presidency. Republican Donald Trump has essentially no path to an Electoral College victory.

Huffpo also rather sneeringly attacked Nate Silver's 538 for estimating Clinton's chances at a mere 65%.

While I love following the prediction markets for this year’s election, the most popular and widely quoted website out there has something tragically wrong with its presidential prediction model. With the same information, 538 is currently predicting a 65 percent chance of a Clinton victory.

As for The New York Times, its final prediction was:

“Hillary Clinton has an 85% chance to win”

It's easy to criticize in hindsight. But why do people keep doing this? Why do naive people keep believing this kind of faux-technocratic nonsense? It just leads people to damaging self-delusion, not just in politics but in business and markets.

Elaborate models and data are no defense against wishful thinking. “Big data” does not protect you against many kinds of error. Monte Carlo simulations can be foolish. How could people possibly put a 98% chance on an election that was close to the margin of error in the polls, especially after the lessons of the shock results of Brexit, the Greek referendum and many others?
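To see how badly a 98% call can fail, here is a toy simulation (my own sketch, with invented numbers, not any outlet's actual model): if the polling error is systematic – every state missing in the same direction at once, as happens when pollsters share a blind spot – then a modest lead inside the margin of error is nowhere near a lock.

```python
import random

random.seed(0)

def win_probability(lead=2.0, error_sd=3.0, trials=100_000):
    """Monte Carlo sketch: chance a polling lead survives a systematic miss.

    Treats the final margin as lead + one nationwide polling error that
    hits every state at once, rather than many independent state errors
    that mostly cancel out. Both parameters are illustrative guesses.
    """
    wins = sum(1 for _ in range(trials) if lead + random.gauss(0, error_sd) > 0)
    return wins / trials

print(round(win_probability(), 2))  # roughly 0.75 under these assumptions
```

The near-certain forecasts implicitly assumed state-level errors would wash out; treating even part of the error as shared pulls the probability back toward a coin flip with an edge, which is closer to how a race within the margin of error should have been read.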

But they did. Financial markets were bamboozled, for example. Again.

Reuters: Wall Street Elite stunned by Trump triumph.

We need a better way to do this. Instead of models, you need an antimodel, which is what I am developing.

By | November 9, 2016|Assumptions, Confirmation bias, Forecasting, Politics|0 Comments

Forecasting your way to Foolishness

I was arguing in the last post that forecasts are much less useful for monetary policy than people think. This is of course anathema, unthinkable to many people. The most fashionable current monetary framework, inflation targeting (along with potential variants like nominal GDP targeting), is entirely reliant on forecasting the economy 1-2 years ahead. Hundreds of people are employed in central banks to do such projections. The process has the surface appearance of rigor, seriousness, and technical knowledge. Monetary policy only has an impact with a lag, and those lags are famously long and variable. So, the argument goes, the use of forecasts is essential.

This is almost universally accepted, but dead wrong. People overemphasize the relatively consistent lag and underemphasize the “variable” element. It is not just that economic forecasts of the future are notoriously inaccurate and unreliable. Our understanding of the transmission process from policy instruments to the real economy is also alarmingly vague, as the debates over the impact of QE showed.

That is an argument for caution, rather than for technocratic overconfidence that we can predict inflation or GDP to a decimal point or two, two years out. A less overconfident central bank is less likely to make serious policy errors. The development of precise models and projections, however, tends to make people highly overconfident.

Standard academic thinking about monetary policy, with its targets and policy rules, is in fact a generation behind the rest of society. Most of the business world abandoned formal, rigorous planning methods based on forecasts and targets in the 1980s and 1990s, as Henry Mintzberg showed. Corporations fired most of their economic forecasters and planners. Such formal methods had turned out to be mostly disastrous in practice: they made it more likely that people would ignore crucial new data, not less.

In fact, smarter central bankers tend to acknowledge the limits of projections. As they see it, the real value of projections is in imposing consistency on the central bank’s outlook, rather than in confidently predicting the future. A forecast round is a way of adding up the current data from different sectors of the economy to produce a unified picture.

But that could be done by simply using outside commercial forecasts, or international forecasts by bodies like the IMF or OECD. Central bank forecasts often perform very slightly better than individual outside forecasts, but hardly commensurate with the staff  resources and attention devoted to them. Averaging different forecasts is usually more accurate than any single forecast in any case.
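That averaging claim is easy to illustrate with a toy simulation (my own sketch with invented forecasters, not central bank data): when forecasters make partly independent errors, the average of their forecasts typically beats the typical individual forecaster, because the idiosyncratic misses partially cancel.

```python
import random

random.seed(1)

def mean_absolute_errors(n_forecasters=5, rounds=2000, error_sd=1.0):
    """Compare the average forecaster's error with the error of the average.

    Each round the true value is 0 and each forecaster misses it by
    independent Gaussian noise; averaging the forecasts cancels part
    of that noise. All parameters are illustrative.
    """
    indiv_total = 0.0
    combo_total = 0.0
    for _ in range(rounds):
        forecasts = [random.gauss(0, error_sd) for _ in range(n_forecasters)]
        indiv_total += sum(abs(f) for f in forecasts) / n_forecasters
        combo_total += abs(sum(forecasts) / n_forecasters)
    return indiv_total / rounds, combo_total / rounds

indiv_mae, combo_mae = mean_absolute_errors()
print(combo_mae < indiv_mae)  # the combined forecast has the smaller error
```

The gain shrinks as forecasters' errors become correlated, which is one reason an in-house forecast adds so little over a simple average of outside ones.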

Central banks shouldn’t be banned from looking at outside forecasts. They should just be forced to pay much less attention to forecasting and projections in general.

In any case, consistency is overrated in practice. Setting interest rates is not like proving a mathematical theorem.  Imposing consistency is often a way to ignore trade-offs or puzzles or genuine disagreement.

Forecasts are often  more of a distraction than an aid. Central banks actually tend to make decisions in a very different way in practice, as Lindblom argued decades ago.  They mostly make successive limited comparisons, because in practice it is too hard and unreliable to do anything else. No central bank makes decisions in an automatic way based on the forecast or a policy rule alone. They get into trouble when they rely on their consistent models too much, and think too little about the flaws or unexpected developments. In other words, using elaborate forecasts is a sign of ineptitude, not practical skill.

That also means markets misunderstand practical central bank policy when they think the models are as important as the staff economists who produce them claim, or when they trust the official accounts of all the meetings that go into the forecast round. As often happens, the way things actually work is different from the version in the official description or organization chart, and often even different from what people tell themselves they are doing.

If you can’t reliably predict, you need ways to control your exposure and adapt. “First, do no harm” is the best rule for monetary policy, not elaborate technical theater.


By | September 28, 2016|Central Banks, Economics, Federal Reserve, Forecasting, Monetary Policy|Comments Off on Forecasting your way to Foolishness

Let’s ban forecasts in central banks

People should learn from their mistakes, or so we usually all agree. Yet that mostly doesn’t happen. Instead, we get disturbing “serenity” and denial, and we had a prime example of it this week. So it is crucial we develop ways to make learning from mistakes more likely. I’d ban forecasts altogether in central banks if it would make officials pay more attention to what surprises them.

The most powerful institutions in the world economy can’t predict very well. But at least they could learn to adjust to the unexpected.

The Governor of the Bank of England, Mark Carney, testified before Parliament this week to skeptical MPs. The Bank, along with the IMF, Treasury, and other economists, predicted near-disaster if the UK voted for Brexit. So far, however, the UK economy is surprising everyone with its resilience.

So did Carney make a mistake? According to the Telegraph,

If Brexiteers on the Commons Treasury Committee were hoping for some kind of repentance, or at least a show of humility, they were to be sorely disappointed. Mr Carney was having none of it. At no stage had the Bank overstepped the mark or issued unduly alarmist warnings about the consequences of leaving, he insisted. He was “absolutely serene” about it all.

This is manifestly false and it did not go down well, at least with that particular opinion writer.

Arrogant denial is, I suppose, part of the central banker’s stock in trade. If a central bank admits to mistakes, then its authority and mystique is diminished accordingly.

I usually have a lot of regard for Carney, and worked at the Bank of England in the 1990s. But this response makes no sense. Central banking likes to think of itself as a technical trade, with dynamic stochastic general equilibrium models and optimum control theories. Yet the core of it has increasingly come down to judging subjective qualities like credibility, confidence, and expectations.

Economic techniques are really no use at all for this. Credibility is not a technical matter of commitment, time consistency, and determination, as economists have often held since Kydland & Prescott. It is much more a matter of whether people consider you aware of the situation and able to balance things appropriately, not whether you bind yourself irrevocably to a preexisting strategy or deny mistakes. It is as much a matter of character and honesty as of persistence.

The most frequent question hedge funds used to ask me about the Fed or other central banks was “do they see x?”  What happens if you are surprised? Will you ignore or deny it and make a huge mistake?  Markets want to know that central banks are alert, not stuck in a rut.  They want to know if officials are actively testing their views, not pretending to be omniscient. People want to know that officials aren’t too wrapped up in a model or theory or hiding under their desks instead of engaging with the real world.

It might seem as if denial is a good idea, at least in the short term. But it is the single most durable and deadly mistake in policymaking over the centuries. The great historian Barbara Tuchman called it “wooden-headedness,” or persistence in error.

The Bank of England, like other monetary authorities, issues copious Inflation Reports and projections and assessments. But it’s what they don’t know, or where they are most likely to miss something, which is most important. Perhaps the British press is being too harsh on Carney. Yet central banks across the world have hardly distinguished themselves in the last decade.

We need far fewer predictions in public policy, and far more examination of existing policy and how to adjust it in response to feedback. Forget about intentions and forecasts. Tell us what you didn’t expect and didn’t see, and what you’re going to do about it as a result. Instead of feedforward, we need feedback policy, as Herbert Simon suggested about decision-making.  We need to adapt, not predict. That means admitting when things don’t turn out the way you expected.

By | September 10, 2016|Adaptation, Central Banks, Communication, Decisions, Economics, Forecasting, Time inconsistency|Comments Off on Let’s ban forecasts in central banks

The Toxic Impact of News

One advantage of summer travel is it gives you extra perspective on the media frenzy back home. I was in Kyoto, Japan during the party conventions, and from thousands of miles away the political reporting seemed even more overdramatized and pointless than usual. How much do we know now about the US Presidential race that we didn’t three months ago? How much does today’s political news cycle affect an election still more than two months away?

Very little.

Rolf Dobelli wrote a book, The Art of Thinking Clearly, that examined 99 biases. He saved what he thought was perhaps the most important for last.

We are incredibly well informed, yet we know incredibly little. Why? Because two centuries ago, we invented a toxic form of knowledge called “news.” News is to the mind what sugar is to the body: appetizing, easy to digest— and highly destructive in the long run.

News is irrelevant and a waste of time, he argues.

In the past twelve months, you have probably consumed about ten thousand news snippets— perhaps as many as thirty per day. Be very honest: Name one of them, just one that helped you make a better decision— for your life, your career, or your business— compared with not having this piece of news. No one I have asked has been able to name more than two useful news stories— out of ten thousand.

Of course, in financial markets there are plenty of people who obsessively track every small piece of information, although algorithms react to snippets of news far faster than any trader these days.

So what kind of information is useful? It is information that lets you solve problems, and that usually means information that helps you test your assumptions and approach. But testing assumptions is usually the last thing that people do. All that political reporting usually just confirms what people think they already know.

By | August 30, 2016|Decisions|Comments Off on The Toxic Impact of News

The flaw in international law, and The Chilcot Report

People pay too much attention to their forecasts (which are unreliable) and too little to their assumptions, and that often gets them into serious trouble. I argued in the last post that the assumption driving much EU integration – that international law and international organization is the foundation of the last seventy years of peace in Europe – is not always true.

So what else may have kept the peace in Europe for the last seventy years? What worked, if international law sometimes doesn't work? Think for a moment.

It isn't the same as the question of whether you think international law is an ideal, a moral aspiration, or a nice idea, but, again, of what actually works. We all know people who are wonderfully nice, but who maybe should not be entrusted with arranging your summer trip, or running a company, or handling air traffic control for inbound flights at LaGuardia. You may think it is ideal and moral that everyone should be honest as well. But you probably locked your front door when you left home this morning too. So what actually kept the peace, if not the EU?

Might it have something to do with the US deploying hundreds of thousands of troops in Europe, a chain of air bases from Keflavik in Iceland to Incirlik in Turkey, and the Sixth Fleet in the Mediterranean? Not to mention the threat of thermonuclear escalation if anyone started a war. The US assumed much of the security burden of Europe, and strongly supported European rebuilding from the Marshall Plan onwards, as well as the EU itself as a bulwark against communism. The Red Army might have been entirely unthreatening and peaceful and admiring of European law, but the citizens of Budapest and Prague, who saw Soviet tanks on their streets in 1956 and 1968, might disagree. Yet western European countries could afford to reduce defense spending and focus on welfare and economics. In other words, the EU itself is more a symptom of the US stabilizing the security situation than the cause of security.

Let's say you splutter with outrage at the idea. There are definitely some people in Europe and elsewhere who are very uncomfortable with any positive consequence of American foreign policy, ever. Fine. How would you test that? What kind of implications would you expect to see? The explanations lead to very different places and feed different narratives. Seeing the question from different angles and questioning assumptions is usually essential to figuring out the right policy. And the things you feel uncomfortable about are the most likely places for blind spots, because you never look there.

In the same way, the Chilcot report on British participation in the Iraq war was published yesterday. Most of the reaction, like this Guardian editorial, focuses on the poor prediction of consequences.

Let's agree the war was bad in retrospect. It is also clear that there was not enough effort to question the assumptions underlying intelligence assessments that Saddam Hussein still had weapons of mass destruction, or prepare for the aftermath.

But the press reaction doesn't really come to grips with a recurrent theme in the executive summary of the report. Why did Blair, a European multilateral liberal, stick so close to Bush, a Texan Republican? Was it to preserve the special relationship? Get invited to delightful Crawford, TX? Be a poodle and get dog biscuits?

Most media reactions lean towards thinking it was because Blair was a pathological liar, a vain, foolish potential war criminal who ignored advice. They personalize the issue. But Blair was a highly skilled, highly popular leader before the war, not a cartoon villain, and he clearly had doubts about direct UK interests in Iraq. So what was he thinking?

In fact, Chilcot documents how Blair kept trying to push the US to go the multilateral route, to get UN resolutions, to persuade a coalition of allies rather than take unilateral action.

The report references a 2003 speech by Blair several times.

370. In Mr Blair’s view, the decision to stand alongside the US was in the UK’s long‑term national interests. In his speech of 18 March 2003, he argued that the handling of Iraq would:

“… determine the way in which Britain and the world confront the central security threat of the 21st century, the development of the United Nations, the relationship between Europe and the United States, the relations within the European Union and the way in which the United States engages with the rest of the world. So it could hardly be more important. It will determine the pattern of international politics for the next generation.”

In other words, it wasn't really about imminent threats from Iraq or whether it had WMD or supported terrorism for Blair. At best, those were fig leaves or PR concerns. It wasn't even primarily about the effect of disagreement on US-UK relations. It was to get the Americans to follow the norms of international law. It was to stop them acting outside the multilateral framework.

So consider this: international law didn't stop the Iraq war, because the Americans felt they couldn't rely on the UN framework. And Blair, as an internationalist progressive, went along to try to make sure the “pattern of international politics for the next generation” was based on international law and multilateral organizations. He tried to rein back American unilateral use of force by participating as a junior partner, to preserve international norms, albeit not enough for domestic opponents or some other EU governments.

So international law did not lead to peace but was the cause of at least UK participation in the Iraq war. Uncomfortable? Fine. But Blair might have stumbled into huge mistakes because of his assumptions. Forecasts and data and judgements got altered to fit them.

And that happens all the time.


By | July 7, 2016|Assumptions, Decisions, Europe, Security|0 Comments