Let’s ban forecasts in central banks

People should learn from their mistakes, or so we all agree. Yet that mostly doesn’t happen. Instead we get a disturbing “serenity” and denial, and we had a prime example of it this week. So it is crucial that we develop ways to make learning from mistakes more likely. I’d ban forecasts in central banks altogether if it would make officials pay more attention to what surprises them.

The most powerful institutions in the world economy can’t predict very well. But at least they could learn to adjust to the unexpected.

The Governor of the Bank of England, Mark Carney, testified before Parliament this week to skeptical MPs. The Bank, along with the IMF, Treasury, and other economists, predicted near-disaster if the UK voted for Brexit. So far, however, the UK economy is surprising everyone with its resilience.

So did Carney make a mistake? According to the Telegraph,

If Brexiteers on the Commons Treasury Committee were hoping for some kind of repentance, or at least a show of humility, they were to be sorely disappointed. Mr Carney was having none of it. At no stage had the Bank overstepped the mark or issued unduly alarmist warnings about the consequences of leaving, he insisted. He was “absolutely serene” about it all.

This is manifestly false and it did not go down well, at least with that particular opinion writer.

Arrogant denial is, I suppose, part of the central banker’s stock in trade. If a central bank admits to mistakes, then its authority and mystique are diminished accordingly.

I usually have a lot of regard for Carney, and I worked at the Bank of England in the 1990s. But this response makes no sense. Central banking likes to think of itself as a technical trade, with dynamic stochastic general equilibrium models and optimal control theory. Yet the core of it increasingly comes down to judging subjective qualities like credibility, confidence, and expectations.

Economic techniques are really no use at all for this. Credibility is not a technical matter of commitment, time consistency, and determination, as economists have often framed it since Kydland and Prescott. It is much more a matter of whether people believe you are aware of the situation and can balance things appropriately, not whether you bind yourself irrevocably to a preexisting strategy or deny mistakes. It is as much a matter of character and honesty as of persistence.

The most frequent question hedge funds used to ask me about the Fed or other central banks was “do they see x?”  What happens if you are surprised? Will you ignore or deny it and make a huge mistake?  Markets want to know that central banks are alert, not stuck in a rut.  They want to know if officials are actively testing their views, not pretending to be omniscient. People want to know that officials aren’t too wrapped up in a model or theory or hiding under their desks instead of engaging with the real world.

It might seem as if denial is a good idea, at least in the short term. But it is the single most durable and deadly mistake in policymaking over the centuries. The great historian Barbara Tuchman called it “wooden-headedness,” or persistence in error.

The Bank of England, like other monetary authorities, issues copious Inflation Reports and projections and assessments. But it’s what they don’t know, or where they are most likely to miss something, which is most important. Perhaps the British press is being too harsh on Carney. Yet central banks across the world have hardly distinguished themselves in the last decade.

We need far fewer predictions in public policy, and far more examination of existing policy and how to adjust it in response to feedback. Forget about intentions and forecasts. Tell us what you didn’t expect and didn’t see, and what you’re going to do about it as a result. Instead of feedforward, we need feedback policy, as Herbert Simon suggested about decision-making.  We need to adapt, not predict. That means admitting when things don’t turn out the way you expected.
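Simon’s distinction can be made concrete with a toy sketch. Everything here is invented for illustration (the function names, the 1.5 and 0.5 coefficients, and the 2% target are assumptions, not any central bank’s actual reaction function): a feedforward rule commits once to a rate based on a forecast, while a feedback rule keeps adjusting to the observed surprise.

```python
# Toy contrast between feedforward (forecast-driven) and feedback
# (error-driven) policy. All numbers and rules are hypothetical.

def feedforward_policy(forecast_inflation, target=2.0, base_rate=1.0):
    """Set the rate once from a forecast, then never revisit it."""
    return base_rate + 1.5 * (forecast_inflation - target)

def feedback_policy(observed_inflation, current_rate, target=2.0, gain=0.5):
    """Adjust the current rate each period from the observed error."""
    error = observed_inflation - target
    return current_rate + gain * error

# Suppose the forecast said 2.0% inflation but it actually runs at 4.0%.
rate = feedforward_policy(forecast_inflation=2.0)  # stays at the base rate
for _ in range(5):
    rate = feedback_policy(observed_inflation=4.0, current_rate=rate)
# The feedback rule keeps tightening for as long as the surprise persists,
# whereas the feedforward rule never notices it was wrong.
```

The point of the sketch is not the particular rule but the information flow: the feedforward rule consumes only the forecast, while the feedback rule consumes the surprise.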

By | September 10, 2016|Adaptation, Central Banks, Communication, Decisions, Economics, Forecasting, Time inconsistency|Comments Off on Let’s ban forecasts in central banks

The Toxic Impact of News

One advantage of summer travel is it gives you extra perspective on the media frenzy back home. I was in Kyoto, Japan during the party conventions, and from thousands of miles away the political reporting seemed even more overdramatized and pointless than usual. How much do we know now about the US Presidential race that we didn’t three months ago? How much does today’s political news cycle affect an election still more than two months away?

Very little.

Rolf Dobelli wrote a book, The Art of Thinking Clearly, that examined 99 biases. He saved what he thought was perhaps the most important for last.

We are incredibly well informed, yet we know incredibly little. Why? Because two centuries ago, we invented a toxic form of knowledge called “news.” News is to the mind what sugar is to the body: appetizing, easy to digest— and highly destructive in the long run.

News is irrelevant and a waste of time, he argues.

In the past twelve months, you have probably consumed about ten thousand news snippets— perhaps as many as thirty per day. Be very honest: Name one of them, just one that helped you make a better decision— for your life, your career, or your business— compared with not having this piece of news. No one I have asked has been able to name more than two useful news stories— out of ten thousand.

Of course, in financial markets there are plenty of people who obsessively track every small piece of information, although algorithms react to snippets of news far faster than any trader these days.

So what kind of information is useful? It is information that lets you solve problems, and that usually means information that helps you test your assumptions and approach. But testing assumptions is usually the last thing that people do. All that political reporting usually just confirms what people think they already know.

By | August 30, 2016|Decisions|Comments Off on The Toxic Impact of News

The flaw in international law, and The Chilcot Report

People pay too much attention to their forecasts (which are unreliable) and too little to their assumptions, and that often gets them into serious trouble. I argued in the last post that the assumption driving much EU integration – that international law and international organization are the foundation of the last seventy years of peace in Europe – is not always true.

So what else may have kept the peace in Europe for the last seventy years? What worked, if international law sometimes doesn't work? Think for a moment.

It isn't the same as the question of whether you think international law is an ideal, a moral aspiration, or a nice idea, but, again, what actually works. We all know people who are wonderfully nice, but maybe should not be entrusted with arranging your summer trip, or running a company, or handling air traffic control for inbound flights at LaGuardia. You may also think it is ideal and moral that everyone should be honest. But you probably locked your front door when you left home this morning too. So what actually kept the peace, if not the EU?

Might it have something to do with the US deploying hundreds of thousands of troops in Europe, a chain of air bases from Keflavik in Iceland to Incirlik in Turkey, and the Sixth Fleet in the Mediterranean? Not to mention the threat of thermonuclear escalation if anyone started a war. The US assumed much of the security burden of Europe, and strongly supported European rebuilding from the Marshall Plan onwards, as well as the EU itself as a bulwark against communism. The Red Army might have been entirely unthreatening and peaceful and admired European law, but the citizens of Budapest and Prague who saw Soviet tanks on their streets in 1956 and 1968 might disagree. Yet western European countries could afford to reduce defense spending and focus on welfare and economics. In other words, the EU itself is more a symptom of the US stabilizing the security situation than the cause of security.

Let's say you splutter with outrage at the idea. There are definitely some people in Europe and elsewhere who are very uncomfortable with any positive consequence of American foreign policy, ever. Fine. How would you test that? What kind of implications would you expect to see? The explanations lead to very different places and feed different narratives. Seeing the question from different angles and questioning assumptions is usually essential to figuring out the right policy. And the things you feel uncomfortable about are the most likely place for blind spots, because you never look there.

In the same way, the Chilcot report on British participation in the Iraq war was published yesterday. Most of the reaction, like this Guardian editorial, is focused on poor prediction of consequences.

Let's agree the war was bad in retrospect. It is also clear that there was not enough effort to question the assumptions underlying intelligence assessments that Saddam Hussein still had weapons of mass destruction, or prepare for the aftermath.

But the press reaction doesn't really come to grips with a recurrent theme in the executive summary of the report. Why did Blair, a European multilateral liberal, stick so close to Bush, a Texan Republican? Was it to preserve the special relationship? Get invited to delightful Crawford, TX? Be a poodle and get dog biscuits?

Most media reactions lean towards thinking it was because Blair was a pathological liar, a vain foolish potential war criminal who ignored advice. They personalize the issue. But Blair was a highly skilled, highly popular leader before the war, not a cartoon villain, and he clearly had doubts about direct UK interests in Iraq. So what was he thinking?

In fact, Chilcot documents how Blair kept trying to push the US to go the multilateral route, to get UN resolutions, to persuade a coalition of allies rather than take unilateral action.

The report references a 2003 speech by Blair several times.

370. In Mr Blair’s view, the decision to stand alongside the US was in the UK’s long‑term national interests. In his speech of 18 March 2003, he argued that the handling of Iraq would:

“… determine the way in which Britain and the world confront the central security threat of the 21st century, the development of the United Nations, the relationship between Europe and the United States, the relations within the European Union and the way in which the United States engages with the rest of the world. So it could hardly be more important. It will determine the pattern of international politics for the next generation.”

In other words, it wasn't really about imminent threats from Iraq or whether it had WMD or supported terrorism for Blair. At best, those were fig leaves or PR concerns. It wasn't even primarily about the effect of disagreement on US-UK relations. It was to get the Americans to follow the norms of international law. It was to stop them acting outside the multilateral framework.

So consider this: international law didn't stop the Iraq war, because the Americans felt they couldn't rely on the UN framework. And Blair, as an internationalist progressive, went along to try to make sure the “pattern of international politics for the next generation” was based on international law and multilateral organizations. He tried to rein back American unilateral use of force by participating as a junior partner, to preserve international norms, albeit not enough for domestic opponents or some other EU governments.

So international law did not lead to peace but was the cause of at least the UK's participation in the Iraq war. Uncomfortable? Fine. But Blair may have stumbled into huge mistakes because of his assumptions. Forecasts and data and judgements got altered to fit them.

And that happens all the time.


By | July 7, 2016|Assumptions, Decisions, Europe, Security|0 Comments

How side-effects drive history (and Brexit)

It’s often the side-effects of decisions, mostly overlooked at the time, which turn out to be most significant. Polls are showing a significant lead for Brexit this morning, which would be one of the biggest geopolitical shocks of the decade. Of course, trusting polls has been a bad idea in recent times, and there are many who think the UK will draw back at the last moment. But let’s say there is at least some chance Britain may exit. How did this happen? It’s a chain of side-effects.

In October 1973, Syria and Egypt launched a surprise attack against Israel. The US supplied arms to Israel to defend itself.

One side-effect was an oil embargo by Arab oil producers against Western countries, which led to the first oil shock and a quadrupling in the price of oil. That naturally led to a huge transfer of wealth to the oil producers, including Saudi Arabia.

One side-effect was a huge increase in the influence and power of the Saudis, one of the most backward and retrogressive parts of the Islamic world, with “kings” allied to perhaps the most puritanical, backward religious sect in all of Islam. It is as if, in the United States, Bo and Luke Duke had suddenly become multibillionaire monarchs, kept in power by paying billions to the Ku Klux Klan every year.

One side-effect was that many billions of dollars were paid by the Saudis to promote the least tolerant, most aggressive forms of Islam all around the world.


One side-effect was the rise of Islamic terrorists like (Saudi) Osama Bin Laden, who attacked American targets, culminating in 9/11.

A side-effect of 9/11 was that the US attacked Iraq, a secular dictatorship which had not been directly involved in the strike on the US. The US won a decisive victory and overthrew Saddam Hussein much faster and with fewer losses than detractors forecast. Overconfident US officials removed Ba’ath party officials from the Iraqi government, and Sunnis feared they would lose their traditional dominance of the country.

One side-effect was that Iraq was destabilized and slid into a civil war that trapped the US for a decade, costing thousands of US military casualties and several trillion dollars, none of which had been anticipated.

One side-effect was that the US public became wholly averse to more boots on the ground in the Middle East. Another side-effect was that the turmoil in Iraq eventually spread to Syria. But public resistance meant the US refused to commit military forces, as did the UK and other EU countries.

One side-effect was a breakdown in Syria, and a huge wave of refugees headed towards Europe. Angela Merkel believed that setting no limits on refugee numbers was a moral choice, and over a million refugees flooded into Germany.

One side-effect was that European public opinion became agitated and alarmed about the migrant influx, which appeared to many to be accelerated by the EU’s open borders under the Schengen agreement. Populists who had already made advances over the previous decade suddenly benefitted from a new resurgence of public support. Meanwhile, Merkel dealt with concerns about Syrian migration by promising visa-free entry to Europe for Turks. It appeared to many that she and the EU had lost control of borders.

One side-effect was that immigration began to dominate over economic consequences in the UK Brexit debate, a focus that boosted the “Leave” side in the final weeks before the vote. The British, already dealing with heavy migration from within the EU,  feared they could not control their borders.

So a “leave” vote in Britain is now possible, partly caused by a civil war thousands of miles away in Damascus and Egyptian attacks on the Sinai forty years ago.

And one side-effect may be similar referendums in other countries and a partial break-up of the EU itself. In retrospect it is possible that Merkel’s decision to admit refugees without limit may, as a side-effect, have unintentionally wrecked seven decades of German promotion of EU integration.

Of course, you can dispute the exact causation, and many other factors were involved too. You could easily construct other chains of unintended side-effects and argue it in different ways, and it’s partially a game. The point, however, is that direct choices and intentions and calculations are only a small part of what happens in the international economy and international affairs. It’s often the side-effects that matter most. Chains of cause and consequence quickly get too involved and intricate for anyone to figure out, often even in retrospect, let alone in predicting the future.

I’ve often argued that overconfident prediction is usually a sign of self-delusion. In fact, it’s often the things that don’t even occur to us to predict that matter most, not just the things we recognize we get wrong.

So it’s not your models or forecasts or ideologies that matter. Instead, it is being on the lookout for side-effects and unintended consequences, especially those you’d prefer not to see at all. If you see them, you can at least try to do something about them. Instead, most elites blunder forward blindly, clinging to their preferred model and plans.




By | June 11, 2016|Assumptions, Current Events, Decisions, Systems and Complexity|Comments Off on How side-effects drive history (and Brexit)

Believing your own propaganda

I argued in the last post that people with strong principles, like much of the GOP establishment, often end up with strong blindspots too. More educated groups frequently like to think in terms of abstract principles. But it often needs a bit of a leap to see how other people think about their ideals.  If you’re a liberal, for example, you might find it distasteful to imagine that the GOP establishment has any principles at all, rather than simply manipulating their followers to make themselves rich.

You could conjugate the thought pattern like this:

  • I selflessly act according to enlightened, timeless principles which are proven to enhance the common good.
  • You are a rigid ideologue who has got stuck in an interlocking spider web of false (if sincere) beliefs and can’t see the truth.
  • He or She is a hypocrite and  cynical manipulator who talks about principles and ideals only to conceal or distract people from their vicious plans and nasty self-interest.

Of course, depending who you ask, these can all refer to the same person. Your own lofty principles may look like the most base self-interest to others. That’s normal politics.

People’s beliefs are mostly sincere.  After all, people can enact beliefs and appeal to moral principles to justify their self-interest without even being aware they are doing so.

This, however, is the problem. Your own principles may bring you comfort and reassurance, and serve as a motivating force for your own side, convincing your allies of their righteousness. They may fit together in a coherent, elegant way, and reinforce each other.

But they are usually not much good for persuading anyone else, who will appeal to their own, different principles. Nor, partly because of that fact, is intense belief in your own principles much good for seeing potential problems, or things that can go wrong, or things that don’t fit into your assumptions.

But much of politics, and sometimes business decision making,  is based on exactly such strong principles and ideologies. Consider “maximize shareholder value” as a business principle, for example.

This makes such belief frameworks fragile and more likely to produce huge mistakes, because believers don’t see things coming that don’t easily fit their neat framework. It also makes it more difficult to find compromises.

To make things work or be successful, you have to be able to see gaps and flaws and potential problems, and do something about them.  It doesn’t mean the other side is right, or your own principles aren’t valuable.  But most of the pressure – to seem confident, in control, to have the right answers and the right moral stance – tends to blind people to the things which sneak up unseen and can wreck their plans and doom their hopes.

You just have to find a way to be alert.


By | March 14, 2016|Assumptions, Decisions|Comments Off on Believing your own propaganda

Woodenheaded Disasters

So we have a nervous start to the New Year, with a plunge in the Chinese stock market and tensions in the Gulf. There is a widespread sense that the establishment in many countries is “out of touch” and leadership is faltering. I was arguing the other day that there is often a deeper pattern to these problems. The Republican establishment in the US ignored all evidence that didn’t match their preconceptions until an anointed prince like Jeb Bush  was running at only 4% in primary polling.

This is in fact one of the deepest patterns in history. The great historian Barbara Tuchman pondered in The March of Folly: From Troy to Vietnam why policymakers and leaders so often do things which seem self-defeating and stupid.

A phenomenon notable throughout history regardless of place or period is the pursuit by governments of policies contrary to their own interests. … Why does intelligent mental process seem so often not to function?

She called it Wooden-headedness.

Wooden-headedness, the source of self-deception, is a factor that plays a remarkably large role in government. It consists in assessing a situation in terms of preconceived fixed notions while ignoring or rejecting any contrary signs. It is acting according to wish while not allowing oneself to be deflected by the facts.

It is all too easy to decide a policy was wrong in retrospect, of course. But wooden-headedness means policies or decisions which are self-defeating and ruinous based on things which are clearly apparent at the time. And it is remarkably common. She traces it from ancient Greece through the policies of Renaissance Popes, Philip II of Spain, and the decision of the Japanese government to go to war with the United States in 1941. US intervention in Vietnam in turn, led by the “best and the brightest” in the Kennedy administration, was beset by folly.

Folly’s appearance is independent of era or locality; it is timeless and universal, although the habits and beliefs of a particular time determine the form it takes.

One of the most interesting examples is the folly of the British government in its policy on its American colonies. The governing elite believed that trade with and possession of the thirteen colonies was utterly essential to Britain’s wealth and future, but insisted on the right to tax without the colonists’ consent. It was, she says, the unworkable pursued at the expense of the possible.

Instead of confronting trade-offs or looking for alternatives, politicians in London were largely diverted by the game of faction, who’s in, who’s out.  And here is the most remarkable fact she notes: No British ministers visited America between 1763 and 1775 despite thinking the fate of the empire depended on possession of America.

There is often remarkable reluctance to go and look at the facts on the ground with a fresh eye. And it is not easy to do, whether it is sailing the Atlantic in the 1760s or working out why a business product is withering in the marketplace.  But it is also essential. Instead of prediction, it is a matter of taking a fresh look at what is already there. It is about discovering what you’re not seeing. It is about blind spots.

Most predictions about what will happen in 2016 will turn out to be wrong, of course. But at least we can try to look for contrary evidence and test our assumptions, so we are not woodenheaded. Like most establishments and bureaucracies.


By | January 4, 2016|Assumptions, Books, Decisions|Comments Off on Woodenheaded Disasters

There is a deeper pattern in why the GOP Establishment is in trouble

“What went wrong?” is the underlying question in David Frum’s much-talked-about piece in the Atlantic on the “Republican Revolt”. How could an establishment candidate like Jeb Bush, who was expected to be almost irresistible and has raised more than $100 million, now be running at 3-4% in the polls? How can the GOP primary race have been hijacked by a reality TV star at the expense of experienced Governors and Senators?

Let’s leave aside the betting on who will finally get the nomination, or how good or bad Trump is, as  most of the media focuses on little else and most journalistic speculation is essentially useless. To be sure, despite his consistent lead, the Donald may not be inevitable as the field thins out and the ‘ground game’ of turnout becomes important.

It’s just that as of now, some of the most powerful, elite  and supposedly expert people in US politics look like losers. This is really not where they want to be.

So let’s coolly step back and  look at the pattern. How could the establishment miscalculate so badly?  What does this tell us about why decisions go wrong? Could other elites have the same problem?

In a nutshell, people refused to see contrary evidence.

Many establishment policies were not popular with the GOP base, Frum says. Less than 17% favored cuts in social security, for example.  Most wanted more deportations of illegal immigrants, the exact opposite of a pathway to citizenship.

As a class, big Republican donors could not see any of this, or would not. So neither did the politicians who depend upon them. Against all evidence, both groups interpreted the Tea Party as a mass movement in favor of the agenda of the Wall Street Journal editorial page. One of the more dangerous pleasures of great wealth is that you never have to hear anyone tell you that you are completely wrong.

They could not see things that did not fit in their frame. They could not learn from errors or defeats. The establishment had been shocked at Romney’s loss in 2012, for example.

And yet, within hours of Romney’s defeat, Republican donors, talkers, and officials converged on the maximally self-exculpating explanation.

That meant Republican leaders decided the problem was Romney’s talk about more immigration enforcement alienating Latinos, the very issue where the establishment differed most with their base and where hard evidence of votes to be gained in the center was (Frum says)  mostly lacking.

Otherwise, the party yielded on nothing and doubled down on everything. No U-turns. No compromises.

Instead of adjusting to minimize or forestall the chance of a revolt, or finding a smart alternative way forward, the leadership  interpreted things in self-serving terms and escalated.

This, of course, is a problem that is extremely widespread and not confined to the GOP. We saw exactly the same thing on all sides in the last midterm elections.

Perhaps the establishment will be able to adapt now that their problem is (you would think) undeniable and it is darkest before dawn for them. Or they can double down again. But serious damage has been done, and some ground rules of US politics – like the importance of raising money – have been rewritten.

Here’s the takeaway. Once again we see in this example that the fundamental problem with decisions is not really bias, or lack of formal rigor, or failure to gather data. It’s that people most often don’t change their minds in response to evidence. Or they fail to adapt until so late in the game that all the choices are bad. That’s what we need to fix, and fixing it would save countless billions of dollars and tens of thousands of companies and careers.

The most brilliant investors intuitively realize this. But as this incident demonstrates, most leaders and managers and policymakers do not. They are surrounded by yes-men. They stick with the familiar. They are clever enough to explain away facts which do not fit their narrative.

People get stuck, and persist too long in self-delusion. They fail to adapt and move when they still have the chance. If you can mitigate that, you can do more than most crystal balls could ever do. After all, if the only thing you see in a crystal ball is your own wishful thinking, what good is it?


By | December 27, 2015|Confirmation bias, Current Events, Decisions, Politics|Comments Off on There is a deeper pattern in why the GOP Establishment is in trouble

Rogues or Blind Spots?

I looked at how Volkswagen could go so wrong the other day. There is almost always a rush to blame human error or subordinates, I said. Some of them may be genuinely criminal and deserve jail time. But the problem is more usually also systemic: management doesn't see or want to see problems coming.

Now here's a piece in Harvard Business Review on the issue. Of course, it was rogue employees, says Volkswagen management.

Testifying unhappily before America’s Congress, Volkswagen of America CEO Michael Horn adamantly and defiantly identified the true authors of his company’s disastrous “defeat device” deception: “This was not a corporate decision. No board meeting or supervisory meeting has authorized this,” Horn declared. “This was a couple of rogue software engineers who put this in for whatever reason.”

Ach, du lieber! Put aside for the moment what this testimony implies about the auto giant’s purported culture of engineering excellence. Look instead at what’s revealed about Wolfsburg’s managerial oversight: utter and abysmal failure. No wonder Chairman and CEO Martin Winterkorn had to resign. His “tone at the top” let roguery take root.

The author is an MIT expert on the software processes at issue.

Always look to the leadership. Where were Volkswagen’s code reviews? Who took pride and ownership in the code that makes Volkswagen and Audi cars run? For digitally-driven innovators, code reviews are integral to healthy software cultures and quality software development.

Good code is integral to how cars work now, he says. And to write good code the Googles and Facebooks of the world have code review systems with some form of openness, even external advice or review, so that murky code is found out.

As we learned from financial fiascoes and what will be affirmed as Volkswagen’s software saga unwinds, rogues don’t exist in spite of top management oversight, they succeed because of top management oversight.

It can be comforting, in a way, to think that problems or bad decisions occur only because of individual stupidity or bias or error or ignorance. If people, and organizations as a whole, don't even consciously see many problems coming, or ignore trade-offs, it's more disturbing and harder to solve. Most information and analysis will tend to reinforce their point of view. Single-minded mania also often produces short-run financial success.

Until the darkness comes. Leaders have to be held accountable for finding their blind spots. They can't claim ignorance after the fact.


Volkswagen: What were they thinking?

It may turn into one of the most spectacular corporate disasters in history. What were Volkswagen thinking? Even after it became apparent that outsiders had noticed a discrepancy in emissions performance in on-the-road tests, the company still kept stonewalling and continued to sell cars with the shady software routines.

We won't know the murky, pathological details for a while. But understanding how this happens is urgent. If you ignore this kind of insidious problem, billion-dollar losses and criminal prosecutions can occur.

In fact, it's usually not just one or two “bad apples,” unethical criminals who actively choose stupid courses of action, although it often suits politicians and the media to believe so. It's a system phenomenon, according to some of the classic studies (often Scandinavian) like Rasmussen and Svedung.

… court reports from several accidents such as Bhopal, Flixborough, Zeebrügge, and Chernobyl demonstrate that they have not been caused by a coincidence of independent failures and human errors. They were the effects of a systematic migration of organizational behavior toward accident under the influence of pressure toward cost-effectiveness in an aggressive, competitive environment.

It's not likely anyone formally sat down and did an expected utility calculation, weighing the financial and other benefits of installing cheat software against the probability of being found out multiplied by the consequent losses. So the usual way of thinking formally about decisions doesn't easily apply.
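For illustration, the calculation that likely never happened can be sketched in a few lines. Every figure below is hypothetical, chosen only to show the shape of the trade-off, not to estimate Volkswagen's actual numbers.

```python
# A minimal sketch of the expected-utility comparison the text says nobody ran.
# All figures are hypothetical, for illustration only.

def expected_value(payoff_if_undetected, p_detected, loss_if_detected):
    """Expected payoff of cheating: the gain if undetected, weighted by the
    odds of getting away with it, minus the detection-weighted downside."""
    return (1 - p_detected) * payoff_if_undetected - p_detected * loss_if_detected

# Hypothetical numbers (in $bn): a modest compliance saving versus
# catastrophic fines, recalls, and lost reputation if the cheat is exposed.
cheat = expected_value(payoff_if_undetected=2.0, p_detected=0.3, loss_if_detected=30.0)
comply = 0.0  # baseline: no extra gain, but no extra risk either

print(f"expected value of cheating: {cheat:+.1f} $bn")
print(f"cheating rational? {cheat > comply}")
```

Even with a generous payoff and a modest detection probability, the asymmetry of the losses makes cheating a losing bet on paper; which is exactly why the interesting question is not the arithmetic but why nobody stepped back to do it.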

It's much more likely that it didn't occur to anyone in the company to step back and think it through. They didn't see the full dimensions of the problem. They denied there was a problem. They had blind spots.

It can often be hard to even find any point at which decisions were formally made. They just … happen. Rasmussen & co again:

In traditional decision research ‘decisions’ have been perceived as discrete processes that can be separated from the context and studied as an isolated phenomenon. However, in a familiar work environment actors are immersed in the work context for extended periods; they know by heart the normal flow of activities and the action alternatives available. During familiar situations, therefore, knowledge-based, analytical reasoning and planning are replaced by a simple skill- and rule-based choice among familiar action alternatives, that is, on practice and know-how.

Instead, the problem is likely to be a combination of the following:

  • Ignoring trade-offs at the top. Major accidents happen all the time in corporations because the benefits of cutting corners are often tangible, quantifiable and immediate, while the costs are longer-term, diffuse and less directly accountable. They will be someone else's problem. The result is that longer-term, more important goals get ignored in practice. Indeed, defining something as a technical problem, or setting strict metrics, often embeds ignoring a set of trade-offs. So people never think about it and don't see problems coming.
  • Trade-offs can also happen because general orders come from the top – make it better, faster, cheaper and also cut costs – and reality has to be confronted lower down the line, without formally acknowledging choices have to be made. Subordinates have to break the formal rules to make it work. Violating policies in some way is a de facto requirement to keep your job, and then it is deemed “human error” when something goes wrong. The top decision-maker perhaps didn't formally order a deviation: but he made it inevitable. The system migrates to the boundaries of acceptable performance as lots of local, contextual decisions and non-decisions accumulate.
  • People make faulty assumptions, usually without realizing it. For example, did anyone think through how easy it was to conduct independent on-the-road tests? That was a critical assumption in whether they would be found out.
  • If problems occur, it can become taboo to even mention them, particularly when bosses are implicated. Organizations are extremely good at not discussing things and avoiding clearly obvious contrary information. People lack the courage to speak up. There is no feedback loop.
  • Finally, if things do go wrong, leaders have a tendency to escalate, to go for double-or-quits. And lose.

There scarcely seems to be a profession or industry or country without problems like this. The Pope was just in New York apologizing for years of Church neglect of the child abuse problem, for example.

But that does not mean that people are not culpable and accountable and liable for things they should have seen and dealt with. Nor is it confined to ethics or regulation. It is also a matter of seeing opportunity. You should see things. But how? That's what I'm interested in.

It's essential for organizational survival to confront these problems of misperceptions and myopia. They're system problems. And they are everywhere. Who blows up next?

Unlearn what you know, or go extinct


People can show remarkable dexterity (or self-deception) at deferring blame when a situation goes badly wrong, like a company collapse, or a foreign policy crisis. Or the FBI knocking on your door, asking for hard drives with top secret e-mails on them. How could someone have foreseen it? It was business-as-usual, everyone did it, it was tried and tested. The problem was impossible to see and therefore no-one is to blame. Or just bad luck.

Unfortunately, that's almost never true.

In every crisis we studied, the top managers received accurate warnings and diagnoses from some of their subordinates, but they paid no attention to them. Indeed, they sometimes laughed at them.

That’s the conclusion from one of the classic studies of organizational failures, Nystrom & Starbuck in 1984. Some people in a company almost always see problems coming (we’ve seen other research about “predictable surprises” here). But senior managers find it extremely difficult to “unlearn” parts of what they know.

Organizations succumb to crises largely because their top managers, bolstered by recollections of past successes, live in worlds circumscribed by their cognitive structures. Top managers misperceive events and rationalize their organizations’ failures. .. Because top managers adamantly cling to their beliefs and perceptions, few turnaround options exist. And because organizations first respond to crises with superficial remedies and delays, they later must take severe actions to escape demise.

Instead, the researchers say, managers try to “weather the storm” by tightening budgets, cutting wages, introducing new metrics or redoubling efforts on what has worked before. They typically waste time, and defer choices. In the meantime, the firm filters out contrary evidence, and often gets even more entrenched in its ways. This is normal corporate life.

… well-meaning colleagues and subordinates normally distort or silence warnings or dissents. .. Moreover, research shows that people (including top managers) tend to ignore warnings of trouble and interpret nearly all messages as confirming the rightness of their beliefs. They blame dissents on ignorance or bad intentions – the dissenting subordinates or outsiders lack a top manager’s perspective, or they’re just promoting their self-interests, or they’re the kind of people who would bellyache about almost anything. Quite often, dissenters and bearers of ill tidings are forced to leave organizations or they quit in disgust, thus ending the dissonance.

And then one morning it turns out it’s too late, and there is no more time.

The only solution that reliably works, Nystrom and Starbuck say, is to fire the whole top management team if there are signs of a crisis. All of them.

But top managers show an understandable lack of enthusiasm for the idea that organizations have to replace their top managers en masse in order to escape from serious crises. This reluctance partially explains why so few organizations survive crises.

The only real hope is to adapt before you have to. But the much more likely outcome is senior decision-makers end up eliminated, and destroy their companies and their company towns and employees and stakeholders along the way.

Just think about what might fix this. It isn’t more information or big data, as it will probably be ignored or discounted. It isn’t forecasts or technical reports or new budgets or additional sales effort. It isn’t better or more rigorous theory, or forcing the troops to work harder.

It’s a matter of focusing on and looking for signs about how people change their minds. It’s about figuring out what might count as contrary evidence in advance, and sticking to it. If you’re a senior decision-maker, this might be the only thing that saves you, before some outside investor or opponent decides the only hope is to wipe the slate clean, including you. If you figure out you need it in time. Will you?