
Noticing the “predictable surprises”

If there’s one deep lesson I’ve learned from years delving into policy and decision-making, it’s that the biggest surprises are hidden in plain sight. Of course, there are genuine (temporary) secrets out there, and many people try to make a living from issuing spurious predictions about the future. But the things which most move the needle are a matter of noticing transparent, open things in the present, not looking for scoops or trying to foretell the future. It’s a matter of actually listening to what you are hearing.

Ironically, by avoiding mistakes about the current state of things, you will almost certainly anticipate the future better anyway, because you won’t be fooling yourself about the situation.

Here’s another way to look at the issue. Max Bazerman teaches decision-making at Harvard. He and a co-author wrote a book named Predictable Surprises: The Disasters You Should Have Seen Coming, and How to Prevent Them. A predictable surprise, they say, is “an event or set of events that take an individual or group by surprise, despite prior awareness of all the information necessary to predict the events and their consequences.”

Take the 9/11 attacks, they say. Granted, it was hard to predict that the particular hijackers would attack particular targets on a specific date. But for over ten years there had been ever-increasing evidence that airline security was a deepening problem, and little or nothing was done about it. People had been warning about conflicts of interest in accounting for a decade before Enron and Arthur Andersen melted down, but preventative action was avoided. The same thing happens almost every day in one corporation or government department or another.

The key traits, they say, are:

  • Leaders knew a problem existed and that it would not solve itself.
  • Organizational members knew the problem was getting worse over time, so a bad outcome was almost inevitable.
  • They also knew that fixing the problem would incur costs in the present, while the benefits of taking action would be delayed.
  • Politicians and leaders know that shareholders or constituents will notice the immediate cost, but they also suspect they will get little reward for avoiding a much worse disaster that is ambiguous and distant – so they “often cross their fingers and hope for the best.”
  • In any case, people typically like to maintain the status quo. If there is no stark crisis, we tend to keep doing things the way we have always done them. “Acting to avoid a predictable surprise requires a decision to act against this bias and to change the status quo. By contrast, most organizations change incrementally, preferring short-term fixes to long-term solutions.”
  • And usually a “small vocal minority benefits from inaction” and blocks action for its own private benefit, even when the organization is desperate for a solution.

The result is that, to an astonishing extent, people don’t take action based on what they know, or they deny contrary information altogether. It’s one of the many reasons organizations get into trouble, and one reason why in so many crises we don’t need more information or intelligence. We need to act on what we already have.

What can be done about it? The authors offer a recognition-prioritization-mobilization sequence of steps to deal with it.

But it starts with recognition.

Positive illusions, self-serving biases, and the tendency to discount the future may prevent people from acknowledging that a problem is emerging. If their state of denial is strong enough, they may not even “see” the storm clouds gathering.

Who can say they haven’t seen examples of this? How many organizations take steps to guard against it?

Without recognition, nothing else works. You solve the wrong problems. You forecast the wrong things. That is the primary challenge organizations face: the assumptions people make are the prime cause of trouble. There are many forces at work in every organization to distort or reduce recognition. You need to be aware of them and take active steps to deal with them – or meekly wait for the predictable surprise to find you.

 

 


How Politics can go Loopy

The midterm elections today will likely just produce the usual cyclical swing against the party in power.  The national debate has been particularly arid this year, largely focused on targeted messages to mobilize the base instead of changing people’s minds.

But much of the difference between people, and points of view, is not about the direct or immediate effects of particular policies, anyway. It’s not about immediate facts, or even always about immediate interests. According to Robert Jervis, in System Effects: Complexity in Political and Social Life,

At the heart of many arguments lie differing beliefs – or intuitions – about the feedbacks that are operating. (my bold)

It’s because, as we saw before, most people find it very hard to think in systems terms. Politicians are aware of indirect effects, to be sure, and often present that awareness as subtlety or nuance. But they usually seize on one particular story about one particular indirect feedback loop, instead of recognizing that in any complex system there are multiple positive and negative loops. Some of those loops improve a situation. Some make it worse, or offset other effects. Feedback effects operate on different timescales and through different channels. Any particular decision is likely to have both positive and negative effects.

The question is not whether one particular story is plausible, but how you net it all together.
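
To make the point concrete, here is a toy sketch in Python – invented parameters, not anything drawn from a real policy model – in which one reinforcing loop and one balancing loop act on the same quantity. The net of the two, not either story on its own, determines whether the system settles or runs away.

    # Toy sketch (invented parameters, not a real policy model): a reinforcing
    # loop and a balancing loop act on the same quantity x. Whether x settles
    # or runs away depends on the net of the two loops, not on either story alone.

    def simulate(x0, r_plus, r_minus, target, steps=50, dt=1.0):
        x = x0
        path = [x]
        for _ in range(steps):
            reinforcing = r_plus * x            # positive loop: growth feeds growth
            balancing = r_minus * (target - x)  # negative loop: pulls x back toward a target
            x = x + dt * (reinforcing + balancing)
            path.append(x)
        return path

    # Same starting point, different loop strengths, very different outcomes.
    print(simulate(1.0, r_plus=0.05, r_minus=0.20, target=1.0)[-1])  # balancing dominates: settles
    print(simulate(1.0, r_plus=0.25, r_minus=0.05, target=1.0)[-1])  # reinforcing dominates: runs away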

Take the example of Ebola again. The core of the administration’s case was that instituting stricter visa controls or quarantine in the US might have the indirect effect of making it harder to send personnel and supplies to Africa, and that containing the disease in Africa was essential.

That is likely true. It is a story which seems coherent and plausible. But there is generally no attempt to identify, let alone quantify or measure, other loops which might operate as well, including ones with a longer lag time. Airlines may stop flying to West Africa in any case if their crews fear infection, for example. Reducing the chance of transmission outside West Africa might enable greater focus of resources or experienced personnel on the region. More mistakes in handling US cases (as apparently happened in Dallas) might significantly undermine public trust in medical and political authorities. You can imagine many other potential indirect effects.

The underlying point is this: identifying just one narrative, one loop, is usually incomplete.

Here’s another example, at the expense of conservatives this time. Much US foreign policy debate effectively revolves around “domino theory”, and infamously so in the case of the Vietnam war.  The argument from hawks in the 1960s was that if South Vietnam fell, other developing countries would also fall like dominoes. So even though Vietnam was in itself of little or no strategic interest or value to the United States, it was nonetheless essential to fight communism in jungles near the Laos border –  or before long one would be fighting communism in San Francisco. Jervis again:

More central to international politics is the dispute between balance of power and the domino theory: whether (or under what conditions) states balance against a threat rather than climbing on the bandwagon of the stronger and growing side.

You can tell a story either way: a narrative about positive feedback (one victory propels your enemy to even more aggression) or about balancing feedback (enemies become overconfident and overstretch, provoke coalitions against them, and alienate allies and supporters – or our own forceful action produces rage and desperation and becomes a “recruiting agent for terrorism”).

The same applies to the current state of the Middle East, where I have a lively debate going with some conservative friends who believe that the US should commit massive ground forces to contain ISIS in the Middle East, or “small wars will turn into big wars.” It’s in essence a belief that positive feedback will dominate negative/balancing feedback, domino-style.

But you can’t just assume such a narrative will play out in reality. South Vietnam did fall, after all. But what happened was that the Soviet Union ended up overreaching in adventures like the invasion of Afghanistan. The other side collapsed.

The lure of a particular narrative, of focusing on one loop in a system, is almost overwhelming for many people, however. It’s related to the tendency to seize on one obvious alternative in decisions, with limited or no search for better or more complete or relevant alternatives.

The answer is not to cherry-pick particular narratives about feedback loops and indirect effects which happen to correspond with your prior preferences. That usually turns into wishful thinking and confirmation bias. Instead, you need to get a feel for the system as a whole, and have a way to observe, measure, and test all (or most of) the loops in operation.

Inflation expectations are vastly important – and vastly misunderstood

 

The collapse in European bond yields has been truly historic this year, with German 10-year bunds now hovering around 0.9%. Danger lights are flashing. There are obvious explanations: above all, growing deflation fears, as well as faltering economic data and Draghi’s comments last week about fiscal support and QE. Add to that some safe-haven flows related to the fighting in Ukraine. The ECB is now in full alarm mode because inflation expectations are dropping rapidly.

The terrifying thing about deflation is that expectations of falling prices can feed on themselves and become self-fulfilling, creating a chain reaction of deep problems in a modern economy. The question is what policymakers can do about it.

The dirty secret about modern central banking is that monetary theorists understand very little about the process of expectation formation. That is why so much policy debate drifts into irrelevance.

Economic policymakers usually turn it into a debate about credibility, stemming from the Kydland-Prescott academic tradition, which focuses on time consistency and credible commitments. It is often rational to break previous commitments, so why should anyone believe current promises? It also gets linked to another somewhat stale debate about “adaptive” versus “rational” expectations. For example, much of the difference of opinion on the FOMC comes down in practice to disagreement about how forward-looking, rather than backward-looking, consumers are when it comes to inflation expectations. Do consumers and businesses just observe the recent trend, or do they anticipate problems before they arrive?
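
To see what is at stake, here is a deliberately crude numerical sketch in Python. The inflation path, the weights, and the updating rules are all invented for illustration; this is not a model anyone at the Fed actually uses.

    # Toy contrast, illustrative numbers only: backward-looking "adaptive"
    # expectations versus a crude, partly forward-looking rule that puts some
    # weight on an announced target. Neither is a real FOMC model.

    actual_inflation = [2.0, 1.5, 1.0, 0.5, 0.3, 0.3]  # made-up disinflation path
    announced_target = 2.0

    def adaptive(history, lam=0.3, start=2.0):
        # E_t = E_{t-1} + lam * (pi_{t-1} - E_{t-1}): drift toward recent experience
        e = start
        out = []
        for pi in history:
            e = e + lam * (pi - e)
            out.append(round(e, 2))
        return out

    def partly_forward(history, weight_on_target=0.5, lam=0.3, start=2.0):
        # blend last period's inflation with the announced target before updating
        e = start
        out = []
        for pi in history:
            anchor = weight_on_target * announced_target + (1 - weight_on_target) * pi
            e = e + lam * (anchor - e)
            out.append(round(e, 2))
        return out

    print(adaptive(actual_inflation))        # expectations sag along with the data
    print(partly_forward(actual_inflation))  # expectations stay closer to the target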

The amount of actual empirical work on the matter is negligible, however. It is mostly prescriptive theory rather than descriptive or experimental work. And general thinking about credibility in policy debates tends to be sloppy, with dozens of traps. One major lesson is that credibility is heavily dependent on context; it is not something you can apply uniformly to any situation.

The importance of expectations has, however, led to much more emphasis on policy communication in the last few years, as a matter of practical necessity (and desperation). Monetary policy has become more like theater than math or engineering. How can you sound more credible? How can you make statements believable? How can you get people to understand your approach? Hence Yellen’s endless communication-committee work on the FOMC before taking the top job.

But the deeper truth is academic economists just don’t have the skills or tools to understand much about communication, because of course  it falls into psychology and organizational science and rhetoric and persuasion instead.  Parsimonious mathematical models are not adequate guides in these realms. And people can reasonably doubt whether policymakers have the skill and capability to deliver, whatever their intentions may be.

Instead, it comes down to asking why people change their minds. That is my main focus in policy issues. Everyone knows from their own experience that attitude change is often a drawn-out, fraught, conflicted process. People often see only what they want to see for long periods of time. They can be influenced by networks and relationships and trust, by familiarity and the salience of issues within their larger sphere. They observe facts, but can explain them away or ignore them. (Watch any TV political debate.) There are long time lags and considerable inertia. And many people never change their beliefs at all, regardless of the evidence.

It is also a classic stock-and-flow systems problem. Inflation expectations in particular are usually very sticky, and take a long time to change. Think of a bathtub: it potentially takes a lot of drip-drip information (flow) to change the amount of water in the bathtub (stock), but the system can also change abruptly (the bathtub overflows). People frequently forget that many policy issues have major stocks – i.e. bathtubs, sinks, buffers – contained within them, and so do not react in a linear way to marginal change. There are complex positive and negative feedback loops, and decisive events can change things rapidly. Expectations aren’t simply “adaptive” or “rational” but complex.
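
Here is the bathtub in code – a minimal Python sketch with invented numbers, not an inflation model – just to show how a stock can absorb a steady drip of news for a long time and then cross a threshold abruptly.

    # Bathtub sketch of the stock-and-flow point (invented numbers, not an
    # inflation model): the stock changes only slowly under a small net drip
    # of unsettling news, then behaves very differently once a threshold is crossed.

    capacity = 100.0   # the rim of the bathtub
    stock = 90.0       # current level, e.g. how unanchored expectations have become
    drip_in = 0.8      # steady inflow of unsettling news per period
    drain_out = 0.5    # offsetting outflow: credibility, inertia, habit

    for period in range(1, 41):
        stock += drip_in - drain_out  # the stock integrates the net flow
        if stock > capacity:
            print(f"period {period}: threshold crossed - behavior changes abruptly")
            break
    else:
        print("the stock drifted, but never crossed the threshold")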

Policy tools like fiscal policy and QE most likely do not make much difference to consumer expectations, certainly in the short term. Just ask the Japanese how successful QE and massive fiscal spending have been in putting their economy back on a sustained growth path.

Because there is so much inertia in inflation expectations, it’s more likely that after a few months European expectations will drift back towards 2% again, and the ECB will claim the credit for something they had little to do with. But if inflation expectations really  are becoming destabilized, it could take five or ten years and vast pain to fix the problem.

System Blindness

Good news: GDP grew at 4% and the winter surprise has faded. As usual, there is endless analysis available for free. These days we swim in a bottomless ocean of economic commentary.

Let’s turn to something that might give people an edge in making decisions instead.  One of the main reasons people and companies get into trouble is that they don’t think in terms of systems. I noted one major source of this approach was Jay Forrester’s work at MIT beginning in the 1960s. His successor at MIT is John Sterman, who calls it system blindness.

In his best-known shorter paper, Sterman documents the multiple problems that decision-makers have in dealing with dynamic complexity. We haven’t evolved to deal with complex systems, he says. Instead, we are much quicker to deal with things that have obvious, direct, immediate, local causes. (See bear. Run.)

So people have inherent deep difficulties with feedback loops, for example.

Like organisms, social systems contain intricate networks of feedback processes, both self-reinforcing (positive) and self-correcting (negative) loops. However, studies show that people recognize few feedbacks; rather, people usually think in short, causal chains, tend to assume each effect has a single cause, and often cease their search for explanations when the first sufficient cause is found. Failure to focus on feedback in policy design has critical consequences.

As a result, policies and decisions often become actively counterproductive, producing unexpected side-effects and counter-reactions. Such “policy resistance” means major decisions frequently have the opposite effect to the one intended (such as building major roads only to produce even more congestion, or suppressing forest fires only to produce much bigger blazes).

People also have serious problems understanding time and delays, which often leads to oversteering at the wrong times and to wild oscillations and swings. They have difficulty with short-term actions that produce long-term effects. They assume causes must be proportionate to effects. (Think of the long and variable lags in monetary policy, and the tendency to oversteer.)
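
A tiny Python sketch (made-up numbers again) shows how delay alone produces that oversteering: the same correction rule that is perfectly stable with prompt feedback overshoots and swings once the signal it reacts to arrives a few periods late.

    # Illustrative sketch (made-up numbers): steering a quantity toward a target
    # using a signal that arrives with a delay. The same correction strength that
    # is stable with prompt feedback produces overshoot and oscillation with a lag.

    def steer(delay, gain=0.6, target=0.0, start=10.0, steps=30):
        history = [start]
        x = start
        for _ in range(steps):
            observed = history[max(0, len(history) - 1 - delay)]  # possibly stale reading
            x = x + gain * (target - observed)                    # correct toward the target
            history.append(x)
        return [round(v, 1) for v in history]

    print(steer(delay=0))  # settles smoothly toward the target
    print(steer(delay=3))  # overshoots and swings back and forth, ever more widely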

Decision-makers have problems with stocks and flows. In essence, a stock is the water in the bath; a flow is the water running from the tap.

People have poor intuitive understanding of the process of accumulation. Most people assume that system inputs and outputs are correlated (e.g., the higher the federal budget deficit, the greater the national debt will be). However, stocks integrate (accumulate) their net inflows. A stock rises even as its net inflow falls, as long as the net inflow is positive: the national debt rises even as the deficit falls—debt falls only when the government runs a surplus; the number of people living with HIV continues to rise even as incidence falls—prevalence falls only when infection falls below mortality. Poor understanding of accumulation has significant consequences for public health and economic welfare.
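
The debt-and-deficit example in the quote can be made concrete with a few invented figures in Python – the arithmetic, not the particular numbers, is the point.

    # Made-up figures to illustrate the quote above: the deficit (a flow) falls
    # every year, yet the debt (the stock) keeps rising, because the stock
    # accumulates whatever inflow remains positive.

    debt = 1000.0
    deficits = [100, 80, 60, 40, 20, 5]  # shrinking annual deficits

    for year, deficit in enumerate(deficits, start=1):
        debt += deficit  # the stock integrates the flow
        print(f"year {year}: deficit {deficit:>3} -> debt {debt:.0f}")

    # The debt only starts to fall when the flow turns negative (a surplus).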

People also fail to learn from experience, especially in groups. They see what they believe, and believe what they see. They use defensive routines to save face, and they avoid testing their beliefs, especially in public.

Note that these are not the problems that get prime attention in behavioral economics, let alone mainstream economics. Why don’t system ideas get more attention? Sterman notes that, more generally, people often fail to learn from hard evidence.

More than two and one-half centuries passed from the first demonstration that citrus fruits prevent scurvy until citrus use was mandated in the British merchant marine, despite the importance of the problem and unambiguous evidence supplied by controlled experiments.

For me, one additional major reason might be that we are generally so used to the analytic approach: breaking things down into their component parts and examining each separately. This has worked extremely well for decades in science and business, when applied to things which don’t change and adapt all the time. Systems thinking, by contrast, is about looking at the interaction between elements. It is synthesis – “joining the dots”, putting the pieces together and seeing how they work and interrelate in practice.

And that might be an additional explanation for the hedgehog-versus-fox distinction. You recall the fundamentally important research that finds that foxes, “who know many things”, outperform hedgehogs, “who know one big thing”, at prediction and decision. Hedgehogs are drawn more to analysis and universal explanation; foxes are drawn more to synthesis and observation.

As a result, hedgehogs have much greater difficulty with systems thinking. Foxes are more likely to recognize and deal with system effects. If you confront a complex adaptive system (like the economy or financial markets), that gives foxes an edge.