Risk Management

Volkswagen: What were they thinking?

It may turn into one of the most spectacular corporate disasters in history. What were Volkswagen thinking? Even after it became apparent that outsiders had noticed a discrepancy in emissions performance in on-the-road tests, the company kept stonewalling and continued to sell cars with the shady software routines.

We won't know the murky, pathological details for a while. But understanding how this happens is urgent. If you ignore this kind of insidious problem, billion-dollar losses and criminal prosecutions can occur.

In fact, it's usually not just one or two “bad apples,” unethical criminals who actively choose stupid courses of action, although it often suits politicians and media to believe so. It's a system phenomenon, according to some of the classic studies (often Scandinavian) like Rasmussen and Svedung.

.. court reports from several accidents such as Bhopal, Flixborough, Zeebrügge, and Chernobyl demonstrate that they have not been caused by a coincidence of independent failures and human errors. They were the effects of a systematic migration of organizational behavior toward accident under the influence of pressure toward cost-effectiveness in an aggressive, competitive environment.

It's not likely anyone formally sat down and did an expected utility calculation, weighting financial and other benefits from installing cheat software, versus chances of being found out times consequent losses. So the usual way of thinking formally about decisions doesn't easily apply.

It's much more likely that it didn't occur to anyone in the company to step back and think it through. They didn't see the full dimensions of the problem. They denied there was a problem. They had blind spots.

It can often be hard to even find any point at which decisions were formally made. They just … happen. Rasmussen & co again:

In traditional decision research ‘decisions’ have been perceived as discrete processes that can be separated from the context and studied as an isolated phenomenon. However, in a familiar work environment actors are immersed in the work context for extended periods; they know by heart the normal flow of activities and the action alternatives available. During familiar situations, therefore, knowledge-based, analytical reasoning and planning are replaced by a simple skill- and rule-based choice among familiar action alternatives, that is, on practice and know-how.

Instead, the problem is likely to be a combination of the following:

  • Ignoring trade-offs at the top. Major accidents happen all the time in corporations because the immediate benefits of cutting corners are tangible, quantifiable and immediate, while the costs are longer-term, diffuse and less directly accountable. They will be someone else's problem. The result is that longer-term, more important goals get ignored in practice. Indeed, defining something as a purely technical problem, or setting strict metrics, often quietly builds in a decision to ignore a set of trade-offs. So people never think about it and don't see problems coming.
  • Trade-offs can also be forced downward because general orders come from the top – make it better, faster, cheaper and also cut costs – and reality has to be confronted lower down the line, without any formal acknowledgment that choices have to be made. Subordinates have to break the formal rules to make things work. Violating policy becomes a de facto requirement for keeping your job, and then it is deemed “human error” when something goes wrong. The top decision-maker perhaps didn't formally order a deviation: but he made it inevitable. The system migrates to the boundaries of acceptable performance as lots of local, contextual decisions and non-decisions accumulate.
  • People make faulty assumptions, usually without realizing it. For example, did anyone think through how easy it is to conduct independent on-the-road tests? That was a critical assumption bearing on whether they would be found out.
  • If problems occur, it can become taboo to even mention them, particularly when bosses are implicated. Organizations are extremely good at not discussing things and avoiding clearly obvious contrary information. People lack the courage to speak up. There is no feedback loop.
  • Finally, if things do go wrong, leaders have a tendency to escalate, to go for double-or-quits. And lose.

There scarcely seems to be a profession or industry or country without problems like this. The Pope was just in New York apologizing for years of Church neglect of the child abuse problem, for example.

But that does not mean that people are not culpable and accountable and liable for things they should have seen and dealt with. Nor is it confined to ethics or regulation. It is also a matter of seeing opportunity. You should see things. But how? That's what I'm interested in.

It's essential for organizational survival to confront these problems of misperceptions and myopia. They're system problems. And they are everywhere. Who blows up next?

Living on (Swiss) Vesuvius

You’ve probably seen those films about Pompeii that begin with bustling sunlit scenes of normal life, and end with ash clouds and panic and burning and suffocation.  The victims had become used to rumbling and smoke from the volcano for decades, and paid no attention to the mountain. Then an epic disaster overtook them and burnt them alive.

Yet people still live on the slopes of volcanoes. They ignore small risks of terrible outcomes, in a consistent pattern called disaster myopia. The slopes are often fertile and attractive. Vesuvius is now surrounded by far more people today than it ever was in Roman times.

In fact, recent discoveries suggest the site of the city of Naples itself has been buried under ten feet of ash in the past, in more ancient eruptions far worse than the infamous Roman disaster. The trouble is that evacuating the Naples area would be logistically almost impossible. So the Italian authorities largely ignore the possibility.

Disaster myopia also applies to economics and banking and finance, as researchers like Guttentag and Herring have pointed out since the 1980s. People find it very difficult to handle small risks of serious problems, so often ignore them altogether.

That brings us to the events of this week, specifically the massive losses caused by the decision of the Swiss National Bank to abandon its peg against the euro. Only one out of 50 economists surveyed by Bloomberg was expecting a change in the peg – and that one expected a tiny move. The Swiss currency leapt 41% after the announcement and ended the day 19% higher. If you measured risk the way so many in the markets do, using standard deviation and value at risk, that should not have happened in the lifetime of the universe. As Matt Levine of Bloomberg View points out,

An 180-standard-deviation daily move should happen once every … hmmm let’s see, Wikipedia gives up after seven standard deviations, but a 7-standard-deviation move should happen about once every 390 billion days, or about once in a billion years.  So this should be much less frequent.
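Levine's arithmetic is easy to reproduce. Here is a back-of-the-envelope sketch, assuming, as the flawed risk models do, that daily returns are normally distributed:

```python
from math import erfc, sqrt

def two_sided_exceedance(n_sigma):
    """P(|Z| > n_sigma) for a standard normal Z, via the complementary error function."""
    return erfc(n_sigma / sqrt(2))

# Expected waiting time for an n-sigma daily move, *if* returns were Gaussian.
for n in (5, 7, 10):
    p = two_sided_exceedance(n)
    days = 1 / p
    print(f"{n}-sigma move: about one day in {days:.3g} days (~{days / 365:.3g} years)")
```

At 180 standard deviations the probability underflows double precision entirely, which is itself a hint that the distributional assumption, not the universe, is what failed.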

Complacency produced the financial equivalent of Pompeii. Losses run into billions of dollars across major banks, hedge funds and retail FX brokerages, and thousands of retail clients who were foolishly trading FX on margin have probably been wiped out.

That said, it’s one thing to ignore volcanoes which can be expected to erupt every four or five hundred years, or longer. Even if you live on a volcano, you have a very good chance of never seeing an eruption in your lifetime, and in most places you have a good chance of escaping if you do.  Events in complex systems like the economy and financial markets are much less predictable. We have, as Keynes pointed out about uncertainty in the 1930s, little or no idea of the odds of many important events.

Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatsoever. We simply do not know. (QJE, February 1937)

The result, he says, is people fall back on conventional judgment and copy what others are doing. Or, as we see, they use forecasts and “information” sources in much the same way the Romans used sheep entrails to try to foretell the future, despite all the evidence that forecasters almost always miss critical turning points and mostly just produce foolish overconfidence in their clients.

In complex systems formal prediction is usually a futile hope. It is stupid to make decisions on the basis of forecasts that are largely based on extrapolation of historical data and leave conventional assumptions unchallenged. Here are some things you can do instead:

  • figure out what can go wrong with a decision, or position, or point of view. That means examining your assumptions, looking for boundary conditions, and thinking about what-if scenarios. And then develop markers, to monitor events. You’ll never be able to anticipate every eventuality, but you can have enough signals to make sure you are alert, and agile, and suspicious of complacency.
  • think about how you and the other side think about the situation. Step back and take a detached view of the game that is being played, and do some “second-level thinking”. I’ve worked in a central bank and have had years of working out how they make decisions from a market perspective. There are ways to do better than the market, largely by avoiding errors most of the market typically makes. I wasn’t following Switzerland so can’t claim I had any special insight here. That said, you can get a much better understanding of how events like this take place by thinking about people’s actual reaction function, not what you think they “ought” to do. Markets and central banks regularly misunderstand each other because they misread communication, misunderstand motivations and have structural incentives to say one thing and do another. Ignoring what people say and concentrating on how they actually think and act is the only way to have a chance of avoiding problems.
  • think about where your edge or advantage lies. If you don’t have one, don’t play the game in the first place. What possible advantage could a retail investor have playing FX markets at a leverage of 20 or more? It’s a classic case of trying to pick up pennies just in front of a steamroller. Understanding where your edge actually lies is the first step to using it.
  • manage your exposure where you can’t make easy predictions, so that it reflects underlying uncertainty. (This is a point Nassim Nicholas Taleb vociferously argues.)
  • don’t be naive. Anyone who thinks that the latest soothsayer or prophet is going to help them is naive. Only suckers pay for seers.  Financial market companies have wasted billions on futile attempts to forecast the future when they should pay more disciplined attention to the roots of their own views and assumptions.  You have to think about the underlying validity of opinions and forecasts, which is usually extremely low.
  • have a plan B. If you think about what can go wrong, you can have a plan to deal with it or turn it to your advantage. Instead of making extrapolations from the past, you need to think about how to react to various scenarios in the future.

I’ll come back to some of these in more detail. The most important thing is to develop markers that alert you to problems with your own assumptions. You have to look at mindsets and thinking patterns, not spurious “predictions” and forecasts.





January 20, 2015 | Assumptions, Central Banks, Communication, Current Events, Europe, Risk Management

Markets are Complex Systems – but most people don’t get that

One of the most successful investors of recent times has been Howard Marks. I took a look at his book about markets here. You can’t outperform if you don’t think better.

Thus, your thinking has to be better than that of others—both more powerful and at a higher level. Since other investors may be smart, well-informed and highly computerized, you must find an edge they don’t have. You must think of something they haven’t thought of, see things they miss or bring insight they don’t possess. You have to react differently and behave differently. In short, being right may be a necessary condition for investment success, but it won’t be sufficient. You must be more right than others . . . which by definition means your thinking has to be different.

First-level thinking, he says, is just having an opinion or forecast about the future. Second-level thinking, on the other hand, takes into account expectations, and the range of outcomes, and how people will react when expectations turn out to be wrong. Second-level thinkers are “on the alert for instances of misperception.”

Here’s a parallel. Marks doesn’t put it this way, but in essence it’s a matter of seeing markets as a nonlinear adaptive system, in the sense I was talking about in the last post. Second-level thinking is systems thinking. Instead of linear straight lines, markets react in complex feedback loops which depend on the existing stock of perception (i.e. expectations). Some of the greatest market players have an instinctive feel for this. But because of the limits of the human mind when it comes to complex systems, most people have a great deal of trouble understanding markets.

That includes many mainstream economists. One obvious reason is that price and price changes are one of the most important feedback loops in markets, but not the only feedback loop. A deeper reason is that most academics tend to be hedgehogs, interested in universal explanatory theories and linear prediction and “one big thing.” But complex systems frustrate and falsify universal theories, because they change. The dominant loop changes, or new loops are added, or new players or goals change the nature of the system.

There’s another implication if you have a more systems-thinking view of markets. Complex adaptive systems are not predictable in their behavior. This, to me, is a deeper reason for the difficulty of beating the market than efficient market theory. It isn’t so much that markets are hyper-efficient information processors that instantaneously adjust, as the fact that they are complex, so consistent accurate prediction of their future state is impossible. Nor is it that markets are mysteriously prone to statistically improbable 100- or 1000-year risks happening every 10 years. It’s that markets evolve and change, and positive feedback loops can take them into extreme territory with breathtaking speed, making their behavior stray far from norms and equilibria.

“Tail Risks” are not the far end of a probability distribution, as standard finance theory and policy thinking believe. They are positive feedback loops: cascades of events feed back on each other and change the behavior of the underlying system. It’s not a matter of variance and volatility and fat-tailed distributions, but of stocks and flows and feedback, tipping points which shift the dominant loop, and the underlying structure and changing relationships between components.

This view also helps understand why markets and policy resist change and stay in narrow stable ranges for long periods. Balancing feedback loops tend to kick in before long, producing resistance and inertia and cycles and pendulums, and making “this time it’s different” claims frequently a ticket to poverty.  Delays and time effects and variable  lags and cumulative effects matter profoundly in a way that simply doesn’t show up in linear models. Differential survival means evolutionary selection kicks in, changing behavior.
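A toy sketch can make the feedback point concrete. This is not a model of any real market; the noise level and feedback strength are invented purely for illustration. The same Gaussian shocks produce far larger extreme moves once a trend-chasing positive feedback term is added:

```python
import random

random.seed(5)

def simulate(days, feedback):
    """Daily returns = fresh noise plus a feedback term: the crowd's
    reaction today is proportional to yesterday's move."""
    returns = []
    prev = 0.0
    for _ in range(days):
        shock = random.gauss(0, 0.01)   # 1% daily noise
        r = shock + feedback * prev     # positive feedback amplifies recent moves
        returns.append(r)
        prev = r
    return returns

no_fb = simulate(100_000, 0.0)    # pure noise
with_fb = simulate(100_000, 0.9)  # strong trend-chasing

print("largest daily move, no feedback:  ", max(abs(r) for r in no_fb))
print("largest daily move, with feedback:", max(abs(r) for r in with_fb))
```

The distribution of individual shocks never changed; only the structure connecting them did. That is the difference between a fat tail as a statistical curiosity and a fat tail as a cascade.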

How can you make money if you can’t predict the future in complex systems, then? It’s clearly possible. Marks is a dazzlingly successful investor whose core belief is to be deeply skeptical of people who think they can make accurate predictions.

Awareness of the limited extent of our foreknowledge is an essential component of my approach to investing. I’m firmly convinced that (a) it’s hard to know what the macro future holds and (b) few people possess superior knowledge of these matters that can regularly be turned into an investing advantage.

You might be able to know more than others about a single company or security, he says. And you can figure out where we might be in a particular cycle or pendulum. But broad economic forecasts and predictions are essentially worthless. Most forecasting is just extrapolation of recent data or events, and so tends to miss the big changes that would actually help people make money.

One key question investors have to answer is whether they view the future as knowable or unknowable. Investors who feel they know what the future holds will act assertively: making directional bets, concentrating positions, levering holdings and counting on future growth—in other words, doing things that in the absence of foreknowledge would increase risk. On the other hand, those who feel they don’t know what the future holds will act quite differently: diversifying, hedging, levering less (or not at all), emphasizing value today over growth tomorrow, staying high in the capital structure, and generally girding for a variety of possible outcomes.

In other words, a belief in prediction tends to go with making overconfident, aggressive big bets, sometimes being lucky – and then flaming out. The answer? Above all, control your risks, Marks says. Markets are a “loser’s game”, like amateur tennis. It’s extremely hard to hit winners. Instead, avoid hitting losers. Make sure you have defense as well as offense.

Offense is easy to define. It’s the adoption of aggressive tactics and elevated risk in the pursuit of above-average gains. But what’s defense? Rather than doing the right thing, the defensive investor’s main emphasis is on not doing the wrong thing.

Thinking about what can go wrong is not purely negative, however. It’s not a matter of being obsessed with biases. Instead, it’s a way to be more creative and agile in adapting to change. If markets are complex systems, the key, as Herbert Simon puts it, is not prediction but “robust adaptive procedures.”

To stress the point again – people don’t intuitively understand systems. And many of our analytics and standard theories get them even less.  But it’s the way markets and policy work.


August 9, 2014 | Decisions, Human Error, Investment, Market Behavior, Perception, Risk Management

Who gave the order to shoot down a civil airliner?

The loss of flight MH17 over Ukraine, with debris, bodies and dead children's stuffed animals strewn over the remote steppe, is unspeakably tragic. Major Western countries are being swift to accuse and condemn Russian rebel groups, and by extension Putin, of a repugnant crime.

It's unlikely, however, that someone identified a Malaysian airliner overhead and deliberately chose to shoot it down. It's more probable Russian rebels didn't have the skill or backup to know they were firing at a civilian airliner.

It might not change the moral blame attached to the incident. At best it would be awful negligence. It might not affect the desire to hold leaders accountable.

But it ought to make people stop and think about how decisions get made as well. The near-automatic default in most public and market discussion is to think in rational actor terms. Someone weighed the costs and benefits of alternatives. They chose to shoot down the airliner. So find the person who made that horrible choice.

So how do you deal with a world in which that doesn't happen most of the time? Where people shoot down airliners without intending to? When the financial system crashes, or recessions happen, or the Fed finds it hard to communicate with the market? Where people ignore major alternatives, or use faulty theories and data? When they fail to grasp the situation and fail to anticipate side-effects?

There's actually a deeper and more important answer to these questions.

Who was to blame for Challenger?

Let's go back to the example of the Challenger Shuttle Disaster I mentioned in the last post, because it's one of the classic studies of failed decision-making in recent times. Here was an organization – NASA – which was clearly vastly more skilled, disciplined and experienced than Russian rebels. But it still suffered a catastrophic misjudgment and failure. Seven crew died. Who was to blame?

The initial public explanation of the shuttle disaster, according to the author Diane Vaughan, was middle management in NASA deliberately chose to run the risk in order to keep to a launch schedule. Like so many corporations, production pressure meant safety was ignored. Management broke rules and failed to pass crucial information to higher levels.

In fact, after trawling through thousands of pages later released to the National Archives and interviewing hundreds of people, she concluded that no one specifically broke the rules or did anything they considered wrong at the time.

On the one hand, this is good news – genuinely amoral, stupid, malevolent people may be rarer than you'd think from reading the press. In another way, though, it is actually much more frightening.

NASA, after all, were the original rocket scientists – dazzlingly able people who had sent Apollo to the moon some years before. NASA engineers understood the physical issues they were dealing with far better than we are ever likely to be able to understand the economy or market behavior.

NASA had exceptionally thorough procedures and documentation. They made extensive efforts to share information. They were rigorous and quantitative. In fact, ironically, the latter was part of the problem, because observational data and photographic evidence about penetration of the O-ring seal was discounted as too tacit and vague.

So what was the underlying explanation of the catastrophe? It wasn't simply a technical mistake.

Possibly the most significant lesson from the Challenger case is how environmental and organizational contingencies create prerational forces that shape worldview, normalizing signals of potential danger, resulting in mistakes with harmful human consequences. The explanation of the Challenger launch is a story of how people who worked together developed patterns that blinded them to the consequences of their actions. It is not only about the development of norms but about the incremental expansion of normative boundaries: how small changes – new behaviors that were slight deviations from the normal course of events – gradually became the norm, providing a basis for accepting additional deviance. (p. 409)

Conformity to norms, precedent, organizational structure and environmental conditions, she says,

congeal in a process that can create a change-resistant worldview that neutralizes deviant events, making them acceptable and non-deviant.

Organizations have an amazing ability to ignore signals something is wrong, including, she says, the history of US involvement in Vietnam.

The upshot? Often individuals and corporations do carry out stupid and shortsighted activities (often because they ignore trade-offs). But more often they have an extraordinary ability to ignore contrary signals, especially ones that accumulate slowly over time, and to convince themselves they are doing the right thing.

People develop “patterns that blind them to the consequences of their actions” and change-resistant worldviews. That's why I look for blind spots: research shows they are the key to understanding decisions and breakdowns. You can look for those patterns of behavior. One sign, for example, is the slow, incremental redefinition, normalization and acceptance of risk that Vaughan describes.

I'm going to look much more at systems in coming posts.



Two Kinds of Error (part 2)

Markets increasingly rely on quantitative techniques. When does formal analysis help make better decisions? In the last post I was talking about the difference between the errors produced by “analytic” and “intuitive” thinking in the context of the collapse of formal planning in corporate America. Those terms can be misleading, however, because they imply it is somehow all a matter of rational technique versus “gut feel.”

Here’s another diagram, from near the beginning of James Reason’s classic analysis, Human Error (an extremely important book which I’ll come back to another time.) Two marksmen aim at a target, but the pattern of errors is very different.


[Figure: shot patterns of the two marksmen, A and B, from James Reason, Human Error]


A is the winner, based on raw scores. He is roughly centered, but dispersed and sloppy. B is much more consistent but off target.

This shows the difference between variable error (A), on the one hand, and constant or systematic error (B) on the other. B is probably the better marksman even though he lost, says Reason, because his sights could be misaligned or there could be an additional factor throwing him off. B’s error is more predictable, and potentially more fixable. But fixing it depends on the extent to which the reasons for the error are understood.
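The distinction is easy to make concrete. With hypothetical, made-up shot coordinates (horizontal offset from the bull's-eye, in cm), the mean captures constant error and the standard deviation captures variable error:

```python
import statistics

# Invented data for illustration: A is centered but scattered,
# B is tight but consistently off to one side.
shooter_a = [-3.0, 2.5, -1.5, 3.5, -2.0, 1.0]
shooter_b = [4.0, 4.5, 3.8, 4.2, 4.4, 4.1]

def bias_and_spread(shots):
    """Constant error (bias) and variable error (spread) of a shot group."""
    return statistics.mean(shots), statistics.stdev(shots)

for name, shots in [("A", shooter_a), ("B", shooter_b)]:
    bias, spread = bias_and_spread(shots)
    print(f"{name}: constant error = {bias:+.2f} cm, variable error = {spread:.2f} cm")
```

B's large bias and tiny spread are exactly what a misaligned sight would produce: one correction fixes the whole group, whereas nothing so simple fixes A.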

What else could cause B to be off? (Reason doesn’t discuss this in that context.) In real life decisions (and real life war) the target is often moving, not static. That means errors like B's are pervasive.

Let’s relate this back to one of the central problems of making decisions or relying on advice or expertise. Simple linear models make far better predictions than people in a vast number of situations. This is the Meehl problem, which we’ve known about for fifty years. In most cases, reducing someone’s own expertise to a few numbers in a linear equation will predict outcomes much better than the person themselves. Yes, reducing all your years of experience to three or four numerical variables and sticking them in a spreadsheet will mostly outperform your own best judgment. (It’s called ‘bootstrapping.’)
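The bootstrapping result is striking enough to be worth a small demonstration. This is a synthetic sketch, with all cues, weights and noise levels invented for illustration: a simulated "expert" applies roughly the right policy but applies it inconsistently, and a linear model fitted to the expert's own judgments beats the expert:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Three cues (say, scores an admissions officer looks at) and a true outcome
# driven linearly by them plus irreducible noise.
cues = rng.normal(size=(n, 3))
true_weights = np.array([0.6, 0.3, 0.1])
outcome = cues @ true_weights + rng.normal(scale=0.5, size=n)

# The "expert" knows roughly the right policy but is inconsistent case to case.
expert = cues @ true_weights + rng.normal(scale=0.8, size=n)

# Bootstrapping: regress the expert's own judgments on the cues,
# then use the fitted model in place of the expert.
w, *_ = np.linalg.lstsq(cues, expert, rcond=None)
model = cues @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("expert vs outcome:         ", corr(expert, outcome))
print("model-of-expert vs outcome:", corr(model, outcome))
```

The model wins because regression strips out the expert's random inconsistency while keeping the expert's policy, which is precisely Goldberg's point quoted below.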

In fact, the record of expert prediction in economics and politics – the areas markets care about – is little better than chimps throwing darts. This is the Tetlock problem, which is inescapable for any research firm or hedge fund since he published his book in 2005.  Why pay big bucks to hire chimps?

But the use of algorithms in markets and formal planning in corporations has also produced catastrophe. It isn’t just the massive failure of many models during the global financial crisis. The most rigorously quant-based hedge funds are still trailing the indices, and it seems like the advantage quant techniques afforded is actually becoming a vulnerability as so many firms use the same kind of VAR models. So what’s the right answer?

Linear models perform better than people in many situations because they reduce or eliminate variable error. Here’s how psychologist Jonathan Baron explains why simplistic models usually outperform the very person or judge they are based on, in a chapter on quantitative judgment in his classic text on decision-making, Thinking and Deciding:

Why does it happen? Basically, it happens because the judge cannot be consistent with his own policy… He is unreliable, in that he is likely to judge the same case differently on different occasions (unless he recognizes the case the second time). As Goldberg (1970) puts it: ‘If we could remove some of this human unreliability by eliminating the random error in his judgements, we should thereby increase the validity of the resulting predictions.’ (p. 406)

But algorithms and planning and formal methods find it much harder to deal with systematic error. This is why, traditionally, quantitative methods have been largely confined to lower and middle-management tasks like inventory or production scheduling or routine valuation.  Dealing with systematic error requires the ability to recognize and learn.

Formal optimization does extremely well in static, well-understood, repetitive situations. But once the time period lengthens, so change becomes more likely, and the complexity of the situation increases, formal techniques can produce massive systematic errors.  The kind that kill companies and careers.

What’s the upshot? It isn’t an argument against formal models. I’m not remotely opposed to quantitative techniques. But it is a very strong argument for looking at boundary conditions for the applicability of techniques to different problems and situations. It’s like the Tylenol test I proposed here: two pills cure a headache. Taking fifty pills at once will probably kill you.

It is also a very strong argument for looking carefully at perception, recognition and reaction to evidence as the core of any attempt to find errors and blind spots. It is essential to have a way to identify and control systematic errors as well as variable errors. Many companies try to have a human layer of judgment as a kind of check on the models for this reason. But that’s where all the deeper problems of decision-making like confirmation bias and culture rear their heads. The only real way to deal with the problem is to have an outside firm which looks for those specific problems.

You can’t seek alpha or outperformance by eliminating variable error any more. That’s been done, as we speak, by a million computers running algorithms. Markets are very good at that. The only way to get extra value is to look for systematic errors.


May 12, 2014 | Adaptation, Decisions, Human Error, Lens Model, Quants and Models, Risk Management

Two kinds of error (part 1)

How do we explain why rigorous, formal processes can be very successful in some cases, and disastrous in others? I was asking this in reference to Henry Mintzberg’s research on the disastrous performance of formal planning. Mintzberg cites earlier research on different kinds of errors in this chart (from Mintzberg, 1994, p327).

[Figure: distributions of analytic vs intuitive errors, from Mintzberg, 1994, p. 327]


.. the analytic approach to problem solving produced the precise answer more often, but its distribution of errors was quite large. Intuition, in contrast, was less frequently precise but more consistently close. In other words, informally, people get certain kinds of problems more or less right, while formally, their errors, however infrequent, can be bizarre.

This is important, because it lies underneath a similar distinction that can be found in many other places. And because the field of decision-making research is so fragmented, the similar answers usually stand alone and isolated.

Consider, for example, how this relates to Nassim Nicholas Taleb’s distinction between Fragile and Antifragile approaches and trading strategies. Think of exposure, he says, and the size and risk of the errors you may make.

A lot depends on whether you want to rigorously eliminate small errors, or watch out for really big errors.

“Strategies grow like weeds in a garden”. So do trades.

How much should you trust “gut feel” or “market instincts” when it comes to making decisions or trades or investments? How much should you make decisions through a rigorous, formal process using hard, quantified data instead? What can move the needle on performance?

In financial markets more mathematical approaches have been in the ascendant for the last twenty years, with older “gut feel” styles of trading increasingly left aside. Algorithms and linear models are much better at optimizing in specific situations than the most credentialed people are (as we’ve seen.) Since the 1940s business leaders have been content to have operational researchers (later known as quants) make decisions on things like inventory control or scheduling, or other well-defined problems.

But rigorous large-scale planning for major decisions has generally turned out to be a disaster whenever it has been tried. It has been about as successful in large corporations as it was in the Soviet Union (for many of the same reasons). As one example, General Electric originated one of the main formal planning processes in the 1960s; the stock price then languished for a decade, and one of the very first things Jack Welch did was to slash the planning process and planning staff. Quantitative models (on the whole) performed extremely badly during the Great Financial Crisis. And hedge funds have increasing difficulty even matching market averages, let alone beating them.

What explains this? Why does careful modeling and rigor often work very well on the small scale, and catastrophically on large questions or longer runs of time? This obviously has massive application in financial markets as well, from understanding what “market instinct” is to seeing how central bank formal forecasting processes and risk management can fail.

Something has clearly been wrong with formalization. It may have worked wonders on the highly structured, repetitive tasks of the factory and clerical pool, but whatever that was got lost on its way to the executive suite.

I talked about Henry Mintzberg the other day. He pointed out that, contrary to myth, most successful senior decision-makers are not rigorous or hyper-rational in planning. Quite the opposite. In the 1990s he wrote a book, The Rise and Fall of Strategic Planning, which tore into formal planning and strategic consulting (and where the quote above comes from).

There were three huge problems, he said. First, planners assumed that analysis can provide synthesis, insight, or creativity. Second, that hard quantitative data alone ought to be the heart of the planning process. Third, that the context for plans is stable, or at least predictable. All three assumptions were just wrong. For example,

For data to be “hard” means that they can be documented unambiguously, which usually means that they have already been quantified. That way planners and managers can sit in their offices and be informed. No need to go out and meet the troops, or the customers, to find out how the products get bought or the wars get fought or what connects those strategies to that stock price; all that just wastes time.

The difficulty, he says, is that hard information is often limited in scope, “lacking richness and often failing to encompass important noneconomic and non-quantitative factors.” Often hard information is too aggregated for effective use. It often arrives too late to be useful. And it is often surprisingly unreliable, concealing numerous biases and inaccuracies.

The hard data drive out the soft, while that holy ‘bottom line’ destroys people’s ability to think strategically. The Economist described this as “playing tennis by watching the scoreboard instead of the ball.” ..  Fed only abstractions, managers can construct nothing but hazy images, poorly focused snapshots that clarify nothing.

The performance of forecasting was also woeful, little better than the ancient Greek belief in the magic of the Delphic Oracle, and “done for superstitious reasons, and because of an obsession with control that becomes the illusion of control.”

Of course, to create a new vision requires more than just soft data and commitment: it requires a mental capacity for synthesis, with imagination. Some managers simply lack these qualities – in our experience, often the very ones most inclined to rely on planning, as if the formal process will somehow make up for their own inadequacies. … Strategies grow initially like weeds in a garden: they are not cultivated like tomatoes in a hothouse.

Highly analytical approaches often suffered from “premature closure.”

.. the analyst tends to want to get on with the more structured step of evaluating alternatives and so tends to give scant attention to the less structured, more difficult, but generally more important step of diagnosing the issue and generating possible alternatives in the first place.

So what does strategy require?

We know that it must draw on all kinds of informational inputs, many of them non-quantifiable and accessible only to strategists who are connected to the details rather than detached from them. We know that the dynamics of the context have repeatedly defied any efforts to force the process into a predetermined schedule or onto a predetermined track. Strategies inevitably exhibit some emergent qualities, and even when largely deliberate, often appear less formally planned than informally visionary. And learning, in the form of fits and starts as well as discoveries based on serendipitous events and the recognition of unexpected patterns, inevitably plays a role, if not the key role in the development of all strategies that are novel. Accordingly, we know that the process requires insight, creativity and synthesis, the very things that formalization discourages.

[my bold]

If all this is true (and there is plenty of evidence to back it up), what does it mean for formal analytic processes? How can it be reconciled with the claims of Meehl and Kahneman that statistical models hugely outperform human experts? I’ll look at that next.

How to go over budget by a billion dollars and other planning catastrophes

I was talking in the last post about base rate neglect. People have a tendency to seize on vivid particular features of a situation to the exclusion of general features. Journalists are particularly prone to this problem, because they have to find vivid details to sell newspapers.
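Base rate neglect is easy to state numerically. The numbers below are hypothetical, chosen only to illustrate the mechanism: even a vivid, seemingly reliable signal tells you little when the underlying base rate is low.

```python
# Hypothetical fraud-detection flag that looks highly reliable.
base_rate = 0.01            # 1% of transactions are actually fraudulent
p_flag_given_fraud = 0.95   # the flag catches 95% of fraud
p_flag_given_clean = 0.05   # ...but also flags 5% of clean transactions

# Bayes' rule: P(fraud | flag) = P(flag | fraud) P(fraud) / P(flag)
p_flag = base_rate * p_flag_given_fraud + (1 - base_rate) * p_flag_given_clean
p_fraud_given_flag = base_rate * p_flag_given_fraud / p_flag
# Despite the "95% accurate" flag, a flagged transaction is still
# far more likely clean than fraudulent (posterior ~16%).
```

The vivid detail (the flag fired) dominates attention; the dull general feature (fraud is rare) dominates the answer.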

Let’s take a (literally) concrete example: infrastructure spending. Over $2 trillion of extra infrastructure stimulus spending has been committed since the global crisis broke out in 2008, including massive fiscal injections in the US, China and India. So if there are widespread decision errors of even 1% or 5% in this area, the consequences easily run into tens of billions of dollars.

In fact, infrastructure planners massively underestimate the cost of major projects, often by more than 50%. The average cost overrun on major rail projects is 44%, according to recent research:

The dataset .. shows cost overrun in 258 projects in 20 nations on five continents. All projects for which data were obtainable were included in the study. For rail, average cost overrun is 44.7 per cent measured in constant prices from the build decision. For bridges and tunnels, the equivalent figure is 33.8 per cent, and for roads 20.4 per cent. ..

  • nine out of 10 projects have cost overrun;
  • overrun is found across the 20 nations and five continents covered by the study;
  • overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.

And planners  hugely overestimate the benefits of major projects.

For rail, actual passenger traffic is 51.4 per cent lower than estimated traffic on average. This is equivalent to an average overestimate in rail passenger forecasts of no less than 105.6 per cent. ..

The following observations hold for traffic demand forecasts:

  • 84 per cent of rail passenger forecasts are wrong by more than ±20 per cent
  • nine out of 10 rail projects have overestimated traffic;
  •  50 per cent of road traffic forecasts are wrong by more than ±20 per cent;

The data is from a 2009 paper in the Oxford Review of Economic Policy by Bent Flyvbjerg: Survival of the unfittest: why the worst infrastructure gets built—and what we can do about it.
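The paired figures above (a 51.4 per cent traffic shortfall and a 105.6 per cent forecast overestimate) are the same fact viewed from opposite ends, as two lines of arithmetic confirm:

```python
# Actual traffic as a fraction of forecast, given a 51.4% shortfall.
shortfall = 0.514
actual = 1 - shortfall          # actual = 48.6% of forecast
overestimate = 1 / actual - 1   # forecast relative to actual, ~1.06
# i.e. forecasts were roughly double the traffic that actually showed up
```

Small differences from the published 105.6 per cent come only from rounding the input.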

Just consider for a moment what the data mean for the way policy decisions are taken. These are some of the biggest decisions politicians and planners make, and they are systematically biased and inaccurate. Massive cost overruns are the norm.

The outcome is not just billions of dollars wasted, but, as Flyvbjerg argues, the worst projects with the most serious judgmental errors are the ones that tend to get built. This is pervasive in almost every country in the world.

Promoters overestimate benefits and underestimate costs in order to get approval. Of course they do, you might say. People are often overconfident and oversell things. But this does not get taken into account in the decisions. Recall again: “overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.”

What can be done about it? According to Flyvbjerg,

If project managers genuinely consider it important to get estimates of costs, benefits, and risks right, it is recommended they use a promising new method called ‘reference class forecasting’ to reduce inaccuracy and bias. This method was originally developed to compensate for the type of cognitive bias in human forecasting that Princeton psychologist Daniel Kahneman found in his Nobel prize-winning work on bias in economic forecasting (Kahneman, 1994; Kahneman and Tversky, 1979). Reference class forecasting has proven more accurate than conventional forecasting. It was used in project management in practice for the first time in 2004 (Flyvbjerg and COWI, 2004); in 2005 the method was officially endorsed by the American Planning Association (2005); and since then it has been used by governments and private companies in the UK, the Netherlands, Denmark, Switzerland, Australia, and South Africa, among others.

I’ve talked about Kahneman’s outside view before. As you can see, it’s just another way of saying: don’t ignore the base rate. Adjust your estimates with reference to the general class of things you are looking at, not just the specific particulars of a situation. It’s seldom easy: another way to describe this is “the planning fallacy,” and it applies to just about any major project (including starting a company!).
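Mechanically, reference class forecasting amounts to uplifting the inside-view estimate by a quantile of the overrun distribution observed in similar past projects. The numbers and function below are a minimal hypothetical sketch, not Flyvbjerg's dataset or procedure:

```python
# Hypothetical reference class: cost-overrun ratios from comparable
# past projects (illustrative numbers only).
past_overruns = [0.05, 0.10, 0.15, 0.25, 0.30, 0.44, 0.50, 0.60, 0.80, 0.95]

def reference_class_uplift(overruns, acceptable_chance_of_overrun=0.2):
    """Uplift at the (1 - risk) quantile of the empirical overrun
    distribution, so the adjusted budget is exceeded only about
    `acceptable_chance_of_overrun` of the time."""
    ranked = sorted(overruns)
    idx = min(len(ranked) - 1,
              int(round((1 - acceptable_chance_of_overrun) * (len(ranked) - 1))))
    return ranked[idx]

inside_view_estimate = 1_000_000_000   # the project team's own estimate
uplift = reference_class_uplift(past_overruns)          # 0.60 here
outside_view_estimate = inside_view_estimate * (1 + uplift)
```

The choice of acceptable overrun risk is a policy decision, not a statistical one; the history only tells you what uplift each risk level requires.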

Here’s what to take away from this: this one particular blind spot is one of the most serious errors in public policy decision-making, with billions of dollars wasted every year.

But it’s not a matter of having to be pinpoint accurate about future states of the world, or getting a license for prophecy. Instead, it’s a matter of observing how people typically behave and what generally happens already. Observation matters more than speculation about the future.

We need less forecasting, and more recognition. That’s what Alucidate is set up to do.

January 7, 2014 | Base Rates, Decisions, Expertise, Outside View, Risk Management

When “what you want it to be” turns into what you regret

It usually isn’t information or data that trips people up. It is most often what they want to see in the data. The more senior you are and the more responsibility you have, the greater the risk, because you are increasingly paid to deal with ambiguity and uncertainty. Junior staff follow tightly defined tasks. Senior leaders don’t have that luxury.

Sam Savage of Stanford University is one of the leading thinkers about how to deal with probability and probability distributions in business. His book The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty examines how people make mistakes by looking at simple averages rather than probability distributions. (It is well worth reading.)
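The book's core point is that whenever outcomes depend nonlinearly on an uncertain input, the result of plugging in the average input is not the average result. A toy capacity-constrained sales example of my own (not Savage's) shows the gap:

```python
import random

random.seed(0)

capacity = 100
# Uncertain demand: normally distributed around exactly the capacity.
demands = [random.gauss(100, 30) for _ in range(100_000)]

mean_demand = sum(demands) / len(demands)
sales_at_average = min(mean_demand, capacity)   # plan on the average: ~100
average_sales = sum(min(d, capacity) for d in demands) / len(demands)
# average_sales is ~88: every high-demand day is capped at capacity,
# but low-demand days are not, so the average plan overstates sales
```

The single number "average demand" hides exactly the downside asymmetry that determines the real expected outcome.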

The Enron scandal prompted him to write a paper helpfully titled Some Gratuitously Inflammatory Remarks on the Accounting Industry.

The Enron debacle reminds me of an old joke. Three accountants interview for a job. When asked to evaluate 2 plus 2, the first responds “four,” and the second, thinking it’s a trick question, says “five.” But the third, whom they hire on the spot, says: “what do you want it to be?” I wasn’t laughing, however, when I learned that the behavior of candidate three lies within generally accepted accounting practice.

It is easy to understand why people prefer being asked “what do you want it to be.” It makes life easier... until you blow apart in a massive corporate bankruptcy and scandal, like Enron or Arthur Andersen. Bernie Madoff’s investors also learned the hard way about things that were too good to be true. Seeing only what you want it to be is the best way to end up with things you regret.

It is another variety of confirmation bias, the bane of almost all senior decision-makers. Everyone knows people who prefer to ignore or selectively use evidence in business. And everyone knows they and their companies generally don’t last long.

That is why senior people require a way to test their perceptions, not just raw data. How you think about data and information matters.  What you see in a situation is critical. Looking for small ways in which your viewpoint may be wrong by comparing it to other views is the best way to get the big things right. It is also the best way to notice misperceptions in other people and companies, which presents opportunities.

December 15, 2013 | Confirmation bias, Decisions, Perception, Risk Management

How organizational culture creates dangerous – and hidden – assumptions

The problems which damage companies and people the most are usually the ones they just don’t see coming.

That doesn’t mean those problems are extremely rare or unusual, the equivalent of once-in-a-hundred-year crises. Nor does it mean they lie at the very far end of the tail of a probability distribution. It’s not necessarily the equivalent of an asteroid strike out of nowhere.

In fact, the ordinary surroundings of organizational life are enough to stop you seeing critical developments. The biggest problems are often hidden right in plain view. They are very often the assumptions it simply doesn’t occur to you to question.

I said in this recent post that blind spots are often a matter of culture, not just biases or individual misjudgments of probability.  Edgar Schein wrote one of the most important books on corporate culture. Groups develop shared assumptions about more abstract, deeper issues, he says.

One example is assumptions about time in a culture. According to Schein,

.. one of the reasons why sales and R&D people have trouble communicating is that they work with totally different time horizons. .. If we now consider the communication process between the researcher and the salesperson/marketer, when the latter says that she wants a product “soon” and the researcher agrees the product will be ready “soon,” they may be talking about completely different things and not realize it. ..

Time horizons differ not only by function and occupation but by rank. The higher the rank, the longer the time horizon over which a manager has discretion.

This is a constant problem for markets, too. Mutual funds have very different time horizons from rapid-trading hedge funds, or brokers feeding off the latest headline. Both live in an almost completely different world of time horizons from the ECB, the Fed, Congress, or the G7, let alone journalists filing copy for the next edition. It is one more reason why markets often simply do not see or hear the message policymakers try to convey.

A corporate leader is paid to think further ahead (although some think no further than the next earnings number.) The Governor of a Central Bank has a different time horizon to the staffer who is rushing to complete the quarterly forecast by deadline.  If you talk to the staffer, you will get a different take on the issues, even if he reads most of the same information as the Governor.

Other deep assumptions include how you find truth (trial and error, say, or diligent analysis, or official tradition); the nature of space (how space such as offices is allocated, and how it affects internal relationships and privacy); the nature of human nature itself (are people cooperative or hostile, good or bad?); the nature of human activity (how much initiative or passivity is expected? what is work and what is distraction?); and the nature of relationships (how much hierarchy, how much individualism? is power based on authority, charisma, consensus, law?).

All of these assumptions, just because they seem so natural and right within an organization, have the ability to completely trip someone up when they are trying to deal with new situations or communicate with outsiders. That is before we even deal with how culture varies between nationalities, or how culture changes as an organization goes from start-up to middle-age to bureaucratic corporate decline.

December 8, 2013 | Assumptions, Decisions, Organizational Culture and Learning, Risk Management