Organizational Culture and Learning

Rogues or Blind Spots?

The other day I looked at how Volkswagen could go so wrong. There is almost always a rush to blame human error or subordinates, I said. Some of those involved may be genuinely criminal and deserve jail time. But the problem is usually also systemic: management doesn't see, or doesn't want to see, problems coming.

Now here's a piece in Harvard Business Review on the issue. Of course, it was rogue employees, says Volkswagen management.

Testifying unhappily before America’s Congress, Volkswagen of America CEO Michael Horn adamantly and defiantly identified the true authors of his company’s disastrous “defeat device” deception: “This was not a corporate decision. No board meeting or supervisory meeting has authorized this,” Horn declared. “This was a couple of rogue software engineers who put this in for whatever reason.”

Ach, du lieber! Put aside for the moment what this testimony implies about the auto giant’s purported culture of engineering excellence. Look instead at what’s revealed about Wolfsburg’s managerial oversight: utter and abysmal failure. No wonder Chairman and CEO Martin Winterkorn had to resign. His “tone at the top” let roguery take root.

The author is an MIT expert on the software processes at issue.

Always look to the leadership. Where were Volkswagen’s code reviews? Who took pride and ownership in the code that makes Volkswagen and Audi cars run? For digitally-driven innovators, code reviews are integral to healthy software cultures and quality software development.

Good code is integral to how cars work now, he says. And to write good code the Googles and Facebooks of the world have code review systems with some form of openness, even external advice or review, so that murky code is found out.

As we learned from financial fiascoes and what will be affirmed as Volkswagen’s software saga unwinds, rogues don’t exist in spite of top management oversight, they succeed because of top management oversight.

It can be comforting, in a way, to think that problems or bad decisions occur only because of individual stupidity or bias or error or ignorance. It's more disturbing, and harder to solve, if people and organizations as a whole don't even consciously see many problems coming, or ignore trade-offs. Most information and analysis will tend to reinforce their point of view. Single-minded mania also often produces short-run financial success.

Until the darkness comes. Leaders have to be held accountable for finding their blind spots. They can't claim ignorance after the fact.

 

Volkswagen: What were they thinking?

It may turn into one of the most spectacular corporate disasters in history. What were Volkswagen thinking? Even after it became apparent that outsiders had noticed a discrepancy in emissions performance in on-the-road tests, the company still kept stonewalling and continued to sell cars with the shady software routines.

We won't know the murky, pathological details for a while. But understanding how this happens is urgent. If you ignore this kind of insidious problem, billion-dollar losses and criminal prosecutions can occur.

In fact, it's usually not just one or two “bad apples,” unethical criminals who actively choose stupid courses of action, although it often suits politicians and media to believe so. It's a system phenomenon, according to some of the classic studies (often Scandinavian) like Rasmussen and Svedung.

.. court reports from several accidents such as Bhopal, Flixborough, Zeebrügge, and Chernobyl demonstrate that they have not been caused by a coincidence of independent failures and human errors. They were the effects of a systematic migration of organizational behavior toward accident under the influence of pressure toward cost-effectiveness in an aggressive, competitive environment.

It's not likely anyone formally sat down and did an expected utility calculation, weighing the financial and other benefits of installing cheat software against the chance of being found out times the consequent losses. So the usual way of thinking formally about decisions doesn't easily apply.
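
For reference, such an expected utility calculation would look something like the minimal Python sketch below, with entirely invented numbers, since nothing suggests anyone actually ran one:

```python
# Toy expected-value comparison of "install the defeat device" vs. "comply".
# Every figure here is a hypothetical illustration, not an estimate of the real case.

def expected_value(benefit, p_detect, loss_if_caught):
    """Expected payoff: certain benefit minus probability-weighted loss if caught."""
    return benefit - p_detect * loss_if_caught

cheat = expected_value(
    benefit=2.0,          # assumed $2bn saved by skipping real emissions engineering
    p_detect=0.3,         # assumed chance that independent road tests expose the cheat
    loss_if_caught=30.0,  # assumed fines, recalls, lost sales, legal liability ($bn)
)
comply = expected_value(benefit=0.0, p_detect=0.0, loss_if_caught=0.0)

print(f"cheat: {cheat:+.1f}bn vs comply: {comply:+.1f}bn")
# cheat: -7.0bn vs comply: +0.0bn -- even rough numbers flag the gamble as a bad one
```

The point is not the invented numbers; it is that no back-of-the-envelope comparison like this seems to have been made at all.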

It's much more likely that it didn't occur to anyone in the company to step back and think it through. They didn't see the full dimensions of the problem. They denied there was a problem. They had blind spots.

It can often be hard to even find any point at which decisions were formally made. They just … happen. Rasmussen & co again:

In traditional decision research ‘decisions’ have been perceived as discrete processes that can be separated from the context and studied as an isolated phenomenon. However, in a familiar work environment actors are immersed in the work context for extended periods; they know by heart the normal flow of activities and the action alternatives available. During familiar situations, therefore, knowledge-based, analytical reasoning and planning are replaced by a simple skill- and rule-based choice among familiar action alternatives, that is, on practice and know-how.

Instead, the problem is likely to be a combination of the following:

  • Ignoring trade-offs at the top. Major accidents happen all the time in corporations because the benefits of cutting corners are tangible, quantifiable and immediate, while the costs are longer-term, diffuse and less directly attributable. They will be someone else's problem. The result is that longer-term, more important goals get ignored in practice. Indeed, defining something as a purely technical problem, or setting strict metrics, often builds in a decision to ignore a set of trade-offs. So people never think about them and don't see problems coming.
  • Trade-offs also get pushed down the line because general orders come from the top – make it better, faster, cheaper and also cut costs – and reality has to be confronted lower down, without any formal acknowledgment that choices have to be made. Subordinates have to break the formal rules to make it work. Violating policies in some way becomes a de facto requirement for keeping your job, and then it is deemed “human error” when something goes wrong. The top decision-maker perhaps didn't formally order a deviation: but he made it inevitable. The system migrates to the boundaries of acceptable performance as lots of local, contextual decisions and non-decisions accumulate.
  • People make faulty assumptions, usually without realizing it. For example, did anyone think through how easy it was to conduct independent on-the-road tests? That was a critical assumption bearing on whether they would be found out.
  • If problems occur, it can become taboo to even mention them, particularly when bosses are implicated. Organizations are extremely good at not discussing things and avoiding clearly obvious contrary information. People lack the courage to speak up. There is no feedback loop.
  • Finally, if things do go wrong, leaders have a tendency to escalate, to go for double-or-quits. And lose.

There scarcely seems to be a profession or industry or country without problems like this. The Pope was just in New York apologizing for years of Church neglect of the child abuse problem, for example.

But that does not mean people are not culpable, accountable and liable for things they should have seen and dealt with. Nor is this confined to ethics or regulation. It is also a matter of seeing opportunity. You should be able to see these things. But how? That's what I'm interested in.

It's essential for organizational survival to confront these problems of misperceptions and myopia. They're system problems. And they are everywhere. Who blows up next?

Unlearn what you know, or go extinct

 

People can show remarkable dexterity (or self-deception) at deflecting blame when a situation goes badly wrong, like a company collapse, or a foreign policy crisis. Or the FBI knocking on your door, asking for hard drives with top-secret e-mails on them. How could anyone have foreseen it? It was business as usual, everyone did it, it was tried and tested. The problem was impossible to see and therefore no-one is to blame. Or it was just bad luck.

Unfortunately, that's almost never true.

In every crisis we studied, the top managers received accurate warnings and diagnoses from some of their subordinates, but they paid no attention to them. Indeed, they sometimes laughed at them.

That’s the conclusion from one of the classic studies of organizational failures, Nystrom & Starbuck in 1984. Some people in a company generally always see problems coming (we’ve seen other research about “predictable surprises” here). But senior managers find it extremely difficult to “unlearn” parts of what they know.

Organizations succumb to crises largely because their top managers, bolstered by recollections of past successes, live in worlds circumscribed by their cognitive structures. Top managers misperceive events and rationalize their organizations’ failures. .. Because top managers adamantly cling to their beliefs and perceptions, few turnaround options exist. And because organizations first respond to crises with superficial remedies and delays, they later must take severe actions to escape demise.

Instead, the researchers say, managers try to “weather the storm” by tightening budgets, cutting wages, introducing new metrics or redoubling efforts on what has worked before. They typically waste time, and defer choices. In the meantime, the firm filters out contrary evidence, and often gets even more entrenched in its ways. This is normal corporate life.

… well-meaning colleagues and subordinates normally distort or silence warnings or dissents. .. Moreover, research shows that people (including top managers) tend to ignore warnings of trouble and interpret nearly all messages as confirming the rightness of their beliefs. They blame dissents on ignorance or bad intentions – the dissenting subordinates or outsiders lack a top manager’s perspective, or they’re just promoting their self-interests, or they’re the kind of people who would bellyache about almost anything. Quite often, dissenters and bearers of ill tidings are forced to leave organizations or they quit in disgust, thus ending the dissonance.

And then one morning it turns out it’s too late, and there is no more time.

The only solution that reliably works, Nystrom and Starbuck say, is to fire the whole top management team if there are signs of a crisis. All of them.

But top managers show an understandable lack of enthusiasm for the idea that organizations have to replace their top managers en masse in order to escape from serious crises. This reluctance partially explains why so few organizations survive crises.

The only real hope is to adapt before you have to. But the much more likely outcome is that senior decision-makers end up eliminated anyway, destroying their companies, their company towns, their employees and stakeholders along the way.

Just think about what might fix this. It isn’t more information or big data, as it will probably be ignored or discounted. It isn’t forecasts or technical reports or new budgets or additional sales effort. It isn’t better or more rigorous theory, or forcing the troops to work harder.

It’s a matter of focusing on and looking for signs about how people change their minds. It’s about figuring out what might count as contrary evidence in advance, and sticking to it. If you’re a senior decision-maker, this might be the only thing that saves you, before some outside investor or opponent decides the only hope is to wipe the slate clean, including you. If you figure out you need it in time. Will you?

 

Noticing the “predictable surprises”

If there’s one deep lesson I’ve learned from years delving into policy and decision-making, it’s that the biggest surprises are hidden in plain sight. Of course, there are genuine (temporary) secrets out there, and many people try to make a living from issuing spurious predictions about the future. But the things which most move the needle are a matter of noticing transparent, open things in the present, not looking for scoops or trying to foretell the future. It’s a matter of actually listening to what you are hearing.

Ironically, by avoiding making mistakes about the current state of things you will almost certainly anticipate the future better anyway,  because you won’t be fooling yourself about the situation.

Here’s another way to look at the issue. Max Bazerman teaches decision-making at Harvard. He and a co-author wrote a book named Predictable Surprises: The Disasters You Should Have Seen Coming, and How to Prevent Them. A predictable surprise, they say, is “an event or set of events that take an individual or group by surprise, despite prior awareness of all the information necessary to predict the events and their consequences.”

Take the 9/11 attacks, they say. Granted, it was hard to predict that the particular hijackers would attack particular targets on a specific date. But there had been ever-increasing data showing airline security had been a deepening problem for over ten years. And little or nothing was done about it. People had been warning about conflicts of interest in accounting for a decade before Enron and Arthur Andersen melted down, but preventative action was avoided. The same thing happens almost every day in one corporation or government department or another.

The key traits, they say, are:

  • Leaders knew a problem existed and that it would not solve itself.
  • A bad outcome was almost inevitable, because organizational members knew the problem was getting worse over time.
  • But they also knew that fixing the problem would incur costs in the present, while the benefits of taking action would be delayed.
  • Politicians and leaders know that shareholders or constituents will notice the immediate cost. But the leaders also suspect they will get little reward for avoiding a much worse disaster that is ambiguous and distant – so they “often cross their fingers and hope for the best.”
  • In any case, people typically like to maintain the status quo. If there is no stark crisis, we tend to keep doing things the way we have always done them. “Acting to avoid a predictable surprise requires a decision to act against this bias and to change the status quo. By contrast, most organizations change incrementally, preferring short-term fixes to long-term solutions.”
  • And usually a “small vocal minority benefits from inaction” and blocks action for their own private benefit, even when the organization is desperate for a solution.

The result is that, to an astonishing extent, people don’t take action based on what they know, or they deny contrary information altogether. It’s one of the many reasons why organizations often get into trouble. It’s one reason why in so many crises we don’t need more information or intelligence. We need to act on what we have.

What can be done about it? The authors offer a recognition-prioritization-mobilization sequence of steps to deal with it.

But it starts with recognition.

Positive illusions, self-serving biases, and the tendency to discount the future may prevent people from acknowledging that a problem is emerging. If their state of denial is strong enough, they may not even “see” the storm clouds gathering.

Who can say they haven’t seen examples of this? How many organizations take steps to guard against it?

Without recognition, nothing else works. You solve the wrong problems. You forecast the wrong things. That is the primary challenge that organizations face. It’s the assumptions people make that are the prime cause of trouble.  There are so many forces which work in all organizations to distort or reduce recognition. You need to be aware of it, and take active steps to deal with it – or meekly wait for the predictable surprise to find you.

 

 

February 1, 2015 | Assumptions, Decisions, Inertia, Organizational Culture and Learning

The most important executive skill? “Thinking about your own thinking”

If only there were one key thing that a leader or trader could do to develop winning strategies when markets get more challenging all the time.

There is. According to UCLA’s Richard Rumelt, one of the most prominent thinkers on management strategy, in Good Strategy Bad Strategy: The Difference and Why It Matters:

Being strategic is being less myopic – less shortsighted – than others. You must perceive and take account of what others do not, be they colleagues or rivals. Being “strategic” largely means being less myopic than your undeliberate self.

It does not mean exhaustive information gathering and formal analysis. It does not mean two-hundred-page binders of beautiful charts and tables (or elaborate central bank forecast reports). It does not mean elaborate forecasts or expected-utility-based “decision science”, let alone big data, although many businesses fervently believe in them.

We are all aware of the basic formal approach to making a decision. List the alternatives, figure out the cost or value associated with each, and choose the best. But in [numerous examples he cites] you cannot perform this kind of clean “decision” analysis. Thus, the most experienced executives are actually the quickest to sense that a real strategic situation is impervious to so-called decision analysis. They know that dealing with a strategic situation is, in the end, all about making good judgments.

So what’s the answer?

This personal skill is more important than any so-called strategic concept, tool, matrix, or analytical framework. It is the ability to think about your own thinking, to make judgments about your own judgments. (p. 267)

It’s all about awareness, not microeconomic rationality or “paralysis by analysis.” I’ve talked to hundreds of the most senior policymakers and traders over the years. If there’s one thing I notice about the best of them, it’s that they tend to be open and curious and engaged. They want to think about issues from different angles. It’s more about the ability to test and learn than about bias.

There’s been too much emphasis on “bias” in decisions. The solution is not some abstract ideal of formal analytical perfection that may take six months to arrive at. Instead, the thing to look at most is how people think and change their minds. It’s adaptiveness that counts – but as everyone knows, organizations find it extremely hard to adapt, and most people fiercely resist changing their minds.

The same conclusion recurs in different language whenever people look closely at successful decisions. (Compare Philip Tetlock’s distinction between “know one big thing” hedgehogs and “know many things” foxes.)

November 28, 2014 | Adaptation, Decisions, Organizational Culture and Learning

System Blindness

Good news: GDP grew at 4% and the winter surprise has faded. As usual, there is endless analysis available for free. These days we swim in a bottomless ocean of economic commentary.

Let’s turn to something that might give people an edge in making decisions instead. One of the main reasons people and companies get into trouble is that they don’t think in terms of systems. I noted that one major source of systems thinking was Jay Forrester’s work at MIT beginning in the 1960s. His successor at MIT is John Sterman, who calls the failure to think this way system blindness.

Sterman documents the multiple problems that decision-makers have dealing with dynamic complexity in his best-known shorter paper. We haven’t evolved to deal with complex systems, he says. Instead, we are much quicker to deal with things with obvious, direct, immediate, local causes. (See bear. Run.)

So people have inherent deep difficulties with feedback loops, for example.

Like organisms, social systems contain intricate networks of feedback processes, both self-reinforcing (positive) and self-correcting (negative) loops. However, studies show that people recognize few feedbacks; rather, people usually think in short, causal chains, tend to assume each effect has a single cause, and often cease their search for explanations when the first sufficient cause is found. Failure to focus on feedback in policy design has critical consequences.

As a result, policies and decisions often become actively counterproductive, producing unexpected side-effects and counter-reactions. Such ‘policy resistance’ means major decisions frequently have the opposite effect to the one intended (such as building major roads but producing even more congestion, or suppressing forest fires and producing much bigger blazes).

People also have serious problems understanding time and delays, which often leads to oversteer at the wrong times and wild oscillation and swings.  They have difficulty with short-term actions that produce long-term effects. They assume causes must be proportionate to effects. (Think of the long and variable lags in monetary policy, and the tendency to oversteer.)
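
To make the delay problem concrete, here is a minimal Python sketch (the target, gain and delay values are invented for illustration): the decision-maker corrects based on today’s gap, but each correction only takes effect a few periods later, so the system overshoots and swings.

```python
# Toy illustration of decision-making with a delayed feedback loop.
# The manager reacts to the current gap between target and state, ignoring
# corrections already "in the pipeline" -- the classic mistake Sterman
# documents in his beer-game experiments. All parameter values are made up.

target = 100.0              # desired level (inventory, temperature, inflation...)
state = 60.0                # current level
delay = 3                   # periods before a correction actually takes effect
gain = 0.8                  # how aggressively the current gap is closed
pipeline = [0.0] * delay    # corrections ordered but not yet felt

for t in range(20):
    correction = gain * (target - state)   # react to today's gap only
    pipeline.append(correction)
    state += pipeline.pop(0)               # the oldest correction finally lands
    print(f"t={t:2d}  state={state:8.1f}")

# The state shoots past 100, swings back, and the oscillations grow rather
# than settle: delays plus aggressive correction produce instability.
```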

Decision-makers have problems with stocks and flows. In essence, a stock is the water in the bath. A flow is the water running from the tap.

People have poor intuitive understanding of the process of accumulation. Most people assume that system inputs and outputs are correlated (e.g., the higher the federal budget deficit, the greater the national debt will be). However, stocks integrate (accumulate) their net inflows. A stock rises even as its net inflow falls, as long as the net inflow is positive: the national debt rises even as the deficit falls—debt falls only when the government runs a surplus; the number of people living with HIV continues to rise even as incidence falls—prevalence falls only when infection falls below mortality. Poor understanding of accumulation has significant consequences for public health and economic welfare.
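
The same point in a few lines of Python, with made-up numbers: the deficit (the flow) shrinks every year, yet the debt (the stock) keeps climbing until the flow actually turns negative.

```python
# Minimal stock-and-flow sketch of the debt/deficit example in the quote above.
# The stock (debt) integrates its net inflow (the annual deficit), so it keeps
# rising as long as the inflow is positive, however small. Numbers are invented.

debt = 1000.0                              # stock: the water already in the bath
deficits = [200, 150, 100, 50, 10, -40]    # flow each year; negative = surplus

for year, deficit in enumerate(deficits, start=1):
    debt += deficit                        # the stock accumulates the net inflow
    direction = "falling" if deficit < 0 else "still rising"
    print(f"year {year}: deficit {deficit:+5d} -> debt {debt:7.0f} ({direction})")

# Only in year 6, when the flow turns negative (a surplus), does the debt fall --
# even though the deficit had been shrinking the whole time.
```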

People also fail to learn from experience, especially in groups. They see what they believe, and believe what they see. They use defensive routines to save face, and they avoid testing their beliefs, especially in public.

Note that these are not the problems getting prime attention in behavioral economics, let alone mainstream economics. Why don’t system ideas get more attention? Sterman notes that, more generally, people often fail to learn from hard evidence.

More than 2 and one-half centuries passed from the first demonstration that citrus fruits prevent scurvy until citrus use was mandated in the British merchant marine, despite the importance of the problem and unambiguous evidence supplied by controlled experiments.

For me, one additional major reason might be that we are so used to the analytic approach: breaking things down into their component parts and examining each separately. This has worked extremely well for decades in science and business, when applied to things which don’t change and adapt all the time. Systems thinking, by contrast, is about looking at the interaction between elements. It is synthesis, “joining the dots”, putting the pieces together and seeing how they work and interrelate in practice.

And that might be an additional explanation for the hedgehog versus fox distinction. You recall the fundamentally important research that finds that foxes, “who know many things”, outperform hedgehogs “who know one big thing”  at prediction and decision. Hedgehogs are drawn more to analysis and universal explanation; foxes are drawn more to synthesis and observation.

As a result, hedgehogs have much greater difficulty with system thinking. Foxes are more likely to recognize and deal with system effects. If you confront a complex adaptive system (like the economy or financial markets), that gives foxes an edge.

 

 

Who gave the order to shoot down a civil airliner?

The loss of flight MH17 over Ukraine, with debris, bodies and dead children's stuffed animals strewn over the remote steppe, is unspeakably tragic. Major Western countries have been swift to accuse and condemn Russian rebel groups, and by extension Putin, of a repugnant crime.

It's unlikely, however, that someone identified a Malaysian airliner overhead and deliberately chose to shoot it down. It's more probable that the Russian rebels didn't have the skill or backup to know they were firing at a civilian airliner.

It might not change the moral blame attached to the incident. At best it would be awful negligence. It might not affect the desire to hold leaders accountable.

But it ought to make people stop and think about how decisions get made as well. The near-automatic default in most public and market discussion is to think in rational actor terms. Someone weighed the costs and benefits of alternatives. They chose to shoot down the airliner. So find the person who made that horrible choice.

So how do you deal with a world in which that doesn't happen most of the time? Where people shoot down airliners without intending to? When the financial system crashes, or recessions happen, or the Fed finds it hard to communicate with the market? Where people ignore major alternatives, or use faulty theories and data? When they fail to grasp the situation and fail to anticipate side-effects?

There's actually a deeper and more important answer to these questions.

Who was to blame for Challenger?

Let's go back to the example of the Challenger shuttle disaster I mentioned in the last post, because it's one of the classic cases of failed decision-making in recent times. Here was an organization – NASA – which was clearly vastly more skilled, disciplined and experienced than Russian rebels. But it still suffered a catastrophic misjudgment and failure. Seven crew died. Who was to blame?

The initial public explanation of the shuttle disaster, according to the author Diane Vaughan, was that middle management in NASA deliberately chose to run the risk in order to keep to the launch schedule. As in so many corporations, production pressure meant safety was ignored. Management broke rules and failed to pass crucial information to higher levels.

In fact, after trawling through thousands of pages later released to the National Archives and interviewing hundreds of people, she concluded that no-one specifically broke the rules or did anything they considered wrong at the time.

On the one hand, this is good news – genuinely amoral, stupid, malevolent people may be rarer than you'd think from reading the press. On the other hand, though, it is actually much more frightening.

NASA, after all, were the original rocket scientists – dazzlingly able people who had sent Apollo to the moon some years before. NASA engineers understood the physical issues they were dealing with far better than we are ever likely to be able to understand the economy or market behavior.

NASA had exceptionally thorough procedures and documentation. They made extensive efforts to share information. They were rigorous and quantitative. In fact, ironically, the latter was part of the problem, because observational data and photographic evidence about penetration of the O-ring seal was discounted as too tacit and vague.

So what was the underlying explanation of the catastrophe? It wasn't simply a technical mistake.

Possibly the most significant lesson from the Challenger case is how environmental and organizational contingencies create prerational forces that shape worldview, normalizing signals of potential danger, resulting in mistakes with harmful human consequences. The explanation of the Challenger launch is a story of how people who worked together developed patterns that blinded them to the consequences of their actions. It is not only about the development of norms but about the incremental expansion of normative boundaries: how small changes – new behaviors that were slight deviations from the normal course of events – gradually became the norm, providing a basis for accepting additional deviance. (p. 409)

Conformity to norms, precedent, organizational structure and environmental conditions, she says,

congeal in a process that can create a change-resistant worldview that neutralizes deviant events, making them acceptable and non-deviant.

Organizations have an amazing ability to ignore signals that something is wrong – including, she says, in the history of US involvement in Vietnam.

The upshot? Often individuals and corporations do carry out stupid and shortsighted activities (often because they ignore trade-offs). But more often they have an extraordinary ability to ignore contrary signals, especially if they accumulate slowly over time, and to convince themselves they are doing the right thing.

People develop “patterns that blind them to the consequences of their actions” and change-resistant worldviews. That's why I look for blind spots: research shows they are the key to understanding decisions and breakdowns. You can look for those patterns of behavior. One sign, for example, is the slow, incremental redefinition, normalization and acceptance of risk that Vaughan describes.

I'm going to look much more at systems in coming posts.

 

 

July 20, 2014 | Confirmation bias, Decisions, High-Reliability Organizations, Organizational Culture and Learning, Risk Management

Technology isn’t the problem. Short-term management thinking is.

Is technology going to displace workers and hollow out the middle class? Or is technology stagnating instead and bringing the great era of productivity gains to an end? Here's a good point in the Harvard Business Review:

It’s a lively debate, but here’s the perspective that isn’t being voiced: There’s more to progress than technological innovation. Breakthroughs can also result from innovations in management.

Past work by another economist, Paul Romer, helps make the point. He explains that the history of progress is a history of two types of innovation: Inventions of new technologies, and introductions of new laws and social norms. We can make new tools, and we can make new rules. The two don’t always march in lockstep. In a period of time where one type of innovation flags, the other type can sometimes forge ahead.

Go back to some of Peter Drucker's ideas, the authors say.

We would also argue for a different managerial mindset toward productivity and the best use of technology – specifically to adopt what Peter Drucker called a human centered view of them. Cowen is right when he describes today’s technologies as displacers of human work, but that is not the only possibility. Managers could instead ask: How can we use these tools to add power to the arm (and the brain) of the worker? How could they enable people to take on challenges they couldn’t before?

Management thinking has become too short-term, they argue, which often crowds out innovation in how to use technology to alter social practices.

Perhaps a better way to put it is that social and institutional innovation always takes longer than technological innovation. Most of the current institutional framework we take for granted – joint-stock corporations with corporate personality, accounting (more than book-keeping), regulation, consumer credit, white-collar jobs – developed after the industrial revolution. They were invented to cope with economic change and social upheaval.

The answer to current economic challenges is not necessarily, as the liberal left thinks, massive new redistribution schemes between the “1%” and an increasingly impoverished lower middle class and poor. It's new social feedback loops and institutional innovation. And better management figuring out new ways to add value.

May 19, 2014 | Adaptation, Cyclical trends, Organizational Culture and Learning, Technology

“Strategies grow like weeds in a garden”. So do trades.

How much should you trust “gut feel” or “market instincts” when it comes to making decisions or trades or investments? How much should you make decisions through a rigorous, formal process using hard, quantified data instead? What can move the needle on performance?

In financial markets more mathematical approaches have been in the ascendant for the last twenty years, with older “gut feel” styles of trading increasingly left aside. Algorithms and linear models are much better at optimizing in specific situations than the most credentialed people are (as we’ve seen.) Since the 1940s business leaders have been content to have operational researchers (later known as quants) make decisions on things like inventory control or scheduling, or other well-defined problems.

But rigorous large-scale planning for major decisions has turned out to be a disaster whenever it has been tried. It has been about as successful in large corporations as planning was in the Soviet Union (for many of the same reasons). As one example, General Electric originated one of the main formal planning processes in the 1960s. The stock price then languished for a decade. One of the very first things Jack Welch did was to slash the planning process and planning staff. Quantitative models (on the whole) performed extremely badly during the Great Financial Crisis. And hedge funds have increasing difficulty even matching market averages, let alone beating them.

What explains this? Why does careful modeling and rigor often work very well on the small scale, and catastrophically on large questions or longer runs of time? This obviously has massive application in financial markets as well, from understanding what “market instinct” is to seeing how central bank formal forecasting processes and risk management can fail.

Something has clearly been wrong with formalization. It may have worked wonders on the highly structured, repetitive tasks of the factory and clerical pool, but whatever that was got lost on its way to the executive suite.

I talked about Henry Mintzberg the other day. He pointed out that, contrary to myth, most successful senior decision-makers are not rigorous or hyper-rational in planning. Quite the opposite. In the 1990s he wrote a book, The Rise and Fall of Strategic Planning, which tore into formal planning and strategic consulting (and which is where the quote above comes from).

There were three huge problems, he said. First, planners assumed that analysis can provide synthesis, insight or creativity. Second, they assumed that hard quantitative data alone ought to be the heart of the planning process. Third, they assumed that the context for plans is stable, or at least predictable. All three assumptions were just wrong. For example,

For data to be “hard” means that they can be documented unambiguously, which usually means that they have already been quantified. That way planners and managers can sit in their offices and be informed. No need to go out and meet the troops, or the customers, to find out how the products get bought or the wars get fought or what connects those strategies to that stock price; all that just wastes time.

The difficulty, he says, is that hard information is often limited in scope, “lacking richness and often failing to encompass important noneconomic and non-quantitative factors.” Often hard information is too aggregated for effective use. It often arrives too late to be useful. And it is often surprisingly unreliable, concealing numerous biases and inaccuracies.

The hard data drive out the soft, while that holy ‘bottom line’ destroys people’s ability to think strategically. The Economist described this as “playing tennis by watching the scoreboard instead of the ball.” ..  Fed only abstractions, managers can construct nothing but hazy images, poorly focused snapshots that clarify nothing.

The performance of forecasting was also woeful, little better than the ancient Greek belief in the magic of the Delphic Oracle, and “done for superstitious reasons, and because of an obsession with control that becomes the illusion of control.”

Of course, to create a new vision requires more than just soft data and commitment: it requires a mental capacity for synthesis, with imagination. Some managers simply lack these qualities – in our experience, often the very ones most inclined to rely on planning, as if the formal process will somehow make up for their own inadequacies. … Strategies grow initially like weeds in a garden: they are not cultivated like tomatoes in a hothouse.

Highly analytical approaches often suffered from “premature closure.”

.. the analyst tends to want to get on with the more structured step of evaluating alternatives and so tends to give scant attention to the less structured, more difficult, but generally more important step of diagnosing the issue and generating possible alternatives in the first place.

So what does strategy require?

We know that it must draw on all kinds of informational inputs, many of them non-quantifiable and accessible only to strategists who are connected to the details rather than detached from them. We know that the dynamics of the context have repeatedly defied any efforts to force the process into a predetermined schedule or onto a predetermined track. Strategies inevitably exhibit some emergent qualities, and even when largely deliberate, often appear less formally planned than informally visionary. And learning, in the form of fits and starts as well as discoveries based on serendipitous events and the recognition of unexpected patterns, inevitably plays a role, if not the key role, in the development of all strategies that are novel. Accordingly, we know that the process requires insight, creativity and synthesis, the very things that formalization discourages. [my bold]

If all this is true (and there is plenty of evidence to back it up), what does it mean for formal analytic processes? How can it be reconciled with the claims of Meehl and Kahneman that statistical models hugely outperform human experts? I’ll look at that next.

May 2, 2014 | Adaptation, Assumptions, Big Data, Decisions, Expertise, Foxes and Hedgehogs, Insight & Creativity, Organizational Culture and Learning, Quants and Models, Risk Management

Picking higher-hanging fruit

Pessimism about technology and growth has become fashionable recently, following arguments by Robert Gordon and Tyler Cowen. The “low-hanging fruit” is gone, says Cowen in The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History.

The great economic historian Joel Mokyr is having none of it, however. People develop better tools, which lead to better new technologies, which lead to better tools.

If this historical model holds some truth, the best may still be to come for modern societies. Only in recent decades has science learned to use high-powered computing and the storage of massive amounts of searchable (and thus accessible) data at negligible costs. The vast array of instruments and machines that can see, analyze, and manipulate entities at the sub-cellular and sub-molecular level promise advances in areas that can be predicted only vaguely. But these tools, to beat Cowen’s metaphor into the ground, allow us to build taller ladders to pick higher-hanging fruit. We can also plant new trees that will grow fruits that no one today can imagine.

I previously mentioned Mokyr here, especially the interaction between formal expertise and practical know-how in the take-off of the industrial revolution.

The question is, of course, how it affects the labor market.

 

April 5, 2014 | Adaptation, Economics, Organizational Culture and Learning, Technology