Woodenheaded Disasters

So we have a nervous start to the New Year, with a plunge in the Chinese stock market and tensions in the Gulf. There is a widespread sense that the establishment in many countries is “out of touch” and leadership is faltering. I was arguing the other day that there is often a deeper pattern to these problems. The Republican establishment in the US ignored all evidence that didn’t match their preconceptions until an anointed prince like Jeb Bush was running at only 4% in primary polling.

This is in fact one of the deepest patterns in history. The great historian Barbara Tuchman pondered in The March of Folly: From Troy to Vietnam why policymakers and leaders so often do things which seem self-defeating and stupid.

A phenomenon notable throughout history regardless of place or period is the pursuit by governments of policies contrary to their own interests. … Why does intelligent mental process seem so often not to function?

She called it Wooden-headedness.

Wooden-headedness, the source of self-deception, is a factor that plays a remarkably large role in government. It consists in assessing a situation in terms of preconceived fixed notions while ignoring or rejecting any contrary signs. It is acting according to wish while not allowing oneself to be deflected by the facts.

It is all too easy to decide a policy was wrong in retrospect, of course. But wooden-headedness means policies or decisions which are self-defeating and ruinous based on things which were clearly apparent at the time. And it is remarkably common. She traces it from ancient Greece through the policies of Renaissance Popes, Philip II of Spain and the decision of the Japanese government to go to war with the United States in 1941. US intervention in Vietnam, in turn, led by the “best and the brightest” of the Kennedy administration, was beset by folly.

Folly’s appearance is independent of era or locality; it is timeless and universal, although the habits and beliefs of a particular time determine the form it takes.

One of the most interesting examples is the folly of the British government in its policy on its American colonies. The governing elite believed that trade with and possession of the thirteen colonies was utterly essential to Britain’s wealth and future, but insisted on the right to tax without the colonists’ consent. It was, she says, the unworkable pursued at the expense of the possible.

Instead of confronting trade-offs or looking for alternatives, politicians in London were largely diverted by the game of faction: who’s in, who’s out. And here is the most remarkable fact she notes: no British ministers visited America between 1763 and 1775, despite thinking the fate of the empire depended on possession of America.

There is often remarkable reluctance to go and look at the facts on the ground with a fresh eye. And it is not easy to do, whether it is sailing the Atlantic in the 1760s or working out why a business product is withering in the marketplace.  But it is also essential. Instead of prediction, it is a matter of taking a fresh look at what is already there. It is about discovering what you’re not seeing. It is about blind spots.

Most predictions about what will happen in 2016 will turn out to be wrong, of course. But at least we can try to look for contrary evidence and test our assumptions, so we are not woodenheaded like most establishments and bureaucracies.


January 4, 2016|Assumptions, Books, Decisions|

When looking at “bias” is not enough

How do people miss the crucial factors that can destroy their companies, or projects, or policies? One important answer is bias, which includes problems like the availability heuristic or anchoring. Those are analyzed by the Judgment and Decision-Making (JDM) field in psychology, and its offshoot, Behavioral Economics. Most often, these disciplines look at departures from normative rationality in lab experiments.

But if you look around your own company or organization, it takes less than three seconds to realize that culture is just as important, and it does not fit neatly into behavioral economics. Culture is the basic fabric of how organizations think and act. It is why mergers and acquisitions so often go wrong, or why companies fail to adapt to change.

In fact, there is a very different field of research into company culture and its many potential blindspots. The most important classic work is Edgar Schein’s Organizational Culture and Leadership.

Culture, says Schein, is the accumulated shared learning of a group. It is created as the group confronts and initially solves its basic problems of survival. As those courses of action

… continue to be successful in solving the group’s internal and external problems, they come to be taken for granted and the assumptions underlying them cease to be questioned or debated. A group has a culture when it has had enough of a shared history to have formed such a set of shared assumptions.


Then it becomes so taken for granted that any attempt to change it can create high anxiety.

Rather than tolerating such anxiety levels we tend to want to perceive the events around us as congruent with our assumptions, even if that means distorting, denying, projecting or in other ways falsifying to ourselves what may be going on around us. It is in this psychological process that culture has its ultimate power. Culture as a set of basic assumptions defines for us what to pay attention to, what things mean, how to react emotionally to what is going on, and what actions to take in various kinds of situations.

Once we have developed an integrated set of such assumptions, what might be called a thought world or mental map, we will be maximally comfortable with others who share the same set of assumptions and very uncomfortable and vulnerable in situations where different assumptions operate, either because we will not understand what is going on, or, worse, misperceive and misinterpret the actions of others.

[my bold]

Just think about the potential for damage and error, and how often you have likely seen problems like this in your own experience.

The problem is that major decisions most often go wrong not because you get the information or calculations wrong, but because you get the assumptions wrong. National intelligence is a prime example. The news is full of NSA success in gathering data. But making sense of the data is the real challenge.

Culture is one of the most important reasons decision-makers frequently fail to ask the critical questions in the first place. Schein discusses which assumptions are usually most important, how new hires absorb culture, how leaders try to alter it, and how culture changes through the evolution of firms from start-ups to declining corporate monoliths.

You can’t escape shared assumptions and culture, he says, nor would you want to. Without shared assumptions there can be no group, just a collection of people. But almost by definition, it also means people are incapable of seeing many of the most critical problems from within a particular company culture.  Organizations find it hard to learn.


Knowing what versus knowing how

I was talking in the last post about Nassim Nicholas Taleb, who ferociously attacks theoretical experts like economists and risk managers in banks. He argues instead for the practical tacit knowledge acquired by tinkering over long periods. Of course, this is a much longer-term and deeper issue. One of the most important works in recent economic history, Joel Mokyr’s The Gifts of Athena: Historical Origins of the Knowledge Economy, traces the two kinds of knowledge through the genesis of the industrial revolution:

Useful knowledge as employed throughout the following chapters describes two types of knowledge. One is knowledge “what” or propositional knowledge (that is to say, beliefs) about natural phenomena and regularities. Such knowledge can then be applied to create knowledge “how,” that is, instructional or prescriptive knowledge, which we may call techniques. In what follows, I refer to propositional knowledge as Ω-knowledge and to prescriptive knowledge as λ-knowledge. If Ω is episteme, λ is techne.

The industrial revolution was in large part brought about by practitioner tinkering and “λ-knowledge”, he says, and the formal sciences did not catch up until later. And insofar as propositional knowledge mattered, it was more because of changes in access to it.

In other words, changes in the overall size of Ω (what was known) may have been less important in the Industrial Revolution than the access to that knowledge. Moreover, the process was highly sensitive to outside stimuli and incentives. The social and institutional environment has always been credited with a central role in economic history.

So it is true that practitioners account for more growth in knowledge, and certainly more useful applications, than formal experts would allow. But there is often a creative dialogue between the two kinds of knowledge.

The historical question is not whether engineers and artisans “inspired” the scientific revolution or, conversely, whether the Industrial Revolution was “caused” by science. It is whether practical men could have access to propositional knowledge that could serve as the epistemic base for new techniques. It is the strong complementarity, the continuous feedback between the two types of knowledge, that set the new course.

Without consolidation and focus in knowledge “what”, knowledge “how” wastes time on unproductive tinkering and becomes less effective.

What has this got to do with immediate economic policy and decisions? It is about how much you should believe formal experts, like academic economists and planners and forecasters, and how much practitioners. The answer is both have a role. But the importance of practical wisdom is usually underrated, especially by the formal experts.

September 17, 2013|Books, Decisions, Expertise, Perception|

The most calamitous failures of prediction

I’ve been talking recently about how the biggest problem with decisions isn’t information, but seeing what you want to see. It is a matter of how you frame issues, and how you resist or learn from events.

Nate Silver has become famous for his quantitative models of the US election, which outperformed most pundits. But, he says in his excellent book The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t, what matters most is not the data or the regressions. You have to get the frame right.

The most calamitous failures of prediction usually have a lot in common. We focus on those signals that tell a story about the world as we would like it to be, not how it really is. We ignore the risks that are hardest to measure, even when they pose the greatest threats to our well-being. We make approximations and assumptions about the world that are much cruder than we realize. We abhor uncertainty, even when it is an irreducible part of the problem we are trying to solve.

Markets focus so much on what the latest economic data mean for growth, the economic cycle and inflation. It’s the bread and butter of market commentary, even if it has become commoditized and everyone knows forecasts are generally inaccurate and overconfident.

But instead, what really matters is what incoming evidence says about the assumptions and frames and expectations people hold – and whether the evidence leads them to change their view. There is very little evidence you can make money out of predicting the economic cycle. But misperception is pervasive, and is the most critical factor in your most critical decisions. And you can’t easily see it yourself.

Criticism is useless without curiosity

I looked at advice by Buffett and Dalio earlier this week. Seek out criticism, they say.

It isn’t criticism for its own sake which is valuable, however. You don’t necessarily gain much from someone yelling at you or telling you that you are doing everything wrong. Neither do you necessarily gain by triumphantly refuting someone else’s objections.

Instead, the trick is to be able to step outside your own perspective and see how facts could fit another explanation. It’s understanding the difference between perspectives which is the key, rather than just arguing loudly from different positions.

There’s an interesting case study in Gary Klein’s book Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making about the limits of feedback, including the ability to make sense of it or shift mental models. Klein specializes in “naturalistic” decision-making – how skilled people actually make urgent decisions in the field under pressure, rather than at leisure with spreadsheets. I mentioned one of his previous books in Alucidate’s conceptual framework here.

Doug Harrington was a highly skilled pilot who had landed F-4 aircraft on carriers hundreds of times. But he kept failing to qualify to land the A-6 aircraft, despite continued feedback from the landing signal officers (LSOs) on the ship. “Veer right,” they repeatedly told him on every approach. But the feedback didn’t help him work out what was wrong. He faced the immediate end of his naval flying career, or worse, a crash into the back of the ship.

The Chief Landing Officer eventually asked Harrington how he was lining up the plane. It turned out the A-6 cockpit has side-by-side seats, rather than the navigator seated behind the pilot. That slight difference in perspective threw off the pilot’s habitual way of lining up the nose of the plane against the carrier. Feedback and criticism alone didn’t help him figure out what was wrong. A small shift in perspective did.

The LSO was not a coach or a trainer. He didn’t give any lectures or offer any advice. He didn’t have to add any more feedback. What he brought was curiosity. He wanted to know why a pilot as good as Harrington was having so much trouble. He used Harrington’s response to diagnose the flaw in Harrington’s mental model. Then the LSO took the interaction another step. Instead of just telling Harrington what was wrong, the LSO found an easy way for Harrington to experience it. Harrington already suspected something might be wrong with his approach. The simple thumb demonstration was enough for Harrington to form a new mental model about how to land an A-6.

Mental models, or mindsets, are more important than criticism or argument in isolation.

It’s not just a matter of criticism, but curiosity. I’ve always found the most successful decision-makers and traders are the ones who want to know how other people think.


May 8, 2013|Books, Decisions, Mindfulness, Perception, Psychology|

The Gorilla in the Room

The post about healthcare below shows how people can easily pay attention to the wrong things. This is a pervasive problem, and helps explain why policymakers are so often surprised by sudden shifts in market attention.

The most famous experiment in psychology conducted in the last twenty years relates to this. You’re asked to count how many times the white team passes the ball in this video.

About half the people who watch the video completely fail to notice that a gorilla walks through the players and stays on camera for nine seconds. If our attention is elsewhere, we often literally can’t see the gorilla in the room.

It is more than just a matter of visual perception, either. The people who conducted the experiment (inevitably) have written a book about it: The Invisible Gorilla: How Our Intuitions Deceive Us.

The gorilla study illustrates, perhaps more dramatically than any other, the powerful and pervasive influence of the illusion of attention: We experience far less of our visual world than we think we do.  In essence, we know how vividly we see some aspects of our world, but we are completely unaware of those aspects of our world that fall outside of that current focus of attention. Our vivid visual experience masks a striking mental blindness—we assume that visually distinctive or unusual objects will draw our attention, but in reality they often go completely unnoticed.

The bigger lesson is that we very often fail to see things we do not expect to see. Other studies show drivers often hit motorcyclists because they literally do not see them: they do not expect to see them.

Much like the subjects in our gorilla experiment, drivers often fail to notice unexpected events, even ones that are important.

I’ll come back to this issue of expectations and predispositions another time.

There’s another point here, too. One of the reasons that psychology is going through a boom right now is that its experiments often turn into vivid anecdotes that people remember far better than mathematical models or statistical tests. The gorilla beats the old style of running rats through mazes. In fact, Daniel Kahneman remarks in the video I posted here that he believes a major reason his original paper got so much attention was that it contained plenty of interesting examples.

The mind finds it very easy to go from the particular, from vivid anecdotes or stories,  to the general, he says. But people don’t like to go the other way and apply general rules to particular examples. They don’t think psychological findings apply to them, for example.  So things like the gorilla story can become so widespread they turn into popular culture legends, which carries its own risk of distortion.

March 13, 2013|Books, Confirmation bias, Decisions, Perception, Psychology, Videos|

Models behaving badly

The smartest quants (eventually) know their limits. Emmanuel Derman was until recently the head quant at Goldman Sachs, and pioneered many of the main quant approaches. He has now written a book, Models.Behaving.Badly.: Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life about his experiences. And he is very disillusioned with those who believe financial modelling can ever be rigorous like physics.

One should be humble in applying mathematics to markets, and be wary of overly ambitious theories. Whenever we make a model of something involving human beings, we are trying to force the ugly stepsister’s foot into Cinderella’s pretty glass slipper. It doesn’t fit without cutting off some essential parts. Financial models, because of their incompleteness, inevitably mask risk. You must start with models but then overlay them with common sense and experience.

Remember, this is one of the foremost achievers in mathematical finance speaking.

In fact, it is usually the most brilliant quants and modellers who are also conscious of the potential limitations and quick to recognize the need for context. The biggest danger from models comes from less-skilled people who use techniques in a cookbook-like way without thinking for themselves. A sense of limits is essential for making the right market decisions.

We should not be surprised by the failure of models, Derman says. But we should be more surprised that we bailed out people who made the wrong judgements.

I wasn’t surprised by the failure of economic models to make accurate forecasts. Any assurance economists pretend to with regard to cause and effect is merely a pose or an illusion. They whistle in the dark while they write their regressions that ignore the humans behind the equations. I was similarly unsurprised by the failure of financial models. Financial models don’t forecast; they transform one’s forecasts of the future into present value. Everyone should understand the difference between a model and reality and be unastonished at the inability of one- or two-inch equations to represent the convolutions of people and markets.

What did shock and disturb me was the abandonment of the principle that everyone had paid lip service to: the link between democracy and capitalism. We were told not to expect reward without risk, gain without the possibility of loss. Now we have been forced to accept crony capitalism, private profits and socialized losses, and corporate welfare.

It’s a mixed book. There is a great deal of biography, stories of growing up as a semi-Zionist Jew in apartheid South Africa. Publishers more or less force people to put a “personal” element or narrative into most non-fiction books these days. That didn’t stop him from also getting in extensive discussions of Spinoza’s Ethics, which appeals to his neat physicist’s mind. It doesn’t all hang together. Still, it’s worth a read, if only as a reminder that it tends to be the second-rate people who claim too much for models. The first-rate people know you need context and experience to interpret them as well.

March 8, 2013|Books, Market Behavior, Quants and Models|

Howling at the Moon

The prize for the week’s wildest metaphor goes to PIMCO, talking about the Fed. No longer content with hawks and doves, now we have wolves. Central banks are like a wolf pack, you see, with an established hierarchy of alphas, betas, and omegas.

Officials already tend to think “hawk” and “dove” are a bit birdbrained, trivializing the deeper arguments at stake. The labels risk turning serious policy into a Disney-fied caricature, or now a pack contest of fur, fangs and claws. Crescenzi and the others at PIMCO are nonetheless right that what matters most is where the leadership group on the committee is. Bernanke is more willing to tolerate as many as three dissents if really necessary.

But the metaphors downplay how open most of the committee is to new evidence, which can rapidly change predispositions. The key is still not so much “some” or “many” in the minutes, as what will cause the committee’s view to change.

Wolves have a habit of changing course and savaging sheep who follow blindly.

Incidentally, the best recent wolf metaphor is the Chinese novel which recently broke all publishing records in the domestic market, Wolf Totem: A Novel.


February 22, 2013|Books, Central Banks|

Finding Patterns to Avoid Mistakes

One of the main things I argue on this blog is that there are patterns of problems and traps that you can look out for if you are alert. Perception matters. The discussion of the Fed’s view of portfolio losses in this post earlier this week is an instance of a more general problem: people nearly always find it hard to see other people’s preferences and identities.

Here is James March, one of the most important writers on decisions, in his classic book A Primer on Decision Making: How Decisions Happen.

Decision processes depend on perceptions of preferences and identities. These perceptions are subject to human error. … Individuals develop beliefs about their own and others’ preferences and identities on the basis of incomplete information. They infer preferences and identities from actions, events and communications that are susceptible to multiple interpretations. They guess values that are obscured by problems of interpersonal and intercultural communication, as well as by deliberate falsifications and strategic misinformation.

… As a result, decision makers are likely to be inaccurately informed about what other people want, how they intend to get it, what they think is appropriate behavior, and how they feel about other people.

March, who teaches at Stanford Business School, will be familiar to economists as one of the founders of the behavioral theory of the firm.

In other words, if you want to make successful decisions, one thing you have to watch out for is failing to grasp the preferences and notions of appropriateness of other key decision-makers. The trick, of course, is to combine awareness of the general patterns with substance and knowledge of the practical realities, in this case of global macro. You need both the pattern and the knowledge of substance to spot problems.


February 20, 2013|Books, Decisions, Organizational Culture and Learning|

Why we need checklists

How do you prevent complexity from leading to a fiery crash? There is a fine story about the origin of pilots’ checklists in Atul Gawande’s The Checklist Manifesto: How to Get Things Right. Boeing’s first major four-engine plane crashed disastrously during a 1935 demonstration flight. People thought it was too complicated to ever fly.

A small crowd of army brass and manufacturing executives watched as the Model 299 test plane taxied onto the runway. It was sleek and impressive, with a 103-foot wingspan and four engines jutting out from the wings, rather than the usual two. The plane roared down the tarmac, lifted off smoothly, and climbed sharply to three hundred feet.

Then it stalled, turned on one wing, and crashed in a fiery explosion. Two of the five crew members died, including the pilot, Major P. Hill.  An investigation revealed that nothing mechanical had gone wrong. The crash had been due to “pilot error,” the report said. Substantially more complex than previous aircraft, the new plane required the pilot to attend to the four engines, each with its own oil-fuel mix, the retractable landing gear, the wing flaps, electric trim tabs that needed adjustment to maintain stability at different airspeeds, and constant-speed propellers whose pitch had to be regulated with hydraulic controls, among other features. While doing all this, Hill had forgotten to release a new locking mechanism on the elevator and rudder controls.

The Boeing model was deemed, as a newspaper put it, “too much airplane for one man to fly.” The army air corps declared Douglas’s smaller design the winner. Boeing nearly went bankrupt.

Still, the army purchased a few aircraft from Boeing as test planes, and some insiders remained convinced that the aircraft was flyable. So a group of test pilots got together and considered what to do. What they decided not to do was almost as interesting as what they actually did. They did not require Model 299 pilots to undergo longer training. It was hard to imagine having more experience and expertise than Major Hill, who had been the air corps’ chief of flight testing. Instead, they came up with an ingeniously simple approach: they created a pilot’s checklist.

The military ended up ordering 13,000 of the planes, which became the B-17. Pilots still methodically walk around your Boeing or Airbus plane today, kicking the tires and checking off the take-off procedure. Even if they do it on iPads.

February 18, 2013|Books, Mindfulness, Situation Awareness|