
Don’t assume you know “what the terrorists want”

It’s difficult not to be moved and angered by the horrific terrorist attacks in Paris last night.  My heart goes out to the victims and their families.

What should we do about it? Here’s one important point. There are already signs of a rush to apply the usual narratives about “why we were attacked” or “what the terrorists want.”

Most of it is half-baked nonsense or wishful thinking. One of the most insidious blind spots is mirror imaging: we tend to assume others think in much the same way we do, especially when it is convenient for maintaining our own preexisting view of things. We look at others and see a reflection of ourselves.

It takes most people microseconds to come up with potential rationales for action, or justification, in the same way the mind leaps to see meaningful patterns in clouds. It is far too easy to produce neat stories about people’s motivations. That leads to self-delusion and ineffectiveness.

It is actually extremely difficult to recognize motivations and rationales, often even when the evidence is staring you in the face. One of the better recent analyses, “What ISIS Really Wants,” appeared in the Atlantic. It says:

In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.”

That is actually a note of wisdom. More usually, decision-makers charge ahead with stupendous overconfidence.

The Atlantic author goes on to argue that the religious motive is much more important than western analysts usually believe. Instead, we try to assimilate the terrorists’ thinking into our own secularized worldview, with its rational pursuit of objectives like money or political influence – to find reasons that make sense in our own terms.

The Atlantic argument may or may not be right, although to its great credit it actually seems to reflect first-hand knowledge. The important point is that this is a situation where evidence should count: actual in-depth knowledge of how they think in private (and not just statements intended for press coverage). What we would want in their shoes, or what the terrorists could want or should want, is beside the point. We need to stop the convenient narratives.

Another problem that applies here is facile analogies, including analogies to previous episodes of terrorism or counterinsurgency. Note that many of the postwar examples of terrorism were in essence decolonial problems, where a major power was attempting to keep some control over a somewhat different population. Examples include France’s pain in Algeria, the Israeli-Palestinian conflict, Northern Ireland, Malaya, and many others. In those situations terrorists might gain by provoking the major power into overreacting and alienating the local population. The costs of maintaining occupation would rise so much that the major power would pack up and go home.

Be wary of that model, which unconsciously shapes so much of the way experts tend to think about terrorism. It does not necessarily apply when the attacks are in the heart of the major power itself and there is no home to pack up and return to.

Of course, the only point of understanding is to take the most effective action to crush the terrorists. Wishful thinking won’t help prevent the next attack.

November 14, 2015 | Mirror Imaging, Perception

Why reporting the facts on Greece does not produce value

Let’s step back a bit from the immediate Greek vote, which is happening today. How did we get here? There is no good outcome, no matter whether the vote is “yes” or “no.” As one opinion piece by a Greek writer in the Guardian puts it, it’s a choice between “catastrophe or absolute catastrophe.” It seems every year brings another financial disaster or crisis of one kind or another, and it has been like that for a generation.

Why? Think about this: it has not been a matter of getting the facts wrong. The basic facts of the situation have rarely been in dispute. No one has deliberately concealed the relevant economic theories, the interests of the parties, or the choices at issue. Even if some Greek statistics were distorted or falsified in the past, that self-delusion is largely behind us.

Just as in the run-up to the 2008 financial crisis, most of the problems have been in plain view for years. People choose to ignore or discount the facts, and they talk past each other when it comes to arguments and narratives. Decision-makers get stuck in ruts and find it difficult to change course.

Most major mistakes do not come about because people don’t have access to facts, or can’t talk to the different parties. Instead, people see things in very different ways despite the facts, and it often takes the most dramatic or final losses to shake their conviction that they are right. By then it is too late – as it is today.

That means a standard journalistic account, in the way a good wire service reporter might file a story laying out the facts, is actually of surprisingly little value at this point, although many investors will be glued to their Bloomberg or Reuters screens today. There are always people with a sudden insatiable need for facts about electoral trends by region, or what party lieutenants say, or expert opinion, despite overwhelming recent evidence that political reporting is of little or no use in practice. Yes, a scoop like finding out exit poll results an hour early would be a good story. But those journalistic instincts to break news are often a distraction from understanding the underlying game. A good hard-news story and finding value in a situation are two very different things.

The facts about Apple

Unfortunately, that is a very difficult thing for some people to understand. There is a kind of mindset which is deeply uncomfortable with anything that is not hard, uncontested fact. Everything that is not certain or verified by several sources is “opinion” or “speculation” or “vague”. But in business the most valuable things are often matters of judgment or taste.

Take Apple, for instance. It is rarely at the cutting edge of technology. Its software engineering is often notoriously shoddy, as anyone who downloaded the initially bug-infested mess known as iOS 8 found out. Competitors often sell products with similar capabilities for much less money. Apple is not known for financial brilliance in the way it invests its more than $194 billion of cash reserves. But it is the most valuable company in the world. Why?

Of course, you can reduce it to discounted streams of future income or brand value or current market positioning, but what produces the income or the brand or Steve Jobs’s notorious “reality distortion field” in the first place?

It’s largely a matter of judgment and taste – a feel for ease of use and style, an ability to look beyond engineering features to put together a combination of things which delight users. And here’s an interesting point – it’s precisely because those intangibles are very hard to replicate that they produce excess value which isn’t immediately competed away.

In the same way, dozens of places around the world have tried to be the next Silicon Valley, spending billions of dollars and hiring very smart engineers. But the particular combination of culture and attitude and interpersonal connections that powers the Valley is very hard to replicate in Malaysia or Russia or France. It’s not reducible to easily reported fact. Can you really imagine reading a story in the newspaper tomorrow that reveals a stunning new fact which completely explains the secret of Apple’s success? It’s not that easy.

That’s also why journalism is in such trouble as a business, in contrast to Apple. There is very little value in information and basic facts, with rare exceptions like a legitimate tradable scoop. Facts are instantaneously replicable. Even a legitimate scoop has almost no cash value unless you can react within a few milliseconds, before the algorithms trade. Instead, value is mostly in processes and patterns and culture and combinations which are not easy to replicate.

The real sources of value are not revelations or scoops. It’s about recognition and understanding, and persuasion, and creativity, and synthesis, and a sense for what works.  It’s about alertness and adaptability. None of that is reducible to easily reported fact.

Of course judgment is fallible, and particular kinds of narrowly-defined judgment are better turned over to algorithms. (I’ve discussed that, as well as the pros and cons of big data and linear models, at length before.) But hard-nosed facts tend to lose out where it counts most for business – hard cash.

 

July 5, 2015 | Europe, Perception

How people think about Ebola

Isn’t it strange how emotive and ethically high-strung the debate about Ebola has become? Much of the press is flinging accusations of “hysteria”, and quarantine rules have led to vicious partisan exchanges. It’s better to step back and ask why epidemiology should have become such a moralized partisan issue. There are some obvious blind spots here.

Liberals are enraged at the thought of quarantine and travel restrictions, while conservatives have been much quicker to embrace them. Why? I think it is because of the central importance of the notion of “fairness” in politics. According to Jonathan Haidt’s fascinating research, people are sensitive to different moral considerations in much the way they have different taste buds on the tongue, like sweet or salty. Haidt identifies five (later six) moral taste buds. Liberals perceive issues almost entirely in terms of just two: care-harm and fairness-equality. Conservatives are receptive to those moral “tastes” but also pick up other values – authority, loyalty, and sanctity – which are more adapted to group cohesion. In fact, most people in most global cultures perceive the wider spectrum of moral considerations, perhaps because those values have had adaptive value in traditional societies over long spans of time.

This is from an NYT review of Haidt’s research, but you should read his whole book, The Righteous Mind: Why Good People Are Divided by Politics and Religion.

To the question many people ask about politics — Why doesn’t the other side listen to reason? — Haidt replies: We were never designed to listen to reason. When you ask people moral questions, time their responses and scan their brains, their answers and brain activation patterns indicate that they reach conclusions quickly and produce reasons later only to justify what they’ve decided.

Think about what this means for how people make and anticipate policy decisions. Both sides of the partisan divide tend to talk past each other.

Haidt started out as very liberal, but experiences such as living in India persuaded him that different cultures and people saw things in different ways.

The hardest part, Haidt finds, is getting liberals to open their minds. Anecdotally, he reports that when he talks about authority, loyalty and sanctity, many people in the audience spurn these ideas as the seeds of racism, sexism and homophobia. And in a survey of 2,000 Americans, Haidt found that self-described liberals, especially those who called themselves “very liberal,” were worse at predicting the moral judgments of moderates and conservatives than moderates and conservatives were at predicting the moral judgments of liberals. Liberals don’t understand conservative values. And they can’t recognize this failing, because they’re so convinced of their rationality, open-mindedness and enlightenment.

Haidt isn’t just scolding liberals, however. He sees the left and right as yin and yang, each contributing insights to which the other should listen.

So what has this to do with Ebola? The issue could almost be designed to cleave along this moral-perception fracture. Liberals perceive quarantine or restrictions on returning medical personnel or West African visa applicants as highly unfair to the individuals concerned. They are not as receptive to considerations of protecting a particular country or territory from the virus, which is the main focal point for conservatives. Furthermore, people of all persuasions have a hard time perceiving or acknowledging trade-offs between different values and objectives. In practice, liberals are unwilling to trade even a small amount of fairness for other values, because they believe they don’t have to make a choice. Hence loosening quarantine restrictions on returning healthcare workers is assumed not to make a disease outbreak in the US more likely, because what is fair must also be effective. That is a big assumption.

There are other problems here I’ll come to, including the nature of expertise and the challenge of low-probability, high-impact risks. Conservatives have their own blind spots, which I’ll also return to.

But suppose you’re a liberal reading this. Do you have to change your view or concede the other side is right? No. Believe what you want, as ardently as you want, and you can think the other side is dumb. But here’s the real point. If you have to make actual decisions, instead of just taking rhetorical positions, then whatever your partisan convictions, you can’t expect your particular viewpoint to be right every single time. No one has an automatic hotline to god-like, omniscient truth. So set up a few markers for yourself that help tell you when you should reexamine the evidence or change your mind. Just for yourself, have a few guardrails that help you recognize contrary evidence when it doesn’t fit in with your natural instincts or assumptions.

Because people tend to instinctively perceive ethical choices in certain ways and then invent reasons to justify their choice, this kind of quasi-ethical fight about public policy can get very hard to resolve – and hugely dangerous assumptions can get overlooked. So long as you have something to lose if you’re wrong, it helps to understand where the other side is coming from.

 

 

October 29, 2014 | Assumptions, Confirmation bias, Current Events, Perception, Security

Markets are Complex Systems – but most people don’t get that

One of the most successful investors of recent times has been Howard Marks. I took a look at his book about markets here. You can’t outperform if you don’t think better.

Thus, your thinking has to be better than that of others—both more powerful and at a higher level. Since other investors may be smart, well-informed and highly computerized, you must find an edge they don’t have. You must think of something they haven’t thought of, see things they miss or bring insight they don’t possess. You have to react differently and behave differently. In short, being right may be a necessary condition for investment success, but it won’t be sufficient. You must be more right than others . . . which by definition means your thinking has to be different.

First-level thinking, he says, is just having an opinion or forecast about the future. Second-level thinking, on the other hand, takes into account expectations, and the range of outcomes, and how people will react when expectations turn out to be wrong. Second-level thinkers are “on the alert for instances of misperception.”

Here’s a parallel. Marks doesn’t put it this way, but in essence it’s a matter of seeing markets as a nonlinear adaptive system, in the sense I was talking about in the last post. Second-level thinking is systems thinking. Instead of moving in straight lines, markets react through complex feedback loops which depend on the existing stock of perception (i.e., expectations). Some of the greatest market players have an instinctive feel for this. But because of the limits of the human mind when it comes to complex systems, most people have a great deal of trouble understanding markets.

That includes many mainstream economists. One obvious reason is that price and price changes form one of the most important feedback loops in markets, but not the only one. A deeper reason is that most academics tend to be hedgehogs, interested in universal explanatory theories and linear prediction and “one big thing.” But complex systems frustrate and falsify universal theories, because they change. The dominant loop changes, or new loops are added, or new players or goals change the nature of the system.

There’s another implication if you take a more systems-thinking view of markets. Complex adaptive systems are not predictable in their behavior. This, to me, is a deeper reason for the difficulty of beating the market than efficient market theory. It isn’t so much that markets are hyper-efficient information processors that instantaneously adjust, as that they are complex, so consistent, accurate prediction of their future state is impossible. It isn’t so much that markets are mysteriously prone to statistically improbable 100- or 1000-year risks happening every 10 years. It’s that markets evolve and change, and positive feedback loops can take them into extreme territory with breathtaking speed, making their behavior stray far from norms and equilibria.

“Tail Risks” are not the far end of a probability distribution, as standard finance theory and policy thinking believe. They are positive feedback loops: cascades of events feed back on each other and change the behavior of the underlying system. It’s not a matter of variance and volatility and fat-tailed distributions, but of stocks and flows and feedback, and tipping points which shift the dominant loop, and the underlying structure and changing relationships between components.
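
To make that concrete, here is a toy simulation – every number invented for illustration, not a market model – of how a feedback loop, rather than a fat-tailed input, manufactures extreme outcomes. Daily shocks are ordinary Gaussian noise; the only addition is a rule that after a bad day, forced sellers carry part of the loss into the next day.

```python
import numpy as np

# Toy sketch, not a market model: every parameter below is invented.
# Daily shocks are i.i.d. Gaussian noise with no fat tails. In the "feedback"
# series, any day worse than -2% makes forced sellers carry 80% of that loss
# into the next day -- a positive feedback loop that can cascade.
rng = np.random.default_rng(7)
n_days = 250 * 40                        # roughly forty years of trading days
shocks = rng.normal(0.0, 0.01, n_days)   # 1% daily standard deviation

plain = shocks.copy()                    # no feedback: returns are just the shocks

cascade = np.zeros(n_days)
cascade[0] = shocks[0]
for t in range(1, n_days):
    carryover = 0.8 * cascade[t - 1] if cascade[t - 1] < -0.02 else 0.0
    cascade[t] = shocks[t] + carryover

for name, r in [("no feedback", plain), ("with feedback", cascade)]:
    worst_day = r.min()
    worst_week = min(r[t:t + 5].sum() for t in range(n_days - 5))
    print(f"{name:>13}: worst day {worst_day:+.1%}, worst 5-day run {worst_week:+.1%}")
```

The input noise is identical in both series; the cascade rule alone produces single days and five-day runs far worse than anything the no-feedback series shows.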

This view also helps explain why markets and policy resist change and stay in narrow, stable ranges for long periods. Balancing feedback loops tend to kick in before long, producing resistance and inertia and cycles and pendulums, and making “this time it’s different” claims frequently a ticket to poverty. Delays and time effects and variable lags and cumulative effects matter profoundly in a way that simply doesn’t show up in linear models. Differential survival means evolutionary selection kicks in, changing behavior.

How can you make money if you can’t predict the future in complex systems, then? It’s clearly possible. Marks is a dazzlingly successful investor whose core belief is to be deeply skeptical of people who think they can make accurate predictions.

Awareness of the limited extent of our foreknowledge is an essential component of my approach to investing. I’m firmly convinced that (a) it’s hard to know what the macro future holds and (b) few people possess superior knowledge of these matters that can regularly be turned into an investing advantage.

You might be able to know more than others about a single company or security, he says. And you can figure out where we might be in a particular cycle or pendulum. But broad economic forecasts and predictions are essentially worthless. Most forecasting is just extrapolation of recent data or events, and so tends to miss the big changes that would actually help people make money.

One key question investors have to answer is whether they view the future as knowable or unknowable. Investors who feel they know what the future holds will act assertively: making directional bets, concentrating positions, levering holdings and counting on future growth—in other words, doing things that in the absence of foreknowledge would increase risk. On the other hand, those who feel they don’t know what the future holds will act quite differently: diversifying, hedging, levering less (or not at all), emphasizing value today over growth tomorrow, staying high in the capital structure, and generally girding for a variety of possible outcomes.

In other words, a belief in prediction tends to go with making overconfident, aggressive big bets, sometimes getting lucky – and then flaming out. The answer? Above all, control your risks, Marks says. Markets are a “loser’s game”, like amateur tennis. It’s extremely hard to hit winners. Instead, avoid hitting losers. Make sure you have defense as well as offense.

Offense is easy to define. It’s the adoption of aggressive tactics and elevated risk in the pursuit of above-average gains. But what’s defense? Rather than doing the right thing, the defensive investor’s main emphasis is on not doing the wrong thing.

Thinking about what can go wrong is not purely negative, however. It’s not a matter of being obsessed with biases. Instead, it’s a way to be more creative and agile in adapting to change. If markets are complex systems, the key, as Herbert Simon puts it, is not prediction but “robust adaptive procedures.”

To stress the point again – people don’t intuitively understand systems. And many of our analytical tools and standard theories grasp them even less. But that is how markets and policy actually work.

 

August 9, 2014 | Decisions, Human Error, Investment, Market Behavior, Perception, Risk Management

System Blindness

Good news: GDP grew at 4% and the winter surprise has faded. As usual, there is endless analysis available for free. These days we swim in a bottomless ocean of economic commentary.

Let’s turn to something that might give people an edge in making decisions instead. One of the main reasons people and companies get into trouble is that they don’t think in terms of systems. I noted that one major source of this approach was Jay Forrester’s work at MIT beginning in the 1960s. His successor at MIT is John Sterman, who calls this failure system blindness.

Sterman documents the multiple problems decision-makers have dealing with dynamic complexity in his best-known shorter paper. We haven’t evolved to deal with complex systems, he says. Instead, we are much quicker to deal with things that have obvious, direct, immediate, local causes. (See bear. Run.)

So people have inherent deep difficulties with feedback loops, for example.

Like organisms, social systems contain intricate networks of feedback processes, both self-reinforcing (positive) and self-correcting (negative) loops. However, studies show that people recognize few feedbacks; rather, people usually think in short, causal chains, tend to assume each effect has a single cause, and often cease their search for explanations when the first sufficient cause is found. Failure to focus on feedback in policy design has critical consequences.

As a result, policies and decisions often become actively counterproductive, producing unexpected side-effects and counter-reactions. Such ‘policy resistance’ means major decisions frequently have the opposite effect to that intended (such as building major roads but producing even more congestion, or suppressing forest fires and producing much bigger blazes).
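
The road-building case can be sketched in a few lines (purely hypothetical numbers). Drivers tolerate a certain level of congestion, so a capacity expansion lowers congestion only until latent demand grows back to fill it – a balancing loop that defeats the policy.

```python
# Toy illustration of policy resistance; all numbers are hypothetical.
# Drivers tolerate roughly 90% road utilization. A big capacity expansion in
# year 4 cuts congestion at first, but lower congestion attracts more trips
# until congestion drifts back toward the tolerated level.
capacity, trips, tolerated = 100.0, 95.0, 0.90
for year in range(1, 13):
    if year == 4:
        capacity *= 1.5                               # the road-building program
    congestion = trips / capacity                     # utilization this year
    trips *= 1 + 0.5 * (tolerated - congestion)       # latent demand responds
    print(f"year {year:2d}: capacity {capacity:5.0f}  trips {trips:5.1f}  congestion {congestion:.2f}")
```

By the final year, capacity is 50% higher, trips have grown to match, and congestion has drifted back toward the tolerated level.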

People also have serious problems understanding time and delays, which often leads to oversteer at the wrong times and wild oscillation and swings.  They have difficulty with short-term actions that produce long-term effects. They assume causes must be proportionate to effects. (Think of the long and variable lags in monetary policy, and the tendency to oversteer.)

Decision-makers have problems with stocks and flows. In essence, a stock is the water in the bath. A flow is the water running from the tap.

People have poor intuitive understanding of the process of accumulation. Most people assume that system inputs and outputs are correlated (e.g., the higher the federal budget deficit, the greater the national debt will be). However, stocks integrate (accumulate) their net inflows. A stock rises even as its net inflow falls, as long as the net inflow is positive: the national debt rises even as the deficit falls—debt falls only when the government runs a surplus; the number of people living with HIV continues to rise even as incidence falls—prevalence falls only when infection falls below mortality. Poor understanding of accumulation has significant consequences for public health and economic welfare.
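
Sterman’s debt example takes only a few lines of arithmetic to see. A minimal sketch with made-up figures: the deficit (the flow) is cut in half every year, yet the debt (the stock) keeps rising the whole time, because a stock accumulates any positive inflow.

```python
# Minimal sketch with made-up figures: the flow (deficit) shrinks every year,
# but the stock (debt) keeps rising as long as the flow stays positive.
debt, deficit = 100.0, 10.0
for year in range(1, 9):
    debt += deficit                      # the stock integrates the flow
    print(f"year {year}: deficit {deficit:5.2f} -> debt {debt:6.2f}")
    deficit *= 0.5                       # the inflow is cut in half each year
```

The debt never falls; it only rises more slowly. It would start to fall only if the flow turned negative – a surplus.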

People also fail to learn from experience, especially in groups. They see what they believe, and believe what they see. They use defensive routines to save face, and they avoid testing their beliefs, especially in public.

Note that these are not the problems that are getting prime attention in behavioral economics, let alone mainstream economics. Why don’t system ideas get more attention? Sterman notes that, more generally, people often fail to learn from hard evidence.

More than 2 and one-half centuries passed from the first demonstration that citrus fruits prevent scurvy until citrus use was mandated in the British merchant marine, despite the importance of the problem and unambiguous evidence supplied by controlled experiments.

For me, one additional major reason might be that we are so used to the analytic approach: break things down into their component parts and examine each separately. That has worked extremely well for decades in science and business, when applied to things which don’t change and adapt all the time. Systems thinking, by contrast, is about looking at the interaction between elements. It is synthesis, “joining the dots”, putting the pieces together and seeing how they work and interrelate in practice.

And that might be an additional explanation for the hedgehog versus fox distinction. You recall the fundamentally important research that finds that foxes, “who know many things”, outperform hedgehogs “who know one big thing”  at prediction and decision. Hedgehogs are drawn more to analysis and universal explanation; foxes are drawn more to synthesis and observation.

As a result, hedgehogs have much greater difficulty with systems thinking. Foxes are more likely to recognize and deal with system effects. If you confront a complex adaptive system (like the economy or financial markets), that gives foxes an edge.

 

 

Deeper differences on Ukraine

This is an important observation from Timothy Garton Ash the other day on Ukraine:

Russia’s strongman garners tacit support, and even some quiet plaudits, from some of the world’s most important emerging powers, starting with China and India.

What explains that?

What the west faces here is the uncoiling of two giant springs. One, which has been extensively commented upon, is the coiled spring of Mother Russia’s resentment at the way her empire has shrunk over the past 25 years – all the way back from the heart of Germany to the heart of Kievan Rus.

The other is the coiled spring of resentment at centuries of western colonial domination. This takes very different forms in different Brics countries and members of the G20. They certainly don’t all have China’s monolithic, relentless narrative of national humiliation since Britain’s opium wars. But one way or another, they do share a strong and prickly concern for their own sovereignty, a resistance to North Americans and Europeans telling them what is good for them, and a certain instinctive glee, or schadenfreude, at seeing Uncle Sam (not to mention little John Bull) being poked in the eye by that pugnacious Russian. Viva Putinismo!

This is quite a different matter from accusations that Obama or the EU have lost credibility. Western elites often fail to grasp that other powers take a very different view of events, regardless of our own current actions, and may work to counteract some of our preferred legal and political values. Oh sure, you might say, we know that, it’s obvious in principle … except the evidence shows we frequently forget it.

For example, consider Merkel’s assertion that Putin has “lost his grip on reality.” It’s not that we misunderstand his view or perceptions or motivations, you see; he’s clearly just gone nuts. Loo-la. With tanks. Or has he? He is particularly hard to understand for many EU elites, whose entire project for three generations has been to dilute or pool sovereignty.

There are two lessons: 1) people actually find it extremely hard to see events from different viewpoints, all the more so when they have prior commitments, or confront evidence that their own policy hasn’t worked, or when important values and taboos are at stake. There are countless examples of foreign policy crises worsened by miscommunication and wrong assumptions. It happens to the most brilliant statesmen and accomplished leaders. You have to take this into account in crises. Indeed, it’s no different from central bank officials trying to understand bond traders, and vice versa.

To take just a few pieces of evidence, fifty years of work in social psychology since Leon Festinger has shown that people have a remarkable ability to ignore information which is dissonant with their current view. Philip Tetlock’s more recent work also shows that the most prominent experts are most often hedgehog thinkers who know “one thing” and one perspective – and that the track record of most country experts and intelligence agencies (and markets) on foreign crises is woeful.

It’s not that alternative views are necessarily justified, or right, or moral: but ignoring their existence rarely helps. The most difficult thing to get right in crises is usually not the facts on the ground so much as the prior facts in your head.

2) The international system is just that: a system, with both balancing and amplifying feedback loops. But the human mind has a natural tendency to want to see things in a straightforward, linear way. I’ll come back to issues of system dynamics soon, as another major alternative to the simplistic ideas about decision-making that regularly lead people towards failure.

 

Prediction is stupid. Instead, look for ways to adapt faster

I was talking the other day about Herbert Simon, easily the most important thinker on decisions of the last century. One of his most important books, The Sciences of the Artificial, talks about the differences between natural science and the approach needed for dealing with the artificial, i.e., man-made world.

There are a hundred important points in what he says in this one book, but I’ll stick to just a few. First, if you face an uncertain situation, it is much better to find ways to adapt than to predict or optimize in the classic economic way.

 Although the presence of uncertainty does not make intelligent choice impossible, it places a premium on robust adaptive procedures instead of optimizing strategies that work well only when finely tuned to precisely known environments.   A system can generally be steered more accurately if it uses feedforward, based on prediction of the future, in combination with feedback, to correct the errors of the past. However, forming expectations to deal with uncertainty creates its own problems. Feedforward can have unfortunate destabilizing effects, for a system can overreact to its predictions and go into unstable oscillations. Feedforward in markets can become especially destabilizing when each actor tries to anticipate the actions of the others (and hence their expectations).

Of course, that describes the situation of most central bank decisions. Everyone knows the feedforward in this case – economic forecasts – is highly unreliable. Overreacting to those forecasts can produce “unstable oscillations.” Consider the history of the American economy in the last fifteen years. It means the standard inflation-forecast rule prevalent in central banks is deeply flawed, but in interesting ways.
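
Simon’s warning about feedforward is easy to reproduce in a toy control loop (not an economic model; the gain, delay and starting values below are all invented). Both rules steer toward a target with the same gain and a one-period delay before actions take effect. The pure feedback rule wobbles but settles; the rule that acts on a naive extrapolated forecast overreacts to its own prediction and oscillates ever more wildly.

```python
# Toy control loop, not an economic model: gain, delay and starting values
# are invented. Each period the policymaker chooses an action, which only
# takes effect one period later.
def simulate(use_feedforward, gain=0.8, steps=40, target=0.0):
    x = [1.0, 1.0]      # state starts one unit above the target
    u = [0.0]           # actions already decided (none have taken effect yet)
    for t in range(1, steps):
        if use_feedforward:
            forecast = x[t] + (x[t] - x[t - 1])       # extrapolate the recent trend
            u.append(gain * (target - forecast))      # react to the predicted gap
        else:
            u.append(gain * (target - x[t]))          # react to the observed gap
        x.append(x[t] + u[t - 1])                     # one-period delay before actions bite
    return x

feedback_path = simulate(use_feedforward=False)
feedforward_path = simulate(use_feedforward=True)
print("feedback only, last 5 values:   ", [round(v, 2) for v in feedback_path[-5:]])
print("with feedforward, last 5 values:", [round(v, 2) for v in feedforward_path[-5:]])
```

With the same gain, the feedback-only path ends close to the target, while the feedforward path ends in large and growing swings – exactly the “unstable oscillations” Simon describes.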

This also has major implications for how my company – Alucidate – looks for value. Think of things in evolutionary terms. The most successful surviving firms and investors are not those that predict the future better, any more than natural evolution proceeds by having squirrels forecast nut production in 2016 or 2100. It would be nice if people really could predict successfully, but just about every scrap of evidence we have suggests they can’t.  You can ignore that, or creatively deal with it. (Or stockpile a lot of nuts.)

Instead, what decision-makers can do is find ways to adapt faster – to find “robust adaptive procedures.” You recognize changes in the environment and work out ways to take advantage of them. It is a question of robust search capacity (and variation, and selection). The fundamental challenge is recognition and awareness and resilience, not prediction. If you adapt, you survive and thrive.

Simon also thinks in terms of nonlinear systems, with feedback loops, rather than a standard linear economic model. The neoclassical equilibrium model is far too simplistic.

In sum, our present understanding of the dynamics of real economic systems is grossly deficient. We are especially lacking in empirical information about how economic actors, with their bounded rationality, form expectations about the future and how they use such expectations in planning their own behavior. … In face of the current gaps in our empirical knowledge there is little empirical basis for choosing among the competing models currently proposed by economics to account for business cycles, and consequently, little rational basis for choosing among the competing policy recommendations that flow from those models. (from the 3rd edition, 1996)

(I’ll pursue more recent work on this theme later.)

What’s worse, 80% of American economic activity takes place within organizations, he says, so looking at the economy simply in terms of markets is inadequate. You have to understand how organizations make decisions as well, and when and why the coordination abilities of organizations mop up activity from open markets. One critical factor is “identification”, or people’s loyalty to organizational aims.

It all means that thinking in terms of reactions and feedback and the pace of learning in organizations is essential to understand both the economy and how economic decisions really get made. But economic research in general does not do that.

Instead of putting analysis of economic data and forecasts first, we need to focus on decision-making in practice.

 

Figuring out policy failure

Here’s a very nice post on policy failure by Megan McArdle. She interviewed a political scientist at Johns Hopkins who has developed a class on policy disasters:

Megan McArdle: So, first question: What is this class about?

Steve Teles: The class is about large-scale, negative policy consequences — that is, what explains why policies sometimes work very, very differently than their authors intended. [...] I thought that reading about big policy mistakes of the past might help these master's students develop what my old friend Marty Levin from Brandeis University calls a “dirty mind” — an ability to see around corners, which is what sometimes economist-trained people can't, especially about the operation of human organizations.

“An ability to see around corners” is a wonderful way to put it. I was originally trained as an economist myself, of course, but to make sense of most policy problems these days you have to be able to think in more than one gear. Hedgehogs, who want one internally consistent, deductive, elegant approach, tend to end up making serious mistakes in actual policy, as the research on expert judgment shows.

 

February 6, 2014 | Decisions, Foxes and Hedgehogs, Perception

Lack of Fall-back Plans and Inertia

One of the most important traps that afflicts decision makers is a failure to generate enough alternatives. People often see things in purely binary terms – do X or don’t do X – and ignore other options which may solve the problem much better. They fail to look for alternative perspectives.

This is one kind of knock-on effect from the tendency of policymakers to ignore trade-offs that I mentioned in this post on intelligence failure last week. To continue the point, one consequence of ignoring trade-offs is that leaders frequently fail to develop any fallback options. And that can lead to trillion-dollar catastrophes.

The same factors that lead decision makers to underestimate trade-offs make them reluctant to develop fallback plans and to resist information that their policy is failing. The latter more than the former causes conflicts with intelligence, although the two are closely linked. There are several reasons why leaders are reluctant to develop fallback plans. It is hard enough to develop one policy, and the burden of thinking through a second is often simply too great. Probably more important, if others learn of the existence of Plan B, they may give less support to Plan A. .. The most obvious and consequential recent case of a lack of Plan B is Iraq. (from Why Intelligence Fails)

The need to sell a chosen option often blinds people to alternatives, and develops a life of its own. Policy choices pick up their own inertia and get steadily harder to change.

Leaders tend to stay with their first choice for as long as possible. Lord Salisbury, the famous British statesman of the end of the nineteenth century, noted that “the commonest error in politics is sticking to the carcasses of dead policies.” Leaders are heavily invested in their policies. To change their basic objectives will be to incur very high costs, including, in some cases, losing their offices if not their lives. Indeed the resistance to seeing that a policy is failing is roughly proportional to the costs that are expected if it does.

Decision problems are pervasive, and you can’t really make sense of events unless you are alert to them.

February 2, 2014 | Decisions, Irrational Consistency, Perception, Security, Uncategorized

Why Intelligence Fails

The latest Snowden revelations tell us even the Angry Birds have been enlisted by the US and UK spy agencies. The NSA and GCHQ are amassing too much information to even store, let alone sift through.

But the paradox is, as we’ve seen before, intelligence agencies typically draw the wrong conclusions from even near-perfect data.  More information does not necessarily mean better decisions.  There has been little evidence of tangible results from all the intercepts. The Angry Birds just go splat.

So why does intelligence fail? Part of the answer is, as I’ve argued recently, that people have a naive belief in pristine and clear “primary sources.” I’ve talked to very senior primary sources for years, and it takes skill and judgment not to be misled by even sincere discussions. Even the best source information and intercepts are often ambiguous, contradictory, noisy and sometimes misleading. People say one thing and then act completely differently under pressure. You have to be able to judge who and what to trust, and how they may change their views.

Even more importantly, decision-makers want to believe in a neater, simpler world. Robert Jervis is a Columbia professor who has done extensive research into Why Intelligence Fails, including an examination of past CIA failures on Iran and Iraq.

Policymakers say they need and want good intelligence. They do need it, but often they do not like it. They are also prone to believe that when intelligence is not out to get them, it is incompetent. Richard Nixon was only the most vocal of presidents in wondering how “those clowns out at Langley” could misunderstand so much of the world and cause his administration so much trouble.

The intelligence agencies make mistakes. But much of the fault also lies with the consumers of intelligence. Leaders expect intelligence to provide support for decisions they have already made.

The different needs and perspectives of decision makers and intelligence officials guarantee conflict between them. For both political and psychological reasons, political leaders have to oversell their policies, especially in domestic systems in which power is decentralized, and this will produce pressures on and distortions of intelligence. It is, then, not surprising that intelligence officials, especially those at the working level, tend to see political leaders as unscrupulous and careless, if not intellectually deficient, and that leaders see their intelligence services as timid, unreliable, and often out to get them. Although it may be presumptuous for CIA to have chiseled in its lobby “And ye shall know the truth and the truth will make you free,” it can at least claim this as its objective. No decision maker could do so, as the more honest of them realize.

Good intelligence often produces points which conflict with what policymakers want to hear.

Decision makers need confidence and political support, and honest intelligence unfortunately often diminishes rather than increases these goods by pointing to ambiguities, uncertainties, and the costs and risks of policies. In many cases, there is a conflict between what intelligence at its best can produce and what decision makers seek and need.

Policymakers, as we’ve seen, have an inherent tendency to ignore trade-offs.

For reasons of both psychology and politics, decision makers want not only to minimize actual value trade-offs but to minimize their own perception of them. Leaders talk about how they make hard decisions all the time, but like the rest of us, they prefer easy ones and will try to convince themselves and others that a particular decision is in fact not so hard. Maximizing political support for a policy means arguing that it meets many goals, is supported by many considerations, and has few costs. Decision makers, then, want to portray the world as one in which their policy is superior to the alternatives on many independent dimensions.

So this is the reality: even if you have the ability to monitor almost every phone conversation in the world, you will still likely be tripped up by the old problems of misperception and confirmation. People hear what they want to hear. Confirmation bias interferes so deeply with decision-making that you are risking serious trouble if you do not take specific steps to deal with it.

January 28, 2014 | Assumptions, Confirmation bias, Decisions, Irrational Consistency, Perception