
Irrational Consistency

Deeper differences on Ukraine

This is an important observation from Timothy Garton Ash the other day on Ukraine:

Russia’s strongman garners tacit support, and even some quiet plaudits, from some of the world’s most important emerging powers, starting with China and India.

What explains that?

What the west faces here is the uncoiling of two giant springs. One, which has been extensively commented upon, is the coiled spring of Mother Russia’s resentment at the way her empire has shrunk over the past 25 years – all the way back from the heart of Germany to the heart of Kievan Rus.

The other is the coiled spring of resentment at centuries of western colonial domination. This takes very different forms in different Brics countries and members of the G20. They certainly don’t all have China’s monolithic, relentless narrative of national humiliation since Britain’s opium wars. But one way or another, they do share a strong and prickly concern for their own sovereignty, a resistance to North Americans and Europeans telling them what is good for them, and a certain instinctive glee, or schadenfreude, at seeing Uncle Sam (not to mention little John Bull) being poked in the eye by that pugnacious Russian. Viva Putinismo!

This is a quite different matter from accusations that Obama or the EU have lost credibility. Western elites often fail to grasp that other powers take a very different view of events, regardless of our current actions, and may work to counteract some of our preferred legal and political values. Oh sure, you might say, we know that, it’s obvious in principle … except the evidence shows we frequently forget it.

For example, consider Merkel’s assertion that Putin has “lost his grip on reality.” It’s not that we misunderstand his views or perceptions or motivations, you see; he’s clearly just gone nuts. Loo-la. With tanks. Or has he? Putin’s perspective is particularly hard to grasp for many EU elites, whose entire project for three generations has been to dilute or pool sovereignty.

There are two lessons here: 1) people actually find it extremely hard to see events from different viewpoints, all the more so when they have prior commitments, or confront evidence their own policy hasn’t worked, or when important values and taboos are at stake. There are countless examples of foreign policy crises worsened by miscommunication and wrong assumptions. It happens to the most brilliant statesmen and accomplished leaders. You have to take this into account in crises. Indeed, it’s no different from central bank officials trying to understand bond traders, and vice versa.

To take just a few pieces of evidence, fifty years of work in social psychology since Leon Festinger has shown that people have a remarkable ability to ignore information which is dissonant with their current view. Philip Tetlock’s more recent work also shows that the most prominent experts are most often hedgehog thinkers who know “one thing” and one perspective – and that the track record of most country experts and intelligence agencies (and markets) on foreign crises is woeful.

It’s not that alternative views are necessarily justified, or right, or moral: but ignoring their existence rarely helps. The most difficult thing to get right in crises is usually not the facts on the ground so much as the prior facts in your head.

2) The international system is just that: a system, with both balancing and amplifying feedback loops. But the human mind has a natural tendency to want to see things in a straightforward, linear way. I’ll come back to issues of system dynamics soon, as another major alternative to the simplistic ideas about decision-making that regularly lead people towards failure.
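
To make the distinction concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the post): a reinforcing loop compounds on itself, while a balancing loop keeps pulling a quantity back toward a target. Real international systems combine both, which is exactly why straightforward linear thinking misleads.

```python
# Toy sketch of the two basic feedback loop types (illustrative only).

def reinforcing_loop(level, gain, steps):
    """Amplifying loop: each period adds gain * current level, so it compounds."""
    path = [level]
    for _ in range(steps):
        level = level + gain * level
        path.append(level)
    return path

def balancing_loop(level, target, adjustment, steps):
    """Balancing loop: each period closes a fraction of the gap to a target."""
    path = [level]
    for _ in range(steps):
        level = level + adjustment * (target - level)
        path.append(level)
    return path

print(reinforcing_loop(1.0, 0.1, 5))      # grows geometrically
print(balancing_loop(1.0, 0.0, 0.5, 5))   # settles back toward the target
```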

 

Lack of Fall-back Plans and Inertia

One of the most important traps that afflicts decision makers is a failure to generate enough alternatives. People often see things in purely binary terms – do X or don’t do X – and ignore other options which may solve the problem much better. They fail to look for alternative perspectives.

This is one kind of knock-on effect from the tendency of policymakers to ignore trade-offs that I mentioned in this post on intelligence failure last week. To continue the point, one consequence of ignoring trade-offs is that leaders frequently fail to develop any fallback options. And that can lead to trillion-dollar catastrophes.

The same factors that lead decision makers to underestimate trade-offs make them reluctant to develop fallback plans and to resist information that their policy is failing. The latter more than the former causes conflicts with intelligence, although the two are closely linked. There are several reasons why leaders are reluctant to develop fallback plans. It is hard enough to develop one policy, and the burden of thinking through a second is often simply too great. Probably more important, if others learn of the existence of Plan B, they may give less support to Plan A. … The most obvious and consequential recent case of a lack of Plan B is Iraq. (from Why Intelligence Fails)

The need to sell a chosen option often blinds people to alternatives, and develops a life of its own. Policy choices pick up their own inertia and get steadily harder to change.

Leaders tend to stay with their first choice for as long as possible. Lord Salisbury, the famous British statesman of the end of the nineteenth century, noted that “the commonest error in politics is sticking to the carcasses of dead policies.” Leaders are heavily invested in their policies. To change their basic objectives will be to incur very high costs, including, in some cases, losing their offices if not their lives. Indeed the resistance to seeing that a policy is failing is roughly proportional to the costs that are expected if it does.

Decision problems are pervasive, and you can’t really make sense of events unless you are alert to them.

February 2, 2014 | Decisions, Irrational Consistency, Perception, Security

Why Intelligence Fails

The latest Snowden revelations tell us even the Angry Birds have been enlisted by the US and UK spy agencies. The NSA and GCHQ are amassing too much information to even store, let alone sift through.

But the paradox is, as we’ve seen before, intelligence agencies typically draw the wrong conclusions from even near-perfect data.  More information does not necessarily mean better decisions.  There has been little evidence of tangible results from all the intercepts. The Angry Birds just go splat.

So why does intelligence fail? Part of the answer is, as I’ve argued recently, that people have a naive belief in pristine and clear “primary sources.” I’ve talked to very senior primary sources for years, and it takes skill and judgment not to be misled by even sincere discussions. Even the best source information and intercepts are often ambiguous, contradictory, noisy and sometimes misleading. People say one thing and then act completely differently under pressure. You have to be able to judge who and what to trust, and how they may change their views.

Even more importantly, decision-makers want to believe in a neater, simpler world.  Robert Jervis is a Columbia professor who has done extensive research into Why Intelligence Fails, including examination of past CIA failures on Iran and Iraq.

Policymakers say they need and want good intelligence. They do need it, but often they do not like it. They are also prone to believe that when intelligence is not out to get them, it is incompetent. Richard Nixon was only the most vocal of presidents in wondering how “those clowns out at Langley” could misunderstand so much of the world and cause his administration so much trouble.

The intelligence agencies make mistakes. But much of the fault also lies with the consumers of intelligence. Leaders expect intelligence to provide support for decisions they have already made.

The different needs and perspectives of decision makers and intelligence officials guarantee conflict between them. For both political and psychological reasons, political leaders have to oversell their policies, especially in domestic systems in which power is decentralized, and this will produce pressures on and distortions of intelligence. It is, then, not surprising that intelligence officials, especially those at the working level, tend to see political leaders as unscrupulous and careless, if not intellectually deficient, and that leaders see their intelligence services as timid, unreliable, and often out to get them. Although it may be presumptuous for CIA to have chiseled in its lobby “And ye shall know the truth and the truth will make you free,” it can at least claim this as its objective. No decision maker could do so, as the more honest of them realize.

Good intelligence often produces points which conflict with what policymakers want to hear.

Decision makers need confidence and political support, and honest intelligence unfortunately often diminishes rather than increases these goods by pointing to ambiguities, uncertainties, and the costs and risks of policies. In many cases, there is a conflict between what intelligence at its best can produce and what decision makers seek and need.

Policymakers, as we’ve seen, have an inherent tendency to ignore trade-offs.

For reasons of both psychology and politics, decision makers want not only to minimize actual value trade-offs but to minimize their own perception of them. Leaders talk about how they make hard decisions all the time, but like the rest of us, they prefer easy ones and will try to convince themselves and others that a particular decision is in fact not so hard. Maximizing political support for a policy means arguing that it meets many goals, is supported by many considerations, and has few costs. Decision makers, then, want to portray the world as one in which their policy is superior to the alternatives on many independent dimensions.

So this is the reality: even if you have the ability to monitor almost every phone conversation in the world, you will still likely be tripped up by the old problems of misperception and confirmation. People hear what they want to hear. Confirmation bias interferes so deeply with decision-making that you are risking serious trouble if you do not take specific steps to deal with it.

January 28, 2014 | Assumptions, Confirmation bias, Decisions, Irrational Consistency, Perception

Decisions fail because most experts are not adaptive

I’ve been talking about adaptiveness in decision-making recently. Does this really matter in practice in finance, business or economics?

Yes, because so many expert predictions are so terrible, and people rely on them to make big decisions.

We’ve looked at Philip Tetlock’s work several times before, because it is such a massive problem for the research and asset management business, as well as policymakers. Tetlock’s conclusion is that “foxes” – people who take many angles and approaches to a problem – are better at prediction than “hedgehogs”, who relate everything to one single approach or view.

Nate Silver actually describes the issue more vividly than Tetlock himself, in his book The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t.

Tetlock actually dared to look at the outcome of expert predictions. According to Silver,

Tetlock’s conclusion was damning. The experts in his survey—regardless of their occupation, experience, or subfield—had done barely any better than random chance, and they had done worse than even rudimentary statistical methods at predicting future political events. They were grossly overconfident and terrible at calculating probabilities: about 15 percent of events that they claimed had no chance of occurring in fact happened, while about 25 percent of those that they said were absolutely sure things in fact failed to occur. It didn’t matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board.
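
For readers who want to see what a claim like that rests on, here is a minimal sketch of a calibration check in Python. The data below is invented purely to mirror the percentages in the quote; it is not Tetlock’s dataset.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, outcome) pairs and compare each stated
    probability with the observed frequency of the predicted events."""
    buckets = defaultdict(list)
    for stated_p, happened in forecasts:
        buckets[stated_p].append(1 if happened else 0)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

# Invented toy data echoing the quote: "impossible" calls (p=0.0) that
# happened about 15% of the time, "certain" calls (p=1.0) that failed 25%.
toy_forecasts = ([(0.0, False)] * 17 + [(0.0, True)] * 3
                 + [(1.0, True)] * 15 + [(1.0, False)] * 5)
print(calibration_table(toy_forecasts))   # {0.0: 0.15, 1.0: 0.75}
```

A well-calibrated forecaster’s observed frequencies would line up with the stated probabilities; the gap is the overconfidence Tetlock measured.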

This is what your investment performance, or business strategy, or career is most likely resting on. This is the hard truth which underlies what you read about the state of the US labor market, or US attitudes to Iran, or whether the ECB will cut rates. This is what lies behind the forecasts you read or the talking heads on CNBC you watch. There has to be a better way.

Worse still, the most prominent experts and talking heads were more likely to get it wrong. Take academic professors, for example. You get ahead in academia because of citations, rather than getting things right. You master a tiny subset of a field, rather than different interdisciplinary perspectives. Sweeping theories, methodological consistency and defending your turf count, and once you are tenured there is little or no pressure to change your mind or repudiate previous work. It is a hedgehog zoo.

The only way to have a better chance of getting things right is to be more like a fox. But there are very strong pressures in many areas to entrench, hedgehog-like, with a single point of view. They are usually areas where the penalty for making mistakes is low.

 

“Fox” means more adaptive

It should now be clear that all this is just another way of saying that foxes are more likely to have the right degree of adaptiveness. They are more likely to adjust their views in response to incoming evidence than hedgehogs. Hedgehogs are more likely to dismiss evidence that does not fit with their preconceptions.

I’ve argued that the single worst blindspot is confirmation bias. So one of the main reasons hedgehogs do worse is that they are more likely to fall into that trap – seeing what they want or expect to see, because it confirms or fits with their big theory – as well as into overconfidence, believing they know more than they do. They are less likely to pay attention to anomalies. They are more likely to show irrational consistency.

Hedgehogs can often do better in static situations, where a deep, consistent, single view can pay dividends. But they struggle to adapt to situations where there is a lot of change, or where mistakes are deadly. Because hedgehogs don’t adapt, they lose out over time to foxes.

The trouble is you are probably surrounded by hedgehogs.

 

Overkill in policy decisions

I've been talking about how many failed decisions originate in denying trade-offs.

In fact, people often go to the other extreme and believe that their views serve all objectives at once. Given that we are just getting over recent Congressional dysfunction, a relevant example is one of the most powerful US Senators of the twentieth century, Arthur Vandenberg (a Republican and former journalist).

According to Robert Jervis, the Columbia expert on policy misperception,

When a person believes that a policy contributes to one value, he is likely to believe it also contributes to several other values, even though there is no reason why the world should be constructed in such a neat and helpful manner. …

In opposing a proposal, many people fit Dean Acheson's description of Arthur Vandenberg's characteristic stand: “He declared the end unattainable, the means harebrained, and the cost staggering.” Belief systems thus often display overkill. (From Perception and Misperception in International Politics)

It is, of course, possible for a proposal to be harebrained and cheap. This tendency is one reason why policy sometimes moves from one extreme to another. People often adjust entire clusters of views instead of adjusting gradually, because internal consistency matters more to them than realism or accuracy.

Trade-offs can usually only be handled successfully if you have a way to measure the value of, and preferences over, outcomes, as well as account for risks. Central banks have very sophisticated models to produce forecasts and what-if scenarios. There is also a literature on optimal policy in relation to some welfare function.
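
For concreteness, the standard textbook device in that literature is a quadratic loss (welfare) function. This is the generic version, not a claim about any particular central bank's actual objective:

$$ L = (\pi - \pi^*)^2 + \lambda\,(y - y^*)^2 $$

Here π is inflation, y is output, the starred values are the targets, and λ is the relative weight on the output objective. The trade-off only becomes explicit once you commit to a value of λ – and in practice that commitment is exactly the step that tends to get skipped.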

But in practice “risk management” around an objective, or deciding how fast to try to meet an objective, tends to be based on seat-of-the-pants intuition. I once asked one of the most brilliant decision-makers at one of the most technically sophisticated central banks, which devoted weeks of intensive effort and thousands of person-hours to its forecast, how they decided on the balance of risks in the policy statement.

They just talk in the final meeting.

 

October 24, 2013 | Confirmation bias, Irrational Consistency, Monetary Policy, Perception

How blindness to trade-offs causes policy breakdowns

One of the most important blind spots in policymaking is refusing to recognize trade-offs between different desirable objectives and approaches. Psychologists have found such denial is pervasive behavior in people, as I noted recently here.

In fact, it is a major reason communication of policy objectives is so hard in monetary policy. And as a result it is also one reason why markets and policymakers frequently misunderstand each other, with major consequences for bonds, equities and the world economy.

Let's take the surprising decision by the Fed not to taper on September 18th.

Many academically-inclined policymakers advocate singular, clear objectives. And the market likes that because it makes policy more predictable. So everything appeared set for an expected Fed taper at that meeting.

Monetary policymakers are frequently suspicious of trade-offs. Much of the monetary discussion of the 1990s was about denying there was any short-run trade-off between inflation and unemployment (i.e. the Phillips curve), for example.

Indeed, central bankers typically think that for each policy objective, you need a separate policy instrument. You can't hit two targets with one arrow. So if you have only one main instrument, the interest rate, you can only have one objective, such as inflation. (This dates back to work done by Jan Tinbergen. In practice, central banks have several other instruments as well, like reserve requirements or regulatory restrictions, although they often prefer not to use them.) This is one reason inflation targeting became so popular over the last twenty years.
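
The Tinbergen logic is easy to see in a minimal linear sketch (my illustration, not something from the post). Suppose the two targets each respond linearly to the single instrument i:

$$ \pi = a_1 i + b_1, \qquad y = a_2 i + b_2 $$

Hitting the inflation target exactly requires i = -b_1/a_1, while hitting the output target requires i = -b_2/a_2. Except by coincidence those two values differ, so one instrument cannot hit both targets at once; you need either a second instrument or an explicit trade-off between the two.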

But in practice, surprises happen frequently. And then the need for trade-offs suddenly reasserts itself. In practice, the Fed can't ignore different objectives, whatever it may say – and then it confuses the market.

In fact, monetary policy is beset with trade-offs, often hidden beneath the surface. One trade-off is between rules and discretion. Another is between policies that are effective in the short term but potentially destructive in the long term. Another is between monetary stability and (in some circumstances) financial stability.

“Technical” judgments often hide trade-offs

In theory you can usually find an optimal balance between objectives, like moving along an indifference curve in microeconomics. But in practice people are more often tempted to forget or deny trade-offs instead.

In the case of central banks, that is partly because acknowledging trade-offs turns what appears to be a technical judgment for monetary experts into a political choice between different goals – one which may attract the attention of politicians, and which involves preferences and ethics and values. By contrast, in a true technical judgment there is usually a single right answer, which does not involve ethical judgments.

The desire to make the choice appear purely technical and internally consistent can lead to all sorts of pathologies in practice. One desirable objective is frequently bought at the cost of ignoring another good objective. Or (even worse) people believe that at some more fundamental or truer level there is “really” a harmony between different objectives.

“Clarity” and “rigor” and “clear communication” and “consistency” can often just mean that a policymaker is concentrating on one objective to the exclusion of other goals. Then events force them to swing their focus to the other side.

As a result, strenuous claims about internal consistency are actually often a serious warning sign of expert misjudgment, because they frequently signify a denial of trade-offs or multiple objectives.

Rules shunted aside in favor of discretion

This can help explain what happened in September. The Fed focused on the advantages of rules – credibility and predictability – as it set up its communication for tapering. Then, as they sat around the table on September 17-18, faced with the specific uncertainty of mixed data, a clouded fiscal outlook, and the potential for another nasty surprise in long rates as markets reacted to tapering, they remembered the virtues of discretion, the advantages of looking at the specific context. They wanted to act on the basis of immediate considerations, not predictability and principles and rules and longer-term explanations.

There is no single right answer to most of these trade-offs. There is a whole literature on rules versus discretion. And time inconsistency means it is often advantageous to claim allegiance to one objective, and then ignore it when it is time to act and the disadvantages become clear.

That is why communication will never be fully effective. Trade-offs and dilemmas and choices are often denied or hidden. That is why policymakers will often surprise themselves, let alone the markets. The other side of a trade-off can often be ignored for a while, but it has a tendency to spring back and bite you.

 

October 18, 2013 | Central Banks, Decisions, Federal Reserve, Irrational Consistency, Monetary Policy

The dangers of consistency

I was talking in the last post about how academic economists, and policymakers who have been academic economists, often like to claim a model is necessary to ensure internal consistency of ideas. In fact consistency, or coherence, can often be a trap. A view can be completely internally consistent, and also completely wrong.

Kahneman stresses that coherence often serves as a misleading machine for jumping to conclusions. “Fast” or intuitive System 1 thinking loves coherence, and slower, reflective System 2 thinking is usually too lazy to correct it. As he says in Thinking, Fast and Slow:

System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. The measure of success for System 1 is the coherence of the story it manages to create. The amount and quality of the data on which the story is based are largely irrelevant. When information is scarce, which is a common occurrence, System 1 operates as a machine for jumping to conclusions.

[..]

In the context of attitudes, however, System 2 is more of an apologist for the emotions of System 1 than a critic of those emotions— an endorser rather than an enforcer. Its search for information and arguments is mostly constrained to information that is consistent with existing beliefs, not with an intention to examine them. An active, coherence-seeking System 1 suggests solutions to an undemanding System 2.

He says there is a systematic tendency to confuse coherence with probability.

The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary.
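
The formal point underneath Kahneman's conjunction examples is elementary probability: adding coherent-sounding detail can only make a story less likely, never more. For any two events A and B,

$$ P(A \cap B) \le \min\big(P(A),\, P(B)\big) $$

so the richer, more coherent story “A and B” can never be more probable than its sparser component A, even though it usually feels more plausible.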

Worse still, strong coherence makes people overconfident about their view as well. Far from being a guarantee of a good decision, too much consistency can actually be a warning sign of a simplistic view or a failure to adapt to the situation at hand. Many decisions involve balancing or trade-offs between opposite and contradictory interests, after all. They are necessarily sometimes internally inconsistent.

Logical contradiction is fatal if you are proving a theorem in mathematics, of course. But in decisions, trying to have it both ways or saying different things to different audiences can be a sign of skill, or at least low cunning, rather than irrationality.

August 13, 2013 | Decisions, Irrational Consistency

Summer Stretchiness: how people resist changing their view

So here we are on the brink of August, which is either a month of complete torpor in markets or one in which crises break out like thunderous summer storms. Right now, it seems calm. I’ve just got back from traveling in Europe and out west in North America, and the major issues are the same as they have been for months: when will the Fed begin tapering, and how will the market react? Will China be able to defer an economic slowdown? Will Europe slip back into crisis?

These issues are not just matters of the data. They involve how long decision-makers can defer taking action, and how much they will react (or fail to react) to events.

In an ideal world, there would be a one-to-one relationship between clear evidence and the decisions people take as a result. In practice, that usually doesn’t happen.

The most important thing I always find I need to understand when talking to senior decision makers is how they shift and update their views. Economists and central bankers sometimes call it the “reaction function”, but they typically underestimate the issues involved.
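
As a point of reference, the textbook version of a “reaction function” is something like the stylized Taylor (1993) rule sketched below. This is an illustration of the concept, not a description of how any actual committee decides.

```python
def taylor_rule(inflation, output_gap, inflation_target=2.0,
                neutral_real_rate=2.0, w_pi=0.5, w_y=0.5):
    """Stylized Taylor-rule reaction function: the policy rate implied by
    current inflation and the output gap (all figures in percent)."""
    return (neutral_real_rate + inflation
            + w_pi * (inflation - inflation_target)
            + w_y * output_gap)

print(taylor_rule(inflation=3.0, output_gap=-1.0))   # 5.0
```

The rest of this post is about why real decision makers depart from anything this mechanical, and why they do so unevenly.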

The core of the problem is that people don’t alter their views in a neat, smooth, rational way, especially when they have emotion or reputation invested in their current viewpoint.

In fact, there is often serious naivete about how people think through tough decisions. If it were simply a matter of correct analysis of the data, macro and market life would be much simpler. For one thing, central bankers usually think not so much about the raw data or information – payrolls, say – as about why events are unfolding that way. And data is usually consistent with a number of different explanations. It’s rare that a single data point, or even a trend, can’t be explained away.

Then there are the problems that come from simply communicating a point of view. Markets frequently misunderstand what the Fed and other policymakers are trying to convey, because they have different assumptions, and words can mean very different things to different groups. People misunderstand objectives, intentions and the trade-offs between conflicting goals.

An even deeper problem is that people see what they want to see, and expect to see, until the evidence against it is almost overwhelming. Decision-makers update their views in an uneven, misshapen way. They talk past each other. They point to different, contradictory evidence. They seek illusory consistency and ignore facts which do not match their views. They have blind spots, where they simply can’t see or recognize problems.

This is a matter of common sense and common experience. Just think of debating politics with someone of very partisan and opposite views (your cousin or co-worker, say). There is a slim-to-nil chance they will walk away convinced by anything you say.

Sometimes events do alter views, at least a little. The question is when and how.

I’d call this the “elasticity” of viewpoints, except it sounds a little too precise and a little too similar to price elasticity in standard economics.

So I call it the “stretchiness” of viewpoints and mindsets. It means one of the most important questions to ask about policy is “what evidence would change the decision-makers’ mind?” And fifty years of research into how people make decisions says people tend to get this wrong.
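
To see how far actual updating drifts from the ideal benchmark, here is a rough sketch (mine, and deliberately simplified): compare a textbook Bayesian revision of a belief with a “sticky” revision that only moves part of the way, as the conservatism literature suggests people do.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Ideal Bayesian update of P(hypothesis) after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

def sticky_update(prior, p_evidence_if_true, p_evidence_if_false, stickiness=0.8):
    """Only moves part of the way from the prior toward the Bayesian posterior."""
    posterior = bayes_update(prior, p_evidence_if_true, p_evidence_if_false)
    return stickiness * prior + (1 - stickiness) * posterior

prior = 0.7                  # confidence that the current policy is working
p_bad_data_if_working = 0.2  # weak data is unlikely if the policy is working
p_bad_data_if_failing = 0.8  # weak data is likely if the policy is failing

print(bayes_update(prior, p_bad_data_if_working, p_bad_data_if_failing))   # about 0.37
print(sticky_update(prior, p_bad_data_if_working, p_bad_data_if_failing))  # about 0.63
```

The interesting questions are what sets the stickiness and which evidence finally breaks through – which is what the rest of this post is about.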

It’s not always what might appear to be the most important or dramatic information. It’s more often the things that shake faith in the underlying mental model which can’t be evaded or set aside.

That, of course, means you have to be very aware of what someone else’s underlying mental model and assumptions actually are. The trouble is that doesn’t come naturally to most people. Research shows people tend to be surprisingly unaware of how their own views fit together, and how they change, let alone anyone else’s.

In fact, the smarter and more able you are, the more likely you are to ingeniously find reasons to stick with your current view. The timing is wrong. Or there are issues with the data. Or the plan hasn’t been implemented thoroughly enough. Or another set of data points to a different conclusion.

So as the big decisions loom in coming months, I’ll be asking how and why people shift their views, and where the blind spots are.

July 29, 2013 | Central Banks, Confirmation bias, Decisions, Irrational Consistency, Psychology

The Tylenol Test

There is a certain cast of mind that finds it very difficult to grasp or accept that something may be a very good idea up to a point – but not beyond it.

Some people yearn for universal rules. If the tools of economics or quantitative methods or Marxism or sociology or Kantian ethics or government regulation have any usefulness, they believe, they ought to apply to everything, always. If there is a problem, the answer is always even more of the same.

But common sense says that’s not actually true. Two Tylenol pills will cure a headache. That does not mean taking forty pills at once will be even better.

Call that the Tylenol Test. What is good in small doses can kill you in excess.

You’ll see I often talk about the limits of things here, whether it’s big data or academic economics or the impact of monetary policy. That does not mean those things – and others like them – are not useful, or valuable, or sometimes essential. I support all of them extremely strongly in the right circumstances. They can clear confusion and cure serious headaches.

It’s not a binary good/bad distinction.

Instead, it’s the right medicine at the right time in the right amount that keeps you healthy and makes you ready to seize opportunities. (That echoes the ancient Aristotelian idea of the golden mean, of course.)

If you overdose, you’re in trouble. Maybe that’s a universal rule….

When the stopped clock is right

I talked in the post below about how differences over policy or macro expectations are seldom resolved by data alone. It is rare that people change their minds or expectations because of a single piece of data (or even huge volumes of data).

As the great physicist Max Planck said (a little pessimistically) in his Scientific Autobiography,

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

This is a much more general problem, extending well beyond science. Here is one of the classic works about how leaders make decisions in international affairs:

Those who are right, in politics as in science, are rarely distinguished from those who are wrong by their superior ability to judge specific bits of information. … Rather, the expectations and predispositions of those who are right have provided a closer match to the situation than those who are wrong. – Robert Jervis, Perception and Misperception in International Politics

This is like the stopped clock problem: the clock will of course be correct twice a day even though it never changes. People who are consistently, relentlessly bearish will be proven right sometimes, when the market plunges (and vice versa). Those who claim to have predicted the great crash in 2008 were often simply lucky to have their predisposition temporarily match the situation.
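
A quick scorecard makes the point (the base rate here is invented purely for illustration): a forecaster who calls a crash every year never misses one, but almost all of their calls are false alarms.

```python
def always_bearish_scorecard(crash_base_rate=0.1, years=50):
    """Score a forecaster who predicts a crash every single year.
    The base rate and horizon are assumptions chosen for illustration."""
    crashes = crash_base_rate * years
    calls = years                      # one crash call per year
    recall = 1.0                       # never misses a crash that happens...
    precision = crashes / calls        # ...but most crash calls are wrong
    return recall, precision

print(always_bearish_scorecard())      # (1.0, 0.1)
```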

But waiting to be occasionally right is not good enough if you have to make successful decisions. Instead, you need to be much more explicit about testing how viewpoints and perspectives fit together, and more rigorous about finding ways to identify blind spots and disconfirm your own views. It doesn’t matter how much information you gather if you don’t use it to test your assumptions and predispositions.

 

May 14, 2013 | Decisions, Irrational Consistency, Perception, Psychology