Confirmation bias


A Thousand Years since the election (or so it feels)

So this is what warp drive must feel like. Doesn’t it seem as if we’ve had enough drama and high-pitched emotion since the election to have punched a hole in normal reality? Emotions have been running so high that things are getting a little distorted in this stretch of the universe. People I know on both sides are saying shocking things, more extreme than I would have ever imagined them saying.
Time to calm things down. One of the danger signs of trouble is when politics becomes too moralized, a kind of substitute for religion. Public debate should not be a war waged over ultimate right and wrong. That didn’t work out too well in the Wars of Religion in 17th-century Europe, or the clash of ideologies in the 20th century. Indeed, a desire to calm such disputes is the reason we have supposedly secular (“post-Westphalian”) nation states and freedom of religion in the West today. It’s not because of deep respect for faith. Instead, it turned out that insisting on one correct belief had the unfortunate tendency to produce millions of corpses. People unfortunately have an inherent tendency to get pig-headed and extreme (and hypocritical) if they think they are guardians of all that is right and true.
Instead, a better question is: does a policy work?  Instead of saving the souls of the poor, does it at least feed them or educate them? If a plan doesn’t work, why not? If it works, are we becoming complacent?
In the same way, if you’re unhappy with Trump, what could Obama or Clinton have done differently? Why has the Davos order become so unpopular with many groups? If you’re happy with Trump, where’s he most likely to mess up?
That might be less emotionally satisfying than talking about noble principles. But it also means more attention to potential opportunities or practical problems you can fix. Success is not usually a matter of pushing harder for ultimate truth and light, or final victory for one side or the other. Instead, people fail to notice things, or wish them away. They have blind spots. If Clinton had talked a little less about “who we are” in a general way but visited Wisconsin more often, things might be different. So why did she and almost all her experts and pollsters and big-data modelers fail to see that? Solve that one and it will probably help the Democrats more than any number of protests.
Being able to notice unwelcome things usually matters more to making things work than being righteous. It’s usually complacency and hubris that trip people up, not the absence of the correct universal moral rule or ideal policy or narrative.
If you’re upset about Trump, that means you need less comforting validation of what a fine and superior person you are and how dumb and terrible the other side is, and more ways to figure out what your side did wrong and how to fix it. That’s not easy. Trump opponents think he is pure id, dark malevolent instinct (and maybe they are right!).
No doubt Trump will be just as blind, if not more so. He will blunder into problems he doesn’t see coming, and fail to listen to people with a different perspective. For Trump supporters, mad at the liberal media (and maybe they are right!), how do you actually restore trust in institutions instead of breaking them down? How do you make change more permanent, instead of following the media cycle?
Above all, getting policy right usually does not mean bull-headed persistence and triumph of the will. It means being able to see things from different viewpoints, so you can find blind spots or oversights or a need to adjust to circumstances, instead of confirmation that you are so marvelous and right. And that is more difficult than just about anything, as the extreme emotional pitch right now shows.
By | February 8, 2017|Confirmation bias, Current Events, Politics|Comments Off on A Thousand Years since the election (or so it feels)

US election shock: You’ll forget the models were wrong within a few weeks.

If there's one thing I've consistently argued on this blog, it's that predictions are usually a waste of time and money. Instead, test your assumptions. Don't just “make assumptions explicit.” Look for how you might be wrong, because then you can do something about it.

So how did that play out, the morning after the US Presidential election? Leave aside your horror or elation. This isn't a partisan point. No matter what your politics or feelings about the result, there's a pattern of bad decisions and misjudgment here. And everyone will also forget that pattern within weeks.

Reuters:

With hours to go before Americans vote, Democrat Hillary Clinton has about a 90 percent chance of defeating Republican Donald Trump in the race for the White House, according to the final Reuters/Ipsos States of the Nation project.

The Huffington Post put Clinton's chances at 98%. (98%!)

The HuffPost presidential forecast model gives Democrat Hillary Clinton a 98.2 percent chance of winning the presidency. Republican Donald Trump has essentially no path to an Electoral College victory.

Huffpo also rather sneeringly attacked Nate Silver's 538 for estimating Clinton's chances at a mere 65%.

While I love following the prediction markets for this year’s election, the most popular and widely quoted website out there, fivethirtyeight.com, has something tragically wrong with its presidential prediction model. With the same information, 538 is currently predicting a 65 percent chance of a Clinton victory

As for The NY Times, their final prediction was:

“Hillary Clinton has an 85% chance to win”

It's easy to criticize in hindsight. But why do people keep doing this? Why do naive people keep believing this kind of faux-technocratic nonsense? It just leads people to damaging self-delusion, not just in politics but in business and markets.

Elaborate models and data are no defense against wishful thinking. “Big data” does not protect you against many kinds of error. Monte Carlo simulations can be foolish. How could people possibly put a 98% chance on an election that was close to the margin of error in the polls, especially after the lessons of the shock results of Brexit, the Greek referendum and many others?

But they did. Financial markets were bamboozled, for example. Again.

Reuters: Wall Street Elite stunned by Trump triumph.
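
Here is one way to see where the false certainty comes from. This is a toy Monte Carlo sketch, not any forecaster's actual model; every number in it (the lead, the margin of error, the number of states) is invented for illustration. The same polling lead looks near-certain if you assume each state's polling error is independent, and far less certain once the errors share a common national component.

```python
import numpy as np

# Toy illustration only, with invented numbers. A candidate leads by
# `lead` points in each of `states` battlegrounds; the polling error
# per state has standard deviation `moe`.
rng = np.random.default_rng(0)
n_sims = 100_000
lead, moe, states = 2.0, 4.0, 21

def win_prob(shared_sd):
    """P(candidate carries a majority of states) when each state's error
    is a shared national miss plus an independent state-level miss."""
    local_sd = np.sqrt(max(moe**2 - shared_sd**2, 0.0))
    national = rng.normal(0.0, shared_sd, (n_sims, 1))
    local = rng.normal(0.0, local_sd, (n_sims, states))
    margins = lead + national + local
    return ((margins > 0).sum(axis=1) > states / 2).mean()

print(win_prob(0.0))  # errors independent: probability looks near-certain
print(win_prob(3.5))  # errors mostly national: far more modest
```

The polls are exactly as accurate in both runs. The near-certainty is an artifact of the independence assumption, which is precisely what correlated misses like Brexit should have warned against.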

We need a better way to do this. Instead of models, you need an antimodel, which is what I am developing.

By | November 9, 2016|Assumptions, Confirmation bias, Forecasting, Politics|0 Comments

There is a deeper pattern in why the GOP Establishment is in trouble

“What went wrong?” is the underlying question in David Frum’s much-talked-about piece in the Atlantic on the “Republican Revolt”. How could an establishment candidate like Jeb Bush, who was expected to be almost irresistible and had raised more than $100 million, now be running at 3-4% in the polls? How can the GOP primary race have been hijacked by a reality TV star at the expense of experienced Governors and Senators?

Let’s leave aside the betting on who will finally get the nomination, or how good or bad Trump is, as most of the media focuses on little else and most journalistic speculation is essentially useless. To be sure, despite his consistent lead, the Donald may not be inevitable as the field thins out and the ‘ground game’ of turnout becomes important.

It’s just that as of now, some of the most powerful, elite and supposedly expert people in US politics look like losers. This is really not where they want to be.

So let’s coolly step back and look at the pattern. How could the establishment miscalculate so badly? What does this tell us about why decisions go wrong? Could other elites have the same problem?

In a nutshell, people refused to see contrary evidence.

Many establishment policies were not popular with the GOP base, Frum says. Less than 17% favored cuts in Social Security, for example. Most wanted more deportations of illegal immigrants, the exact opposite of a pathway to citizenship.

As a class, big Republican donors could not see any of this, or would not. So neither did the politicians who depend upon them. Against all evidence, both groups interpreted the Tea Party as a mass movement in favor of the agenda of the Wall Street Journal editorial page. One of the more dangerous pleasures of great wealth is that you never have to hear anyone tell you that you are completely wrong.

They could not see things that did not fit in their frame. They could not learn from errors or defeats. The establishment had been shocked at Romney’s loss in 2012, for example.

And yet, within hours of Romney’s defeat, Republican donors, talkers, and officials converged on the maximally self-exculpating explanation.

That meant Republican leaders decided the problem was Romney’s talk about more immigration enforcement alienating Latinos, the very issue where the establishment differed most from its base and where hard evidence of votes to be gained in the center was (Frum says) mostly lacking.

Otherwise, the party yielded on nothing and doubled down on everything. No U-turns. No compromises.

Instead of adjusting to minimize or forestall the chance of a revolt, or finding a smart alternative way forward, the leadership interpreted things in self-serving terms and escalated.

This, of course, is a problem that is extremely widespread and not confined to the GOP. We saw exactly the same thing on all sides in the last midterm elections.

Perhaps the establishment will be able to adapt now that their problem is (you would think) undeniable and it is darkest before dawn for them. Or they can double down again. But serious damage has been done, and some ground rules of US politics – like the importance of raising money – have been rewritten.

Here’s the takeaway. Once again we see in this example that the fundamental problem with decisions is not really bias, or lack of formal rigor, or failure to gather data. It’s that people most often don’t change their minds in response to evidence. Or they fail to adapt until so late in the game that all the choices are bad. That’s what we need to fix, and fixing it would save countless billions of dollars and tens of thousands of companies and careers.

The most brilliant investors intuitively realize this. But as this incident demonstrates, most leaders and managers and policymakers do not. They are surrounded by yes-men. They stick with the familiar. They are clever enough to explain away facts which do not fit their narrative.

People get stuck, and persist too long in self-delusion. They fail to adapt and move when they still have the chance. If you can mitigate that, you can do more than most crystal balls could ever do. After all, if the only thing you see in a crystal ball is your own wishful thinking, what good is it?


By | December 27, 2015|Confirmation bias, Current Events, Decisions, Politics|Comments Off on There is a deeper pattern in why the GOP Establishment is in trouble

The sun doesn’t go round the earth, after Paris

Evidence should be a fundamental part of any discussion of what to do in the wake of the Paris attacks, I said yesterday. Do you agree with that? Instead we most often make assumptions about “what the terrorists want” or discuss things on such an abstract level (“they hate freedom”) that there’s little link to reality at all.

The trouble goes much deeper, though, because even when people use evidence (which is something to be grateful for), they cherry-pick it. It’s riddled with confirmation bias, and it mostly doesn’t prove anything at all.

Remember this in reading all the op-eds from experts on terrorism and the Middle East you’ll see in the next few weeks: the success of expert predictions in this area is about as good as that of dart-throwing chimps. Predictions from the most learned Syria and ISIS and intelligence experts are likely to be useless, just like most economic and political predictions. People can know almost everything about the issue – and still get things completely wrong.

If gathering information and evidence alone clearly isn’t enough, what do we do?

Here’s the further essential thing to grapple with: the most likely explanation or hypothesis is not the one with the most information lined up for it. It’s the one with the least information against it.

That rule is taken from Richards Heuer’s Psychology of Intelligence Analysis, and lies at the root of his method of Analysis of Competing Hypotheses.

The root of the problem is that most information can be consistent with a whole variety of explanations. You can integrate it into a number of completely different, satisfying, and incompatible stories. That means the most genuinely useful information is diagnostic; that is, consistent with only one or a few explanations. It helps you choose between different explanations.

Think of it this way. There was plenty of seemingly obvious evidence for thousands of years that the sun went round the earth. The fact that the sun rises and sets could be read as consistent with either the sun or the earth at the center of the solar system. So that evidence doesn’t help very much. You need to find evidence that can’t be read in favor of both. (That’s another story.)
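
Here is a minimal sketch of that scoring rule in code, using the same sun-and-earth example. The evidence rows and ratings are invented for illustration; Heuer's actual method is a matrix filled in by analysts, not a script.

```python
# Toy version of Heuer's Analysis of Competing Hypotheses (ACH):
# rank hypotheses by the evidence *against* them, not for them.
hypotheses = ["sun orbits earth", "earth orbits sun"]

# "C" = consistent with that hypothesis, "I" = inconsistent.
# Evidence consistent with everything is non-diagnostic: piling
# up more of it cannot help you choose.
evidence = {
    "sun rises and sets daily":  ["C", "C"],  # fits both: useless
    "stellar parallax observed": ["I", "C"],  # diagnostic
    "Venus shows a full phase":  ["I", "C"],  # diagnostic
}

def least_refuted(hypotheses, evidence):
    """Rank hypotheses by how little evidence contradicts them."""
    scores = [sum(marks[i] == "I" for marks in evidence.values())
              for i in range(len(hypotheses))]
    return sorted(zip(scores, hypotheses))

for score, hypothesis in least_refuted(hypotheses, evidence):
    print(f"{hypothesis}: {score} piece(s) of contrary evidence")
```

Adding a hundred more rows like the first one would change nothing; only the inconsistent cells move the ranking.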

But that essential diagnostic information can be surprisingly difficult to find, especially because people rush to find facts that fit with their existing views.

What happens instead is that the more information people gather, the more (over)confident they become of their point of view, regardless of the validity or reliability of the information. They don’t think about the information. They just more or less weigh the total amount of it.

So what is needed instead is a kind of disconfirmation strategy.

Hold on, you might say. Doesn’t this mean we have to stop and think for a moment before jumping to our favorite recommendation? And isn’t that a pain which we’d rather avoid? Isn’t that uncomfortable?  Isn’t this a little austere and unglamorous compared with colorful and vivid stories and breathless reporting?

Yes. Repeat: Yes. All the information and opinion and sourcing and satellite photography in the world doesn’t help you if you ignore disconfirmation. It’s a lot less painful than wasting billions of dollars and potentially thousands of lives, and failing. There are some very practical ways to do it, too.


By | November 16, 2015|Assumptions, Confirmation bias, Expertise, Security|Comments Off on The sun doesn’t go round the earth, after Paris

Unlearn what you know, or go extinct


People can show remarkable dexterity (or self-deception) at deferring blame when a situation goes badly wrong, like a company collapse, or a foreign policy crisis. Or the FBI knocking on your door, asking for hard drives with top secret e-mails on them. How could anyone have foreseen it? It was business as usual, everyone did it, it was tried and tested. The problem was impossible to see, and therefore no one is to blame. Or it was just bad luck.

Unfortunately, that's almost never true.

In every crisis we studied, the top managers received accurate warnings and diagnoses from some of their subordinates, but they paid no attention to them. Indeed, they sometimes laughed at them.

That’s the conclusion from one of the classic studies of organizational failures, Nystrom & Starbuck in 1984. Some people in a company almost always see problems coming (we’ve seen other research about “predictable surprises” here). But senior managers find it extremely difficult to “unlearn” parts of what they know.

Organizations succumb to crises largely because their top managers, bolstered by recollections of past successes, live in worlds circumscribed by their cognitive structures. Top managers misperceive events and rationalize their organizations’ failures. … Because top managers adamantly cling to their beliefs and perceptions, few turnaround options exist. And because organizations first respond to crises with superficial remedies and delays, they later must take severe actions to escape demise.

Instead, the researchers say, managers try to “weather the storm” by tightening budgets, cutting wages, introducing new metrics or redoubling efforts on what has worked before. They typically waste time, and defer choices. In the meantime, the firm filters out contrary evidence, and often gets even more entrenched in its ways. This is normal corporate life.

… well-meaning colleagues and subordinates normally distort or silence warnings or dissents. … Moreover, research shows that people (including top managers) tend to ignore warnings of trouble and interpret nearly all messages as confirming the rightness of their beliefs. They blame dissent on ignorance or bad intentions – the dissenting subordinates or outsiders lack a top manager’s perspective, or they’re just promoting their self-interests, or they’re the kind of people who would bellyache about almost anything. Quite often, dissenters and bearers of ill tidings are forced to leave organizations or they quit in disgust, thus ending the dissonance.

And then one morning it turns out it’s too late, and there is no more time.

The only solution that reliably works, Nystrom and Starbuck say, is to fire the whole top management team if there are signs of a crisis. All of them.

But top managers show an understandable lack of enthusiasm for the idea that organizations have to replace their top managers en masse in order to escape from serious crises. This reluctance partially explains why so few organizations survive crises.

The only real hope is to adapt before you have to. But the much more likely outcome is that senior decision-makers end up eliminated, destroying their companies and their company towns and employees and stakeholders along the way.

Just think about what might fix this. It isn’t more information or big data, as it will probably be ignored or discounted. It isn’t forecasts or technical reports or new budgets or additional sales effort. It isn’t better or more rigorous theory, or forcing the troops to work harder.

It’s a matter of focusing on and looking for signs of how people change their minds. It’s about figuring out what might count as contrary evidence in advance, and sticking to it. If you’re a senior decision-maker, this might be the only thing that saves you before some outside investor or opponent decides the only hope is to wipe the slate clean, including you. That is, if you figure out you need it in time. Will you?


Another day, another (Apple) disaster for prediction

Barry Ritholtz at Bloomberg View is dazzled by the latest Apple earnings, but asks a tough question.

Far beyond what anyone forecast, the figures show Apple arguably had the single-greatest quarterly performance in U.S. corporate history. … The rest of the numbers were, by all accounts, stupendous, enormous, mind-blowing, record-breaking. Yet it seems analysts were, once again, blindsided by the data.

How is it even remotely possible that Wall Street analysts have no idea what the biggest company in the world is doing?

The answer is complex, involving many elements in the quarterly earnings dance. […]. But the shorter answer is that Wall Street analysts are not especially good at forecasting.

The deeper reason, he says, is analysts and traders get locked into narratives, and that leads to confirmation bias and cognitive dissonance.

Traders and investors might prefer the way technical analyst Ned Davis reduced it to its most important element: “Would you rather be right or make money?”

People most often prefer to be “right”, all the way up to the point of losing everything. That is our focus. Prediction sounds good in principle, but fails in practice. It’s looking at assumptions and scripts and avoidable mistakes that actually moves the needle.


By | January 29, 2015|Assumptions, Confirmation bias, Industry Trends, Market Behavior|0 Comments

What is really interesting about the Midterms is the after-action, self-serving spin

I found it hard to get excited about the huge Republican gains in the midterms last week. It’s a great story if you focus mostly on the horse-race aspects – who is up, who is down – and these are of absorbing interest to those on the inside of the process, of course. But the actual consequences are limited, so long as the Senate needs 60 votes to get almost anything done (apart from a few arcane parliamentary budget-related matters), and the President still wields his veto.

Instead, what is fascinating is how people have reacted to the Republican victory since. Naturally, there is an extremely strong tendency to spin it so as not to threaten pre-existing views. Take the Democrats first. This is Clive Crook for Bloomberg View yesterday:

Supporters of the Democratic Party have many theories to explain the drubbing they were handed on Election Day. The explanations seem to boil down to one basic proposition, however: Voters are too stupid to know what’s good for them.

Let me say it clearly: The Democratic Party will continue to underperform until it learns to take election beatings a bit more personally.

The sheer variety of theories based on the stupidity of voters is what’s so impressive. For instance, the Obama administration’s record is good, and the economy is finally doing better; but voters are too stupid to see that. Or: The policy record is poor and the economy is screwed, which is the Republicans’ fault for paralyzing Washington; and voters are too stupid to see that.

Paul Krugman, as usual, is top of the list of offenders. He takes the Republican surge as evidence of the success of mulish obstruction, not potential unpopularity or problems with liberal policies themselves.

But the biggest secret of the Republican triumph surely lies in the discovery that obstructionism bordering on sabotage is a winning political strategy.

It wasn’t anything to do with unpopularity or difficulties with Democratic policy, you see.

The same applies to Republican explanations of their own victory. To the Republican establishment, success was largely because of beating back crazy Tea Party primary challengers and maintaining disciplined message control. In other words, it was wholly thanks to … the Republican establishment, not the mood out in the country itself or even primarily a shift right among voters. For many in the Tea Party, the huge victory last week shows just how wrong the Establishment and the media were to predict that shutting down the government last fall would lead to electoral disaster and voter repudiation. The result (apparently) was the opposite, but only the Tea Party has the guts to take on Washington.

So people are perfectly capable of selectively interpreting events to suit themselves. This is no surprise to the average member of the human race, or anyone who has ever encountered an organization.

But here’s the thing: it also shows most people have immense resistance to learning from evidence. And organizations are even worse. What matters in practical terms is usually not information or events or surprises, but how people assimilate new evidence (or, more often, refuse to react to it). People generally don’t learn from evidence in a smooth or rational way.

This does not matter so much if your job is just commentary or opinion, and there is no penalty for getting things wrong. Journalists and politicians can survive wrong prognostications, but not failing to be vivid or to motivate their followers. Anyone who has to make actual decisions, however, cannot afford just to ignore contrary evidence. But the great business and market failures mostly come about because smart, able, senior people pervasively do ignore uncomfortable or opposing facts.


By | November 11, 2014|Adaptation, Confirmation bias, Current Events, Politics|Comments Off on What is really interesting about the Midterms is the after-action, self-serving spin

How Politics can go Loopy

The midterm elections today will likely just produce the usual cyclical swing against the party in power.  The national debate has been particularly arid this year, largely focused on targeted messages to mobilize the base instead of changing people’s minds.

But much of the difference between people, and points of view, is not about the direct or immediate effects of particular policies, anyway. It’s not about immediate facts, or even always about immediate interests. According to Robert Jervis, in System Effects: Complexity in Political and Social Life,

At the heart of many arguments lie differing beliefs – or intuitions – about the feedbacks that are operating. (my bold)

It’s because, as we saw before, most people find it very hard to think in systems terms. Politicians are aware of indirect effects, to be sure, and often present that awareness as subtlety or nuance. But they usually seize on one particular story about one particular indirect feedback loop, instead of recognizing that in any complex system there are multiple positive and negative loops. Some of those loops improve a situation. Some make it worse, or offset other effects. Feedback effects operate on different timescales and different channels. Any particular decision is likely to have both positive and negative effects.

The question is not whether one particular story is plausible, but how you net it all together.
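
To make “netting it together” concrete, here is a toy sketch in code. The loops, weights and lags are invented and stand for no particular policy; the point is that the identical structure produces opposite outcomes depending on the relative strength and timing of the loops.

```python
# Toy two-loop system, invented purely for illustration: a quantity x
# is pushed up by a reinforcing loop (immediate) and pulled down by a
# balancing loop that reacts only after a lag.
def simulate(reinforce, balance, lag=3, steps=60):
    x = [1.0]
    for t in range(steps):
        push = reinforce * x[-1]
        pull = balance * (x[-lag] if t >= lag else x[0])
        x.append(x[-1] + push - pull)
    return x[-1]

print(round(simulate(reinforce=0.20, balance=0.10), 2))  # reinforcing loop wins: grows
print(round(simulate(reinforce=0.10, balance=0.20), 2))  # balancing loop wins: dies away
```

Whether the reinforcing or the balancing loop dominates is a quantitative question of strength and timing, not something a single plausible narrative can settle.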

Take the example of Ebola again. The core of the administration’s case was that instituting stricter visa controls or quarantine in the US might have the indirect effect of making it harder to send personnel and supplies to Africa, and containing the disease in Africa was essential.

That is likely true. It is a story which seems coherent and plausible. But there is generally no attempt to identify, let alone quantify or measure, other loops which might operate as well, including ones with a longer lag time. Airlines may stop flying to West Africa in any case if their crews fear infection, for example. Reducing the chance of transmission outside West Africa might enable greater focus of resources or experienced personnel on the region. More mistakes in handling US cases (as apparently happened in Dallas) might significantly undermine public trust in medical and political authorities. You can imagine many other potential indirect effects.

The underlying point is this: simply identifying one narrative, one loop is usually incomplete.

Here’s another example, at the expense of conservatives this time. Much US foreign policy debate effectively revolves around “domino theory”, and infamously so in the case of the Vietnam war. The argument from hawks in the 1960s was that if South Vietnam fell, other developing countries would also fall like dominoes. So even though Vietnam was in itself of little or no strategic interest or value to the United States, it was nonetheless essential to fight communism in jungles near the Laos border – or before long one would be fighting communism in San Francisco. Jervis again:

More central to international politics is the dispute between balance of power and the domino theory: whether (or under what conditions) states balance against a threat rather than climbing on the bandwagon of the stronger and growing side.

You can tell a story either way: a narrative about positive feedback (one victory propels your enemy to even more aggression) or balancing feedback (enemies become overconfident and overstretch, provoke coalitions against them, alienate allies and supporters; or if we act forcefully it will produce rage and desperation and become a “recruiting agent for terrorism”).

The same applies to the current state of the Middle East, where I have a lively debate going with some conservative friends who believe that the US should commit massive ground forces to contain ISIS in the Middle East, or “small wars will turn into big wars.” It’s in essence a belief that positive feedback will dominate negative/balancing feedback, domino-style.

But you can’t just assume such a narrative will play out in reality. South Vietnam did fall, after all. But what happened next was that the Soviet Union ended up overreaching in adventures like the invasion of Afghanistan. The other side collapsed.

The lure of a particular narrative, of focusing on one loop in a system, is almost overwhelming for many people, however. It’s related to the tendency to seize on one obvious alternative in decisions, with limited or no search for better or more complete or relevant alternatives.

The answer is not to just cherry-pick particular narratives about feedback loops and indirect effects which happen to correspond with your prior preferences. That usually turns into wishful thinking and confirmation bias. Instead, you need to get a feel for the system as a whole, and have a way to observe and measure and test all (or most of) the loops in operation.

By | November 4, 2014|Assumptions, Confirmation bias, Cyclical trends, Inertia, Security, Systems and Complexity|Comments Off on How Politics can go Loopy

How people think about Ebola

Isn’t it strange how emotive and ethically high-strung the debate about Ebola has become? Much of the press is flinging accusations about “hysteria”, and quarantine rules have led to vicious partisan exchanges. I think it’s better to step back here and ask why epidemiology should have become such a moralized partisan issue. There are some obvious blind spots here.

Liberals are enraged at the thought of quarantine and travel restrictions, while conservatives have been much quicker to embrace them. Why? I think it is because of the central importance of the notion of “fairness” in politics. According to Jonathan Haidt’s fascinating research, people are sensitive to different moral considerations in much the way they have different taste buds on the tongue, like sweet or salty. Haidt identifies five (later six) moral taste buds. Liberals perceive issues almost entirely in terms of just two: care-harm and fairness-equality. Conservatives are receptive to those moral “tastes” but also pick up other values – authority, loyalty, and sanctity – which are more adapted to group cohesion. In fact, most people in most global cultures perceive the wider spectrum of moral considerations, perhaps because they have had adaptive value in traditional societies over long spans of time.

This is from an NYT review of Haidt’s research, but you should read his whole book, The Righteous Mind: Why Good People Are Divided by Politics and Religion.

To the question many people ask about politics — Why doesn’t the other side listen to reason? — Haidt replies: We were never designed to listen to reason. When you ask people moral questions, time their responses and scan their brains, their answers and brain activation patterns indicate that they reach conclusions quickly and produce reasons later only to justify what they’ve decided.

Think about what this means for how people make and anticipate policy decisions. Both sides of the partisan divide tend to talk past each other.

Haidt started out as very liberal, but experiences such as living in India persuaded him that different cultures and people saw things in different ways.

The hardest part, Haidt finds, is getting liberals to open their minds. Anecdotally, he reports that when he talks about authority, loyalty and sanctity, many people in the audience spurn these ideas as the seeds of racism, sexism and homophobia. And in a survey of 2,000 Americans, Haidt found that self-described liberals, especially those who called themselves “very liberal,” were worse at predicting the moral judgments of moderates and conservatives than moderates and conservatives were at predicting the moral judgments of liberals. Liberals don’t understand conservative values. And they can’t recognize this failing, because they’re so convinced of their rationality, open-mindedness and enlightenment.

Haidt isn’t just scolding liberals, however. He sees the left and right as yin and yang, each contributing insights to which the other should listen.

So what has this to do with Ebola? The issue could almost be designed to cleave along this moral perception fracture. Liberals perceive quarantine or restrictions on returning medical personnel or West African visa applicants as highly unfair to the individuals concerned. They are not as receptive to considerations of protecting a particular country or territory from the virus, which is the main focal point for conservatives. Furthermore, people of all persuasions have a hard time perceiving or acknowledging trade-offs between different values and objectives. In practice, liberals are unwilling to trade even a small amount of fairness for other values, because they believe they don’t have to make a choice. Hence loosening quarantine restrictions on returning healthcare workers is assumed not to make a disease outbreak in the US more likely, because what is fair must also be effective. That is a big assumption.

There are other problems here I’ll come to, including the nature of expertise and the problem of low-probability, high-impact risks. Conservatives have other problems I’ll return to.

But suppose you’re a liberal reading this. Do you have to change your view or concede the other side is right? No. Believe what you want, as ardently as you want, and you can think the other side is dumb. But here’s the real point. If you have to make actual decisions, instead of just taking rhetorical positions, then whatever your partisan convictions, you can’t expect your particular viewpoint to be right every single time. No one is made with an automatic hotline to god-like omniscient truth. So set up a few markers for yourself that help tell you when you should reexamine the evidence or change your mind. Just for yourself, have a few guardrails that help you recognize contrary evidence when it doesn’t fit in with your natural instincts or assumptions.

It’s because people tend to instinctively perceive ethical choices in certain ways and then invent reasons to justify their choice that this kind of quasi-ethical fight about public policy can get so hard to resolve – and hugely dangerous assumptions can get overlooked. So long as you have something to lose if you’re wrong, it helps to understand where the other side is coming from.


By | October 29, 2014|Assumptions, Confirmation bias, Current Events, Perception, Security|Comments Off on How people think about Ebola

Inflation expectations are vastly important – and vastly misunderstood


The collapse in European bond yields has been truly historic this year, with German 10-year bunds now hovering around 0.9%. Danger lights are flashing. There are obvious explanations: above all, growing deflation fears, as well as faltering economic data and Draghi’s comments last week about fiscal support and QE. Add to that some safe-haven-related flows because of the fighting in Ukraine. The ECB is now in full alarm mode because inflation expectations are dropping rapidly.

The terrifying thing about deflation is that expectations of falling prices can feed on themselves and become self-fulfilling, creating a chain reaction of deep problems in a modern economy. The question is what policymakers can do about it.

The dirty secret about modern central banking is that monetary theorists understand very little about the process of expectation formation. That is why so much policy debate drifts into irrelevance.

Economic policymakers usually turn it into a debate about credibility, stemming from the Kydland-Prescott academic tradition, which focuses on time consistency and credible commitments. It is often rational to break previous commitments, so why should anyone believe current promises? It also gets linked to another somewhat stale debate about “adaptive” versus “rational” expectations. For example, much of the difference in opinion on the FOMC comes down in practice to disagreement about how forward-looking rather than backward-looking consumers are when it comes to inflation expectations. Do consumers and businesses just observe the recent trend, or anticipate problems before they arrive?

The amount of actual empirical work on the matter is negligible, however. It is mostly prescriptive theory rather than descriptive or experimental work. And thinking about credibility in policy debates tends to be sloppy, with dozens of traps. One major lesson is that credibility is heavily dependent on context, not something you can apply to any situation.

The importance of expectations has, however, led to much more emphasis on policy communication in the last few years, as a matter of practical necessity (and desperation). Monetary policy has become more like theater than math or engineering. How can you sound more credible? How can you make statements believable? How can you get people to understand your approach? Hence Yellen’s endless communication committee work on the FOMC before taking the top job.

But the deeper truth is academic economists just don’t have the skills or tools to understand much about communication, because of course it falls into psychology and organizational science and rhetoric and persuasion instead. Parsimonious mathematical models are not adequate guides in these realms. And people can reasonably doubt whether policymakers have the skill and capability to deliver, whatever their intentions may be.

Instead, it comes down to asking why people change their minds. That is my main focus in policy issues. Everyone knows from their own experience that attitude change is often a drawn-out, fraught, conflicted process. People often see only what they want to see for long periods of time. They can be influenced by networks and relationships and trust, by familiarity and the salience of issues within their larger sphere. They observe facts, but can explain them away or ignore them. (Watch any TV political debate.) There are long time lags and considerable inertia. And many people never change their beliefs at all, regardless of the evidence.

It is also a classic stock-and-flow systems problem. Inflation expectations in particular are usually very sticky, and take a long time to change. Think of a bathtub: it potentially takes a lot of drip-drip information (flow) to change the amount of water in the bathtub (stock), but the system can also change abruptly (the bathtub overflows). People frequently forget that many policy issues have major stocks – i.e. bathtubs, sinks, buffers – contained within them, and so do not react in a linear way to marginal change. There are complex positive and negative feedback loops, and decisive events can change things rapidly. Expectations aren’t simply “adaptive” or “rational” but complex.
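
Here is the bathtub as a minimal sketch in code. The 5% stickiness and the snap threshold are invented for illustration, not estimates for any real economy; the point is the nonlinearity.

```python
# Bathtub sketch: expectations are a stock; each period's inflation
# surprise is a small flow into or out of it, until a large enough
# surprise breaks the regime and the stock re-anchors abruptly.
def update_expectations(expected, actual, stickiness=0.05, snap=1.5):
    surprise = actual - expected
    if abs(surprise) > snap:                   # the bathtub overflows
        return actual                          # abrupt re-anchoring
    return expected + stickiness * surprise    # slow drip otherwise

expected = 2.0                                 # anchored near the 2% target
for actual in [1.8, 1.7, 1.6, 1.5, 0.2]:       # low prints, then a shock
    expected = update_expectations(expected, actual)
    print(round(expected, 2))                  # 1.99, 1.98, ... then 0.2
```

Months of drip-drip surprises barely move the stock; one surprise past the threshold re-anchors it at a stroke. A linear model fitted to the drip phase would miss the overflow entirely.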

Policy tools like fiscal policy and QE most likely do not make much difference to consumer expectations, certainly in the short term. Just ask the Japanese how successful QE and massive fiscal spending have been in putting their economy back on a sustained growth path.

Because there is so much inertia in inflation expectations, it’s more likely that after a few months European expectations will drift back towards 2% again, and the ECB will claim the credit for something it had little to do with. But if inflation expectations really are becoming destabilized, it could take five or ten years and vast pain to fix the problem.

By | August 28, 2014|Bonds, Central Banks, Confirmation bias, Current Events, Europe, Inertia, Inflation, Monetary Policy, Time inconsistency|Comments Off on Inflation expectations are vastly important – and vastly misunderstood