Psychology

How the most common prescription doesn’t cure bias

One of the main messages of most psychological research into bias, and the hundreds of popular books that have followed, is that most people just aren’t intuitively good at thinking statistically. We make mistakes about the influence of sample size, and about representativeness and likelihood. We fail to understand regression to the mean, and often make mistakes about causation.

However, suggesting statistical competence as a universal cure can lead to new sets of problems. An emphasis on statistical knowledge and tests can introduce its own blind spots, some of which can be devastating.

The discipline of psychology is itself going through a “crisis” over the reproducibility of results, as this Bloomberg View article from the other week discusses. One recent paper found that only 39 out of a sample of 100 psychological experiments could be replicated. That would be disastrous for psychology’s standing as a science: if results cannot be replicated by other teams, their validity must be in doubt. The p-value test of statistical significance is overused as a marketing tool or a way to get published. Naturally, there are some vigorous rebuttals in process.

It is, however, a problem for other disciplines as well, which suggests the issues are genuine, deeper, and more pervasive. John Ioannidis has been arguing the same about medical research for some time.

He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
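To see how a figure like that can arise, here is a minimal back-of-the-envelope sketch in Python. The numbers below (the share of tested hypotheses that are actually true, typical statistical power, the usual significance threshold) are illustrative assumptions, not figures taken from Ioannidis’s papers.

```python
# Rough arithmetic behind claims that most published findings are flawed.
# All numbers are illustrative assumptions.

def positive_predictive_value(prior, power, alpha):
    """Share of 'significant' findings that reflect a real effect."""
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects that pass p < alpha
    return true_positives / (true_positives + false_positives)

# Suppose only 1 in 10 tested hypotheses is true, studies have 50% power,
# and the conventional 5% significance threshold is used.
ppv = positive_predictive_value(prior=0.10, power=0.50, alpha=0.05)
print(f"Chance a 'significant' result is real: {ppv:.0%}")   # ~53%
# Add selective reporting or lower power and the figure falls further.
```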

The same applies to economics, where many of the most prominent academics apparently do not understand some of the statistical measures they use. A paper (admittedly from the 1990s) found that 70% of the empirical papers in the American Economic Review, the most prestigious journal in the field, “did not distinguish statistical significance from economic, policy, or scientific significance.” The conclusion:

We would not assert that every economist misunderstands statistical significance, only that most do, and these some of the best economic scientists.
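To make that distinction concrete, here is a toy sketch with made-up numbers. In a large enough sample, an economically trivial effect clears the conventional significance bar comfortably; the p-value says nothing about whether the effect is big enough to matter.

```python
# Toy illustration: statistical significance is not economic significance.
# All numbers are made up for illustration.
import math

n = 1_000_000      # observations per group
effect = 0.002     # difference in group means
sigma = 0.5        # standard deviation of the outcome

z = effect / (sigma * math.sqrt(2 / n))   # two-sample z statistic
print(f"z = {z:.1f}")                                        # ~2.8, p < 0.01
print(f"effect = {effect / sigma:.3f} standard deviations")  # 0.004 sd
# "Significant" at conventional levels, yet far too small to matter for
# policy, profit, or science.
```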

Of course, the problems and flaws in statistical models in the lead-up to the great crash of 2008 are also multiple and by now famous. If bank management and traders do not understand the “black box” models they are using, and their limits, tears and pain are the usual result.

The takeaway is not to impugn statistics. It is that people are nonetheless very good at making a whole set of different mistakes when they tidy up one aspect of their approach. More statistical rigor can also mean more blind spots to other issues or considerations, and the use of technique in isolation from common sense.

The more technically proficient and rigorous you believe you are, the more vulnerable you often become to wishful thinking or blind spots. Technicians have a remarkable ability to miss the forest for the trees, or even the twigs on the trees.

It also means there are (even more) grounds for mild skepticism about the value of many academic studies to practitioners.

March 30, 2016 | Big Data, Economics, Expertise, Psychology, Quants and Models

Initial information distorts how you see a (emerging market) crisis

I’ve often talked about how people change their mind in an uneven, misshapen way. One common pattern is to leap to conclusions on the basis of limited and distorted initial information. That then becomes the frame through which subsequent events are interpreted. It is much more difficult for people to change their mind once those initial pieces of information have solidified, even if the first reports turn out to be wrong.

This often plagues intelligence analysis of volatile economic and political developments. Richards Heuer wrote Psychology of Intelligence Analysis for the CIA; it has since been declassified.

People form impressions on the basis of very little information, but once formed, they do not reject or change them unless they obtain rather solid evidence. Analysts might seek to limit the adverse impact of this tendency by suspending judgment for as long as possible as new information is being received. ...

Moreover, the intelligence analyst is among the first to look at new problems at an early stage when the evidence is very fuzzy indeed. The analyst then follows a problem as additional increments of evidence are received and the picture gradually clarifies–as happened with test subjects in the experiment demonstrating that initial exposure to blurred stimuli interferes with accurate perception even after more and better information becomes available.

The receipt of information in small increments over time also facilitates assimilation of this information into the analyst’s existing views. No one item of information may be sufficient to prompt the analyst to change a previous view. The cumulative message inherent in many pieces of information may be significant but is attenuated when this information is not examined as a whole.
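That last point is worth a minimal arithmetic sketch, with hypothetical likelihood ratios: each individual report barely favors one explanation over another, but compounded properly the small increments add up to a strong signal.

```python
# Small increments of evidence: no single report moves the needle much,
# but examined as a whole they should.  Likelihood ratios are hypothetical.
prior_odds = 1.0         # start indifferent between two explanations
likelihood_ratio = 1.3   # each new report favors explanation A only weakly

odds = prior_odds
for report in range(10):
    odds *= likelihood_ratio   # a consistent Bayesian compounds every increment

probability = odds / (1 + odds)
print(f"After 10 weak reports: odds {odds:.1f} to 1, probability {probability:.0%}")
# Each report alone looks ignorable (1.3 to 1); together they imply ~93%.
```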

Exactly the same thing applies to financial crises. That’s a particular danger with emerging market crises, where traders and investment managers in developed markets often have little or no familiarity with the underlying economic and political context in a country. There is usually a mad market scramble to learn the basic dynamics of the country in question as soon as it hits the headlines – who leads the main parties, how many months of import cover sit in the reserves, the state of the capital account. (The best first step is almost always the IMF Article IV and staff reports.) Traders jump to becoming instant experts on the banking structure of Cyprus or political maneuvers in Thailand.

When the next EM panic hits, it is essential to be very alert to the quality of the initial information before you lock in a view. There is a temptation to want to appear fully informed and decisive too early, and then spend the remainder of the crisis trying to catch up and fix initial errors.

It also pays to remember that country analysts with deep knowledge of local dynamics are frequently the most surprised by developments. (Recall how few Sovietologists predicted the downfall of the USSR.)

 

People make characteristic mistakes in dealing with crises, but it is possible to recognize many of them in advance.

February 2, 2014 | Assumptions, Crisis Management, Current Events, Decisions, Financial Crisis, Psychology

When More Information Makes You Worse Off

How can more information make you worse off? The human mind has a tendency to jump to conclusions based on even very small amounts of irrelevant evidence. In fact, giving people obviously worthless but innocuous additional information can throw their judgment off.

Let’s take an experimental example. One of the most important things ever written about decisions and blind spots is the collection of papers Judgment under Uncertainty: Heuristics and Biases, edited by Kahneman, Slovic, and Tversky in 1982. You can’t get further than the second page of the first article (p. 4) before you come to this example.

Kahneman and Tversky ran an experiment to investigate the effect of base rates on prediction. They showed subjects some brief personality descriptions, supposedly chosen at random from a group of 100 engineers and lawyers. In one run, they told subjects the group had 30 engineers and 70 lawyers; in another run, 70 engineers and 30 lawyers. The experimental victims were asked to estimate the chances that each of the chosen descriptions belonged to an engineer or a lawyer.

If the subjects were only told that 70% of the group were lawyers, they correctly estimated the chances a person was a lawyer at 70%. But if they were given any information about personality, the subjects simply ignored the information about the proportions of the sample. They judged the case almost completely on whether the personality description matched the stereotype of the personality of an engineer or lawyer. (This is an example of representativeness bias. A correct Bayesian estimate should still incorporate prior probabilities.)
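Here is a minimal sketch of the Bayesian arithmetic the subjects skipped. The 30/70 split comes from the experiment; the likelihoods for how strongly a description fits the engineer stereotype are assumed purely for illustration.

```python
# Bayes' rule with the base rate included.  The 30/70 split is from the
# experiment; the "fit" likelihoods are assumed for illustration.
def posterior_engineer(prior_engineer, fit_if_engineer, fit_if_lawyer):
    """P(engineer | description) via Bayes' rule."""
    numerator = fit_if_engineer * prior_engineer
    denominator = numerator + fit_if_lawyer * (1 - prior_engineer)
    return numerator / denominator

# A description that fits the engineer stereotype four times better than the
# lawyer stereotype, drawn from a group that is only 30% engineers:
p = posterior_engineer(prior_engineer=0.30, fit_if_engineer=0.8, fit_if_lawyer=0.2)
print(f"P(engineer | description) = {p:.2f}")   # ~0.63
# With a 70% prior the same description gives ~0.90.  The base rate should
# always shift the answer; the subjects behaved as if it didn't exist.
```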

But even worse, when people were given obviously worthless information, they still ignored the base rates altogether. Subjects were told:

“Dick is a 30 year old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues. “

According to the researchers,

This description was intended to convey no information relevant to the question of whether Dick is an engineer or a lawyer. ... The subjects, however, judged the probability of Dick being an engineer to be 0.5 regardless of whether the stated proportion of engineers in the group was .7 or .3. Evidently, people respond differently when given no evidence and when given worthless evidence. When no specific evidence is given, prior probabilities are properly utilized; when worthless evidence is given, prior probabilities are ignored.

What does this tell us? Base rate neglect is one of the most dangerous blind spots in decisions. And people will seize on invalid and misleading specific information given half a chance, especially if it is consistent with stereotypes or prior expectations. Even just giving people benign but irrelevant information can make them lose track of base rates or other key features about a situation.

People have very serious blind spots when armed with specific, particular information, such as when they have just interviewed a primary source.

There’s an additional 515 pages of potential problems following that example in this book alone.

This discipline of looking at heuristics and biases is not the whole story. Some people have come to grief applying it in a formulaic way. Its great virtue – and flaw – is that it is mostly based on lab experiments, where subjects usually have deliberately limited relevant knowledge. The field has a tendency to want to replace human decision-makers with statistical models altogether. There are plenty of other essentials, like looking at the limits to expertise in real situations. But the Kahneman and Tversky judgment and decision-making (“JDM”) tradition is an essential starting place.


January 5, 2014 | Base Rates, Decisions, Outside View, Psychology

Effective Experts don’t use more information

The biggest problem decision-makers face is rarely a lack of information any more. Everyone is swamped with information. Just look at your Twitter feed alone. Instead, the problem is recognizing what matters.

I’ve talked before about patterns of flaws in experts, and how the actual outcomes of their predictions can often get steadily worse the more famous and credentialed they are (because “hedgehogs” get more media attention than “foxes”). This will come as no surprise to anyone who has watched the talking heads on cable TV.

That said, the fundamental mark of expertise is still recognizing the key factors in a situation.

Here’s how one psychologist who is an expert on expertise puts it. Experts, although they may know more, do not in practice use more information or cues than novices. Often they use less.

Although experts obviously have the ability to access large amounts of information, their performance on any one task reflects only limited information use. Apparently what is acquired in moving from mid-level novice to high-level expert is not the ability to access more information.

So what is it that distinguishes experts from non-experts? What separates the expert from the novice, in my view, is the ability to discriminate what is diagnostic from what is not. Both experts and novices know how to recognize and make use of multiple sources of information. What novices lack is the experience or ability to separate relevant from irrelevant sources. Thus, it is the type of information used – relevant vs. irrelevant – that distinguishes between experts and others.

How Much Information Does An Expert Use? Is It Relevant? By James Shanteau (1992)

Experts usually don’t use all the information they have, and they use less of it than they should (which is why dumb linear models usually outperform them). But experience counts in figuring out what is relevant.

What does this mean in practice? You probably don’t use the information you already have to the extent you could and should. You will almost always do far better by enhancing your ability to recognize the essential elements in the information you have than by paying for more information.

What does Shanteau mean by “diagnostic”? People most often choose a view, or hypothesis, and then look for evidence to confirm it. But in most situations there are several good potential explanations or hypotheses, and evidence is most often consistent with many of them. What matters is not the weight of evidence in favor of a hypothesis, but evidence which distinguishes between hypotheses – diagnostic information – and disconfirms some of them.
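A small sketch with hypothetical likelihoods makes the point: evidence that is consistent with both hypotheses barely moves the odds, while genuinely diagnostic evidence shifts them hard.

```python
# Hypothetical likelihoods: why "consistent with my hypothesis" is nearly
# worthless, while diagnostic evidence actually separates hypotheses.
def likelihood_ratio(p_given_h1, p_given_h2):
    """How strongly a piece of evidence favors H1 over H2."""
    return p_given_h1 / p_given_h2

# Evidence consistent with both hypotheses (the usual case):
lr_weak = likelihood_ratio(p_given_h1=0.80, p_given_h2=0.75)
print(f"non-diagnostic cue: {lr_weak:.2f}")   # ~1.07 -- tells you almost nothing

# Diagnostic evidence: likely under H1, unlikely under H2:
lr_strong = likelihood_ratio(p_given_h1=0.80, p_given_h2=0.10)
print(f"diagnostic cue: {lr_strong:.1f}")     # 8.0 -- shifts the odds hard
# The scarce skill is spotting which of the available cues are the second kind.
```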

Diagnostic information matters. The trick is to recognize which information is diagnostic.

October 10, 2013 | Confirmation bias, Decisions, Expertise, Perception, Psychology

Summer Stretchiness: how people resist changing their view

So here we are on the brink of August, which is either a month of complete torpor in markets or one in which crises break out like thunderous summer storms. Right now, it seems calm. I’ve just got back from traveling in Europe and out west in North America, and the major issues are the same as they have been for months: when will the Fed begin tapering, and how will the market react? Will China be able to defer economic slowdown? Will Europe slip back into crisis?

All of these issues are not just matters of the data. They involve how long decision-makers can defer taking action, and how much they will react (or fail to react) to events.

In an ideal world, there would be a one-to-one relationship between clear evidence and the decisions people take as a result. In practice, that usually doesn’t happen.

The most important thing I always find I need to understand when talking to senior decision makers is how they shift and update their views. Economists and central bankers sometimes call it the “reaction function”, but they typically underestimate the issues involved.

The core of the problem is people don’t alter their views in a neat, smooth, rational way, especially when they have emotion or reputation invested in their current viewpoint.

In fact, there is often serious naivete about how people think through tough decisions. If it were simply a matter of correct analysis of the data alone, macro and market life would be much simpler. For one thing, central bankers usually think not so much about the raw data or information – payrolls, say – but about why events are unfolding that way. And data is usually consistent with a number of different explanations. It’s rare that a single data point, or even a trend, can’t be explained away.

Then there are the problems which come from simply communicating a point of view. Markets frequently misunderstand what the Fed and other policymakers are trying to convey, because they have different assumptions, and words can mean very different things to different groups. People misunderstand objectives, intentions and the trade-offs between conflicting goals.

An even deeper problem is that people see what they want to see, and expect to see, until the evidence against it is almost overwhelming. Decision-makers update their views in an uneven, misshapen way. They talk past each other. They point to different, contradictory evidence. They seek illusory consistency and ignore facts which do not match their views. They have blind spots, where they simply can’t see or recognize problems.

This is a matter of common sense and common experience. Just think of debating politics with someone of very partisan and opposite views (your cousin or co-worker, say). There is a slim-to-nil chance they will walk away convinced by anything you say.

Sometimes events do alter views, at least a little. The question is when and how.

I’d call this the “elasticity” of viewpoints, except it sounds a little too precise and a little too similar to price elasticity in standard economics.

So I call it the “stretchiness” of viewpoints and mindsets. It means one of the most important questions to ask about policy is “what evidence would change the decision-makers’ mind?” And fifty years of research into how people make decisions says people tend to get this wrong.

It’s not always what might appear to be the most important or dramatic information. It’s more often the things that shake faith in the underlying mental model which can’t be evaded or set aside.

That, of course, means you have to be very aware of what someone else’s underlying mental model and assumptions actually are. The trouble is that doesn’t come naturally to most people. Research shows people tend to be surprisingly unaware of how their own views fit together, and how they change, let alone anyone else’s.

In fact, the smarter and more able you are, the more likely you are to ingeniously find reasons to stick with your current view. The timing is wrong. Or there’s issues with the data. Or the plan hasn’t been implemented thoroughly enough. Or another set of data points to a different conclusion.

So as the big decisions loom in coming months, I’ll be asking how and why people shift their views, and where the blind spots are.

July 29, 2013 | Central Banks, Confirmation bias, Decisions, Irrational Consistency, Psychology

The Tylenol Test

There is a certain cast of mind that finds it very difficult to grasp or accept that something may be a very good idea up to a point – but not beyond it.

Some people yearn for universal rules. If the tools of economics or quantitative methods or Marxism or sociology or Kantian ethics or government regulation have any usefulness, they believe, they ought to apply to everything, always. If there is a problem, the answer is always even more of the same.

But common sense says that’s not actually true. Two Tylenol pills will cure a headache. That does not mean taking forty pills at once will be even better.

Call that the Tylenol Test. What is good in small doses can kill you in excess.

You’ll see I often talk about the limits of things here, whether it’s big data or academic economics or the impact of monetary policy. That does not mean those things – and others like them – are not useful, or valuable, or sometimes essential. I support all of them extremely strongly in the right circumstances. They can clear confusion and cure serious headaches.

It’s not a binary good/bad distinction.

Instead, it’s the right medicine at the right time in the right amount that keeps you healthy and makes you ready to seize opportunities. (That echoes the ancient Aristotelian idea of the golden mean, of course.)

If you overdose, you’re in trouble. Maybe that’s a universal rule….

May 17, 2013 | Big Data, Central Banks, Decisions, Economics, Excess, Irrational Consistency, Psychology

When the stopped clock is right

I talked in the post below about how differences over policy or macro expectations are seldom resolved by data alone. It is rare that people change their minds or expectations because of a single piece of data (or even huge volumes of data).

As the great physicist Max Planck said, in his Scientific Autobiography (a little pessimistically),

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

This is a much more general problem beyond science, as well. Here is one of the classic works about how leaders make decisions in international affairs:

Those who are right, in politics as in science, are rarely distinguished from those who are wrong by their superior ability to judge specific bits of information. ... Rather, the expectations and predispositions of those who are right have provided a closer match to the situation than those who are wrong. (Robert Jervis, Perception and Misperception in International Politics)

This is like the stopped clock problem: the clock will of course be correct twice a day even though it never changes. People who are consistently, relentlessly bearish will be proven right sometimes, when the market plunges (and vice versa). Those who claim to have predicted the great crash in 2008 were often simply lucky that their predisposition temporarily matched the situation.

But waiting to be occasionally right is not good enough if you have to make successful decisions. Instead, you need to be much more explicit about testing how viewpoints and perspectives fit together, and more rigorous about finding ways to identify blind spots and disconfirm mistaken views. It doesn’t matter how much information you gather if you don’t use it to test your assumptions and predispositions.

 

May 14, 2013 | Decisions, Irrational Consistency, Perception, Psychology

Criticism is useless without curiosity

I looked at advice by Buffett and Dalio earlier this week. Seek out criticism, they say.

It isn’t criticism for its own sake which is valuable, however. You don’t necessarily gain much from someone yelling at you or telling you you are doing everything wrong. Neither do you necessarily gain by triumphantly refuting someone else’s objections.

Instead, the trick is to be able to step outside your own perspective and see how facts could fit another explanation. It’s understanding the difference between perspectives which is the key, rather than just arguing loudly from different positions.

There’s an interesting case study in Gary Klein’s book Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making about the limits of feedback, including the ability to make sense of it or shift mental models. Klein specializes in “naturalistic” decision-making – how skilled people actually make urgent decisions in the field under pressure, rather than at leisure with spreadsheets. I mentioned one of his previous books in Alucidate’s conceptual framework here.

Doug Harrington was a highly skilled pilot who had landed F-4 aircraft on carriers hundreds of times. But he kept failing to qualify to land the A-6, despite continual feedback from the landing signal officers (LSOs) on the ship. “Veer right,” they repeatedly told him on every approach. But the feedback didn’t help him work out what was wrong. He faced the imminent end of his naval flying career or, worse, a crash into the back of the ship.

The Chief Landing Officer eventually asked Harrington how he was lining up the plane. It turned out the A-6 has a cockpit with side-by-side seats, rather than the navigator sitting behind the pilot. That slight difference in perspective threw off the pilot’s habitual way of lining up the nose of the plane against the carrier. Feedback and criticism alone didn’t help him figure out what was wrong. A small shift in perspective did.

The LSO was not a coach or a trainer. He didn’t give any lectures or offer any advice. He didn’t have to add any more feedback. What he brought was curiosity. He wanted to know why a pilot as good as Harrington was having so much trouble. He used Harrington’s response to diagnose the flaw in Harrington’s mental model. Then the LSO took the interaction another step. Instead of just telling Harrington what was wrong, the LSO found an easy way for Harrington to experience it. Harrington already suspected something might be wrong with his approach. The simple thumb demonstration was enough for Harrington to form a new mental model about how to land an A-6.

Mental models, or mindsets, are more important than criticism or argument in isolation.

It’s not just a matter of criticism, but of curiosity. I’ve always found the most successful decision-makers and traders are the ones who want to know how other people think.

 

May 8, 2013 | Books, Decisions, Mindfulness, Perception, Psychology

Confirmation bias and gun control

The media is digesting the defeat of gun control legislation in the Senate yesterday, and there is a valuable insight here. Set aside the partisan debate and what you think about the policy arguments for a moment, although for the record I’d personally be all for background checks. Why were so many people surprised the bill was defeated in the Senate?

I agree with Walter Russell Mead:

As is so often the case in American politics, those who produce MSM [mainstream media] coverage and those who rely exclusively on it for news were the last to know what was happening. We’ve seen almost nothing but optimistic and encouraging coverage of gun control efforts, ending as usual in painful failure and disillusion. Many gun control advocates and their allies in the MSM are stupefied and stunned by the votes.

This was stupidity at work; the MSM mistook its wishes and its dreams for events, and spun itself into a beautiful and comfortable cocoon. This never made sense to us; at Via Meadia we predicted again and again that gun control advocates were riding for a fall.

Small, well-funded and highly motivated blocking minorities have disproportionate power on almost all issues, on both left and right.

The point I want to make is this: it is a classic example of the root cause of so many mistaken predictions and failed decisions in general. Psychologists call it confirmation bias. Almost everyone (not just the mainstream media) has a deep tendency to see what they want to see. They get locked into their specific perspective and frame, and fail to see evidence that counts against that perspective. It is disastrous for making good decisions.

We look for evidence that confirms what we think, and explain away or ignore data points that do not fit.

If you have more at stake in a decision than just embarrassment, though, you need to be alert for confirmation bias and the pervasive tendency to take your own assumptions and perspective for granted. Seeing only what you want is the single biggest barrier to getting what you want.

 

April 18, 2013 | Confirmation bias, Decisions, Politics, Psychology

How to avoid catastrophic decisions, brought to you by fifty years of painful intelligence failures

Spare a thought for the President having to deal with two rogue bomb-making powers at once. You have secret daily briefs in the Oval Office and important advisors in uniforms. But you know from the record of post-war intelligence that you can get disastrously poor guidance.

Iraq WMDs, anyone?

Think of what you could do, as a hedge fund investor or manager, if you had virtually unlimited resources running to hundreds of billions of dollars, advanced technology unavailable to the general public, keyhole satellites observing every time your opposition so much as walked outside, the ability to use covert operatives and avoid most foreign legal restrictions, and a staff of thousands of analysts in the Virginia woods.

Yet the CIA and the rest of the intelligence community have still managed to get many if not most major events wrong in the last sixty years, from the first Iranian revolution, to Indian nuke tests, to the fall of the Soviet Union, to 9/11.

Why? It’s not lack of ability. These are smart people, sometimes very smart (and they do as well as many other experts). It’s not lack of information, either. Most studies find the most important information was clearly available.

It’s usually the assumptions and the prior mindset that are the problem.

Here is the leading postwar expert on intelligence analysis inside the CIA, Jack Davis, introducing a book-length internal report named Psychology of Intelligence Analysis by Richards Heuer. It has been declassified and you can find it here. It tries to distill the traps and pitfalls that cause intelligence assessments to go wrong.

How many times have we encountered situations in which completely plausible premises, based on solid expertise, have been used to construct a logically valid forecast–with virtually unanimous agreement–that turned out to be dead wrong? In how many of these instances have we determined, with hindsight, that the problem was not in the logic but in the fact that one of the premises–however plausible it seemed at the time–was incorrect? In how many of these instances have we been forced to admit that the erroneous premise was not empirically based but rather a conclusion developed from its own model (sometimes called an assumption)? And in how many cases was it determined after the fact that information had been available which should have provided a basis for questioning one or more premises, and that a change of the relevant premise(s) would have changed the analytic model and pointed to a different outcome?

The same applies to business and investment decisions. It’s not usually faulty information which gets people into the most trouble. It’s ignoring facts which do not correspond with the view you already have. That’s why you have to test your assumptions and perspective, which is where we come in.

April 8, 2013 | Assumptions, Confirmation bias, Decisions, Perception, Psychology, Security