Base Rates

The Right Level of Detachment

One of the main reasons decisions fail is that we tend to look for universal, across-the-board rules for what to do. Yet success most often comes from maintaining the right kind of balance between different rules or requirements. Remember, it’s usually the assumptions people make that cause them to lose money or fail.

In fact, one of the best ways to look for those blind spots and hidden assumptions is to look for the balances that have been ignored.

One of the most important of these is the balance between looking at a situation in general terms and in specific terms.

Take the last post about medical errors. A physician or surgeon has to be alert to the immediate details of a particular patient. You can’t ignore the specifics. You can’t assume distinctive individual characteristics can be inferred from the general.

This happens a lot in applied or professional situations. Good reporters, detectives, and short-term traders often focus on “just the facts, ma’am” and get impatient with anything that sounds abstract. Their eyes glaze over at anything that is not immediate and tangible. Foreign exchange traders, for example, have traditionally known next to nothing about the underlying countries or economies in a currency pair, but are keenly attuned to short-term market conditions and sentiment.

But the essence of expertise is also being able to recognize more general patterns. The most seminal investigation of expertise was Chase and Simon’s description of “chunking” in 1973. They investigated chess grand masters, and found that they organized the information about pieces on the board into broader, larger, more abstract units. Years of experience gave grand masters a larger “vocabulary” of such patterns, which they quickly recognized and used effectively. More recent work also finds that

Experts see and represent a problem in their domain at a deeper (more principled) level than novices; novices tend to represent the problem at a superficial level. (Glaser & Chi, quoted in The Cambridge Handbook of Expertise and Expert Performance, p. 50)

Indeed, one of the biggest reasons experts fail is that they do not use that more general knowledge: they get too close-up and engrossed in particular details. They show off their knowledge of irrelevant detail and “miss the forest for the trees.” They tend to believe This Time Is Different.

This is why simple linear models most often outperform experts: they weigh information in a consistent way without getting caught up in the specifics. It is also why taking what Kahneman calls the “outside view”, and paying attention to base rates, is essential in most situations, including project management. You have to be able to step back and think about what generally happens, and that requires skill in perceiving similarities, analogies, and patterns.
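As a minimal sketch of what “weighing information in a consistent way” means in practice, here is a toy linear scoring rule; the cues, weights, and project numbers are all hypothetical, invented only for illustration.

```python
# A toy illustration of the consistency point: a fixed-weight linear rule
# applies the same weights to every case. All cues and weights here are
# entirely hypothetical.

def linear_score(cues, weights):
    """Combine cue values with fixed weights, the same way every time."""
    return sum(weights[name] * cues[name] for name in weights)

# Hypothetical cues (each scaled 0-1) for judging whether a project will
# finish on time.
weights = {"past_on_time_rate": 0.5, "team_experience": 0.3, "scope_stability": 0.2}

project_a = {"past_on_time_rate": 0.4, "team_experience": 0.7, "scope_stability": 0.2}
project_b = {"past_on_time_rate": 0.9, "team_experience": 0.5, "scope_stability": 0.8}

print(round(linear_score(project_a, weights), 2))  # 0.45
print(round(linear_score(project_b, weights), 2))  # 0.76
# The rule ranks B above A every single time; an expert swayed by one
# vivid specific about project A might not.
```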

People’s mindsets often tend to push them to one extreme or the other. Too little abstraction and chunking? You get lost in specific facts and noise and don’t recognize patterns. You don’t represent the problem accurately, if you see it at all. You get too close, too emotional, too involved. Too much detachment, on the other hand, and you lose “feel” and color and tacit knowledge. You become the remote head office that knows nothing about front-line conditions, or the academic who is committed to the theory rather than the evidence. You lose the ability to perceive contrary information, or to see risks that do not fit within your theory.

You need the right balance of general and specific knowledge. But maintaining that kind of balance is incredibly hard, because people and organizations tend to get carried away in one direction or the other. Looking for signs of that can tell you what kind of mistaken assumptions or blind spots are most likely. Yet people hardly ever monitor that.

 

December 15, 2014 | Assumptions, Base Rates, Decisions, Expertise, Outside View

Look for the balances

One of the most intractable policy problems is that people have a strong tendency to ignore or deny any downsides or side effects of their actions. Instead, they prefer to talk in terms of unitary goals, universal rights, and clear, consistent principles. Much of the educational training of elites, especially economists, inclines them to deal in generalizations. Hedgehogs are temperamentally inclined to look for single overarching explanations.

In actual practice, however,

The need to adapt to particular circumstances … runs counter to our tendency to generalize and form abstract plans of action.

Dörner is the brilliant German psychologist I mentioned last week, who studies the “logic of failure” and the typical patterns of errors people make in decisions. Many of the essentials of better decision-making come down to threading your way between two opposite errors, and so maintaining a balance.

One problem is ignoring trade-offs and incompatibilities between goals:

Contradictory goals are the rule, not the exception, in complex situations.

It is also difficult to judge the right amount of information-gathering:

We combat our uncertainty either by acting hastily on the basis of minimal information, or by gathering excessive information, which inhibits action and may even increase our uncertainty. Which of these patterns we follow depends on time pressure, or the lack of it.

It is also difficult to judge the right balance of specific versus general considerations, i.e., how unique a situation is. Experts typically overestimate the unique factors (“this time it’s different”) at the expense of the general class of outcomes, or what Kahneman calls the “outside view” (“what usually happens in these situations?”). The necessities of political rhetoric and of motivating people to take action also often require politicians to ignore or downplay conflicting goals.

So in any complicated situation, judgment consists of striking many balances. One of the most useful ways to look for blind spots is, therefore, to look for the hidden or ignored balances, trade-offs, and conflicts. And one sign of that, according to Dörner, is that good decision-makers and problem solvers tend to use qualified language: “sometimes” versus “every time”, “frequently” instead of “constantly” or “only”.

Everything at its proper time and with proper attention to existing conditions. There is no universally applicable rule, no magic wand, that we can apply to every situation and to all the structures we find in the real world. Our job is to think of, and then do, the right things at the right times in the right way. There may be rules for accomplishing this, but the rules are local – they are to a large extent dictated by specific circumstances. And that means in turn that there are a great many rules. (p. 192)

And that means in turn that any competent or effective policy institution, like a central bank, cannot easily describe its reaction function in terms of clear consistent principles or rules. When it tries, markets and audiences are going to be confused and disappointed.

 

September 11, 2014 | Base Rates, Expertise, Foxes and Hedgehogs

What the Fox Knows

Paul Krugman is complaining about Nate Silver's new FiveThirtyEight venture and Silver's manifesto “What the Fox Knows”. Foxes, of course, know “many things” and are eclectic, compared to hedgehogs who know “one big thing”. Silver argues for foxes.

Not impressed, says Krugman, who also claims to be a little fox-like because he works in several economic subdisciplines. He is nonetheless highly skeptical about fox-like claims. Hedgehog-like expertise must come first.

This is important, because it affects the way you see the entire policy process and the way decisions ought to be made. How much should you trust experts when they offer policy advice? According to Krugman:

… you can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming.)

“Hard thinking” is, of course, in this case a euphemism for using standard neoclassical optimizing models in your analysis.

Krugman is essentially arguing about the importance of expertise and theory in economics. But there are three major problems.

First, Krugman assumes you gain knowledge by testing hypotheses against data, basically the Popperian ideal. Unfortunately, that's not what happens in practice. People usually just generate auxiliary hypotheses or declare surprising results “anomalous” and don't change their minds. They don't let data tell them their pet hypothesis is wrong. That's why, in practical terms, for markets it's usually more important to watch out for the ways people are not sensitive to evidence. I've learned in practice that, to make or anticipate decisions, what matters is looking at the (lack of) elasticity of people's opinions.

Second, of course economists have expertise on a variety of topics. A lot of bright people have spent decades studying particular issues. The trouble is subject expertise is always very domain-specific. Krugman himself says he has confidence knowing his way around macro data, but not health economics data or labor data.

It is clear that academic experts often stray well outside their domain in giving advice. They take the postage stamp of in-depth knowledge of their subfield and stretch it to cover an area the size of Manhattan. Krugman talks about labor markets all the time.

Academic economists also tend to be very good at developing publishable constrained-optimization models. That skill, however, is not transferable to most other domains, including the policy domain.

And unlike the structure of disciplines and academic departments, most serious policy problems and decisions involve multiple domains and systems, which require different areas of expertise.

This is why any reasonably serious decision requires multiple perspectives from different areas of expertise. Those perspectives have to be weighed and integrated by skill in making decisions, not by deeper sub-disciplinary knowledge. This is why most organizations tend to want generalists, or people with a variety of experience, to make the most important decisions.

In practice, as I was arguing the other day, most important decisions are not made primarily using theory, but by successive small steps and induction. Actual decision-makers tend to be fox-like, because they have little choice otherwise. Hedgehog-like expertise is less relevant than hedgehogs like Krugman think.

Krugman is right that expecting data to speak for itself is naive, because there will be some implicit theory behind it. But being explicit about assumptions usually doesn't help much in economics, either, because the assumptions are mostly made for mathematical tractability or elegance. The most important assumptions are usually the ones that are so obvious people barely realize they are making them, and it requires a different, outside perspective to even see them. The assumptions you make explicit are not the ones you have to worry about.

Third, there is a deeper issue with expertise involving the relationship of the general and the particular. People tend to find it very easy to go from particular instances to general features (the moral of the story, the lessons of the crisis) but harder to go from the general to the particular (statistical frequencies to the chance an airliner will disappear). This is the problem of base rates, which I've talked a lot about before. People most often tend to ignore general features of a situation.

That also applies surprisingly often to experts, who usually do know the wider picture. The reason experts usually perform so badly in predictions or practical decisions is that they ignore much of their knowledge of general features and show off their knowledge of the particulars. And even when they do apply general features, they most often do it sloppily. Similarity and analogy and metaphor are subject to all kinds of misperception and misjudgment.

The result? Experts like Krugman usually overapply their knowledge unthinkingly in the wrong ways, and gang together in guild-like fashion to attack those outside the tribe. They are usually overconfident and unaware of the limits of their expertise.

The overall upshot is that foxes tend to make better decisions because they are more likely to let evidence change their minds. So I'm sympathetic to Nate Silver's position. Krugman is largely defending his own turf, along with the other economists he cites. A PhD in Economics tends to attract people with extreme hedgehog dispositions, and marks them afterwards.

How people react to evidence is usually more of a guide to outcomes than the evidence itself.

March 19, 2014 | Assumptions, Base Rates, Decisions, Expertise, Foxes and Hedgehogs, Outside View

How to go over budget by a billion dollars and other planning catastrophes

I was talking in the last post about base rate neglect. People have a tendency to seize on vivid particular features of a situation to the exclusion of general features. Journalists are particularly prone to this problem, because they have to find vivid details to sell newspapers.

Let’s take a (literally) concrete example: infrastructure spending. Over $2 trillion of extra infrastructure stimulus spending has been added since the global crisis broke out in 2008, including massive fiscal injections in the US, China and India. So if there are widespread decision errors of even 1% or 5% in this area, the consequences easily run into tens of billions of dollars.
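To make the arithmetic explicit, here is the calculation behind “tens of billions”, using the rough $2 trillion figure above and the purely illustrative 1% and 5% error rates:

```python
# Rough arithmetic behind "tens of billions": apply small error rates
# to roughly $2 trillion of stimulus-era infrastructure spending.
stimulus_spending = 2_000_000_000_000  # ~$2 trillion, the figure cited above

for error_rate in (0.01, 0.05):
    wasted = stimulus_spending * error_rate
    print(f"{error_rate:.0%} decision error -> ${wasted / 1e9:.0f} billion")
# 1% -> $20 billion, 5% -> $100 billion
```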

In fact, infrastructure planners massively underestimate the cost of major projects, often by more than 50%. The average cost overrun on major rail projects is 44%, according to recent research:

The dataset .. shows cost overrun in 258 projects in 20 nations on five continents. All projects for which data were obtainable were included in the study. For rail, average cost overrun is 44.7 per cent measured in constant prices from the build decision. For bridges and tunnels, the equivalent figure is 33.8 per cent, and for roads 20.4 per cent. ..

  • nine out of 10 projects have cost overrun;
  • overrun is found across the 20 nations and five continents covered by the study;
  • overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.

And planners hugely overestimate the benefits of major projects.

For rail, actual passenger traffic is 51.4 per cent lower than estimated traffic on average. This is equivalent to an average overestimate in rail passenger forecasts of no less than 105.6 per cent. ..

The following observations hold for traffic demand forecasts:

  • 84 per cent of rail passenger forecasts are wrong by more than ±20 per cent
  • nine out of 10 rail projects have overestimated traffic;
  •  50 per cent of road traffic forecasts are wrong by more than ±20 per cent;

The data is from a 2009 paper in the Oxford Review of Economic Policy by Bent Flyvbjerg: Survival of the unfittest: why the worst infrastructure gets built—and what we can do about it.

Just consider for a moment what the data mean for the way policy decisions are taken. These are some of the biggest decisions politicians and planners make, and they are systematically biased and inaccurate. Massive cost overruns are the norm.

The outcome is not just billions of dollars wasted, but, as Flyvbjerg argues, the worst projects with the most serious judgmental errors are the ones that tend to get built. This is pervasive in almost every country in the world.

Promoters overestimate benefits and underestimate costs in order to get approval. Of course they do, you might say. People are often overconfident and oversell things. But this does not get taken into account in the decisions. Recall again: “overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.”

What can be done about it? According to Flyvbjerg,

If project managers genuinely consider it important to get estimates of costs, benefits, and risks right, it is recommended they use a promising new method called ‘reference class forecasting’ to reduce inaccuracy and bias. This method was originally developed to compensate for the type of cognitive bias in human forecasting that Princeton psychologist Daniel Kahneman found in his Nobel prize-winning work on bias in economic forecasting (Kahneman, 1994; Kahneman and Tversky, 1979). Reference class forecasting has proven more accurate than conventional forecasting. It was used in project management in practice for the first time in 2004 (Flyvbjerg and COWI, 2004); in 2005 the method was officially  endorsed by the American Planning Association (2005); and since then it has been used by governments and private companies in the UK, the Netherlands, Denmark, Switzerland, Australia, and South Africa, among others.

I’ve talked about Kahneman’s outside view before. As you can see, it’s just another way of saying: don’t ignore the base rate. Adjust your estimates with reference to the general class of things you are looking at, not just the particulars of the specific situation. It’s seldom easy: another name for this problem is “the planning fallacy”, and it applies to just about any major project (including starting a company!).
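As a rough sketch of what that adjustment looks like in practice, here is reference class forecasting reduced to a few lines of code. The reference-class overrun figures and the 80th-percentile risk choice below are assumptions made up for illustration, not values from the quoted study.

```python
# A bare-bones version of reference class forecasting: instead of refining
# the project's own "inside view" estimate, look at how wrong the estimates
# for a class of similar past projects turned out to be, and uplift
# accordingly. The overrun figures and percentile choice are illustrative
# assumptions only.

import statistics

# Hypothetical reference class: cost overruns (as fractions of budget)
# on comparable past projects.
reference_overruns = [0.10, 0.25, 0.30, 0.45, 0.50, 0.60, 0.85, 1.10]

def reference_class_forecast(inside_view_cost, overruns, percentile=0.8):
    """Uplift an inside-view estimate by the chosen percentile of past overruns."""
    ordered = sorted(overruns)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return inside_view_cost * (1 + ordered[idx])

inside_view = 1_000_000_000  # the planners' own $1bn estimate
print(f"mean overrun in class: {statistics.mean(reference_overruns):.0%}")
print(f"budget to actually plan for: ${reference_class_forecast(inside_view, reference_overruns) / 1e9:.2f}bn")
```

The crucial move is that the uplift comes from the distribution of outcomes for the class, not from anything specific to this project.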

Here’s what to take away from this: this one particular blind spot is one of the most serious errors in public policy decision-making, with billions of dollars wasted every year.

But it’s not a matter of having to be pinpoint accurate about future states of the world, or getting a license for prophecy. Instead, it’s a matter of observing how people typically behave and what generally happens already. Observation matters more than speculation about the future.

We need less forecasting, and more recognition. That’s what Alucidate is set up to do.

January 7, 2014 | Base Rates, Decisions, Expertise, Outside View, Risk Management, Uncategorized

When More Information Makes You Worse Off

How can more information make you worse off? The human mind has a tendency to jump to conclusions based on even very small amounts of irrelevant evidence. In fact, giving people obviously worthless but innocent additional information can throw their judgment off.

Let’s take an experimental example. One of the most important things ever written about decisions and blind spots is the collection of papers Judgment under Uncertainty: Heuristics and Biases, edited by Kahneman, Tversky and Slovic in 1982. You can’t get further than the second page of the first article (p. 4) before you come to this example.

Kahneman and Tversky ran an experiment to investigate the effect of base rates on prediction. They showed subjects some brief personality descriptions, supposedly chosen at random from a group of 100 engineers and lawyers. In one run, they told subjects the group had 30 engineers and 70 lawyers. In another run, they said it had 70 engineers and 30 lawyers. The experimental victims were asked to estimate the chance that each of the chosen descriptions belonged to an engineer or a lawyer.

If the subjects were only told that 70% of the group were lawyers, they correctly estimated the chance a person was a lawyer at 70%. But if they were given any information about personality, the subjects simply ignored the information about the proportions of the sample. They judged the case almost completely on whether the personality description matched the stereotype of an engineer or a lawyer. (This is an example of representativeness bias. A correct Bayesian estimate should still incorporate the prior probabilities.)
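For what it’s worth, here is the calculation a consistent Bayesian subject would be doing, as a minimal sketch; the likelihood values for a stereotypically “engineer-sounding” description are hypothetical.

```python
# A minimal Bayesian sketch of the engineer/lawyer task. The likelihood
# values are hypothetical; the point is only how the prior should enter.

def posterior_engineer(prior_engineer, p_desc_if_engineer, p_desc_if_lawyer):
    """P(engineer | description) by Bayes' rule."""
    num = p_desc_if_engineer * prior_engineer
    den = num + p_desc_if_lawyer * (1 - prior_engineer)
    return num / den

# A description that fits the engineer stereotype, assumed 4x more likely
# to come from an engineer than from a lawyer:
print(round(posterior_engineer(0.30, 0.8, 0.2), 2))  # ~0.63, not 0.8: the prior still matters
print(round(posterior_engineer(0.70, 0.8, 0.2), 2))  # ~0.90 with the other group composition

# "Dick": worthless evidence, equally likely for either profession.
print(round(posterior_engineer(0.30, 0.5, 0.5), 2))  # 0.30: the posterior should equal the
                                                     # prior, not the 0.5 the subjects gave
```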

But even worse, when people were given obviously worthless information, they still ignored the base rates altogether. Subjects were told:

“Dick is a 30 year old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.”

According to the researchers,

This description was intended to convey no information relevant to the question of whether Dick is an engineer or a lawyer. .. The subjects, however, judged the probability of Dick being an engineer to be 0.5 regardless of whether the stated proportion of engineers in the group was .7 or .3. Evidently, people respond differently when given no evidence and when given worthless evidence. When no specific evidence is given, prior probabilities are properly utilized; when worthless evidence is given, prior probabilities are ignored.

What does this tell us? Base rate neglect is one of the most dangerous blind spots in decisions. And people will seize on invalid and misleading specific information given half a chance, especially if it is consistent with stereotypes or prior expectations. Even just giving people benign but irrelevant information can make them lose track of base rates or other key features of a situation.

People have very serious blind spots when they have specific, particular information, like when they have just interviewed a primary source.

There’s an additional 515 pages of potential problems following that example in this book alone.

This discipline of looking at heuristics and biases is not the whole story. Some people have come to grief applying it in a formulaic way. Its great virtue – and flaw – is that it is mostly based on lab experiments, where subjects often have deliberately limited relevant knowledge. The field has a tendency to want to replace human decision-makers with statistical models altogether. There are plenty of other essentials, like looking at the limits of expertise in real situations. But the Kahneman and Tversky Judgment and Decision Making (“JDM”) tradition is an essential starting place.

January 5, 2014 | Base Rates, Decisions, Outside View, Psychology

The Trouble with “Primary Sources”

Every profession has its blind spots. Often they arise from things which are a good thing in moderation, but which turn into half-baked mush when overapplied. Journalism is no exception.

The journalistic notion of “primary sources” is one of those things which contain a grain of truth. The tragedy is that it is also surrounded by so many potential sources of error and confusion that reliance on primary sources can actually make you much worse off than before.

Let’s take a very obvious example. If you go to any gym in America in the next week or two, of course you will find dozens of people who have made new year’s resolutions to get more exercise.  Choose one of them.  She tells you she’s going to get fit, and is going to be at the gym three times a week for the rest of the year. Choose three or four more. They say they are determined to lose at least ten or twenty pounds. They are going to stick to the new habit. This time it’s going to be different.

How much do you trust what they say? Let’s say you’re staking your livelihood on getting that call right. Let’s say to be really sure, you interview another hundred people, at length,  and 90% of them say they are completely committed to going to the gym three times a week all year.

You’ve now interviewed not just one, but a hundred and five primary sources, and almost all of them say they are going to get fit and go the gym three times a week. Congratulations! Brilliant work! Based on absolutely hard primary sources, and immense amounts of work, you now know the real story about what is going to happen.

Except you’re not feeling too good about this supposedly trustworthy, solid, eyewitness, consistent evidence at all. Why? If primary sources are the guarantee of accuracy, you ought to be extremely confident in saying America is going to be a lot less obese this time next year.

This is one of the rare cases in which one of the deepest problems in making and predicting decisions is more obvious than usual. Everyone knows that most people tend to slack off going to the gym after an initial burst of enthusiasm. It’s so obvious, it’s a cliche at this time of year. So you’re less inclined to trust what people tell you, even when they are being entirely sincere and not trying to mislead you.

In other words, the right way to call this case is not just to meekly listen to what people say. Instead, you need to pay more attention to the base rate – the prior probability of the class of things you are looking at.
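As a back-of-the-envelope illustration (the adherence rate below is an assumption made up for the example, not a measured figure):

```python
# Applying a base rate to the gym example: what the 105 sincere interviews
# are actually worth. The 20% adherence rate is an assumed figure for
# illustration only.
base_rate_sticks_with_it = 0.20   # assumed share of resolvers still going at year end
said_they_would = 105             # the 105 primary sources who said they would

expected_still_going = base_rate_sticks_with_it * said_they_would
print(round(expected_still_going))  # about 21 of 105, despite near-unanimous testimony
```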

 

Dog Bites Man

Base-rate neglect is one of the most pervasive and serious blind spots of all in decision-making. People find it extremely difficult to get it right in cases which are less obvious than new year’s resolutions. It’s at the very root of Kahneman and Tversky’s research, which later led to behavioral economics, and is closely related to other major sources of error, such as representativeness bias and sample bias.

The trouble is that journalists are particularly inclined by their training and experience to ignore base rates. In fact, journalistic professional beliefs about “primary sources” make many of them overconfident and prone to error. Everyone wants to have the special “Deep Throat” source that exposes the next Watergate. Base rates – the usual, non-newsworthy cases – do not get your byline on the front page. By definition, a good story is “man bites dog.” “Dog bites man” is the base rate.

It’s not that “primary sourcing” is in itself bad. Everyone knows second- or third-hand information is usually more prone to error. I’ve talked to hundreds of very senior primary sources in a range of countries over the years. Discussions are highly useful, but for different reasons than many reporters would recognize.

A single-minded focus on primary sourcing often completely overshadows all the other potential problems with making and calling decisions. And many journalists are also very prone to falling for some of the worst of them, especially representativeness bias and availability bias. That is why senior decision-makers do not stake their careers on what they read in the newspapers.

This is just one of the extremely dangerous blind spots which surround “primary sources.” I’ll look at some of the others in coming posts.

January 3, 2014 | Assumptions, Base Rates, Decisions, Outside View