
The Right Level of Detachment

One of the keys to why decisions fail is that we tend to look for universal, across-the-board rules for what to do. Yet success most often comes from maintaining the right kind of balance between different rules or requirements. Remember, it’s usually the assumptions people make that cause them to lose money or fail.

In fact, one of the best ways to look for those blind spots and hidden assumptions is to look for the balances that have been ignored.

One of the most important of these is the right balance between looking at a situation in general and specific terms.

Take the last post about medical errors. A physician or surgeon has to be alert to the immediate details of a particular patient. You can’t ignore the specifics. You can’t assume distinctive individual characteristics can simply be inferred from the general.

This happens a lot in applied or professional situations. Good reporters or detectives or short-term traders often tend to focus on “just the facts, ma’am”, and get impatient with anything that sounds abstract. Their eyes glaze over at anything that is not immediate and tangible. Foreign exchange traders have traditionally known next to nothing about the underlying countries or economics in a currency pair, but are very attuned to short-term market conditions and sentiment.

But the essence of expertise is also being able to recognize more general patterns. The most seminal investigation of expertise was Chase and Simon’s description of “chunking” in 1973. They investigated chess grand masters, and found that they organized the information about pieces on the board into broader, larger, more abstract units. Years of experience gave grand masters a larger “vocabulary” of such patterns, which they quickly recognized and used effectively. More recent work also finds that

Experts see and represent a problem in their domain at a deeper (more principled) level than novices; novices tend to represent the problem at a superficial level. (Glaser & Chi, quoted on p. 50 of The Cambridge Handbook of Expertise and Expert Performance)

Indeed, one of the biggest reasons experts fail is that they fail to use that more general knowledge: they get too close-up and engrossed in the particular details. They show off their knowledge of irrelevant detail and “miss the forest for the trees.” They tend to believe This Time Is Different.

This is why simple linear models most often outperform experts: the models weigh information in a consistent way without getting caught up in the specifics. It is also why taking what Kahneman calls the “outside view” and paying attention to base rates are essential in most situations, including project management. You have to be able to step back and think about what generally happens, and that requires skill in perceiving similarities, analogies, and patterns.
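
To make the point concrete, here is a minimal sketch of the kind of “improper” linear model the research has in mind. The cue names, weights and scores are hypothetical, purely for illustration; the only point is that the same cues get the same weights every time.

    # A crude linear scoring model: the same cues, weighted the same way,
    # for every case. Cue names and numbers are made up for illustration.

    def linear_score(cues, weights):
        """Score a case as a weighted sum of (roughly standardized) cues."""
        return sum(weights[name] * value for name, value in cues.items())

    # Equal ("unit") weights: no clever tuning, just consistency.
    weights = {"base_rate": 1.0, "track_record": 1.0, "current_specifics": 1.0}

    case_a = {"base_rate": -0.5, "track_record": 0.2, "current_specifics": 1.5}
    case_b = {"base_rate": 0.8, "track_record": 0.6, "current_specifics": -0.3}

    for label, cues in [("A", case_a), ("B", case_b)]:
        print(label, round(linear_score(cues, weights), 2))

An expert may know far more than three cues, but will rarely apply that knowledge this consistently from one case to the next.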

People’s mindsets often tend to push them to one extreme or the other. Too little abstraction and chunking? You get lost in specific facts and noise and don’t recognize patterns. You don’t represent the problem accurately, if you see it at all. You get too close, too emotional, too involved. Too much detachment, on the other hand, and you lose “feel” and color and tacit knowledge. You become the remote head office that knows nothing about front-line conditions, or the academic who is committed to the theory rather than the evidence. You lose the ability to perceive contrary information, or to see risks that do not fit your theory.

You need the right balance of general and specific knowledge. But maintaining that kind of balance is incredibly hard, because people and organizations tend to get carried away in one direction or the other. Looking for signs of that can tell you what kind of mistaken assumptions or blind spots are most likely. Yet people hardly ever monitor that.

 

December 15, 2014 | Assumptions, Base Rates, Decisions, Expertise, Outside View

What the Fox Knows

Paul Krugman is complaining about Nate Silver's new FiveThirtyEight venture and Silver's manifesto “What the Fox Knows”. Foxes, of course, know “many things” and are eclectic, compared to hedgehogs who know “one big thing”. Silver argues for foxes.

Not impressed, says Krugman, who also claims to be a little fox-like because he works in several economic subdisciplines. He is nonetheless highly skeptical about fox-like claims. Hedgehog-like expertise must come first.

This is important, because it affects the way you see the entire policy process and the way decisions ought to be made. How much should you trust experts when they offer policy advice? According to Krugman:

… you can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming.)

“Hard thinking” is, of course, in this case a euphemism for using standard neoclassical optimizing models in your analysis.

Krugman is essentially arguing for the importance of expertise and theory in economics. But there are three major problems.

First, Krugman assumes you gain knowledge by testing hypotheses against data, basically the Popperian ideal. Unfortunately, that's not what happens in practice. People usually just generate auxiliary hypotheses or declare surprising results “anomalous” and don't change their minds. They don't let data tell them their pet hypothesis is wrong. That's why, in practical terms, for markets it's usually more important to watch out for the ways people are not sensitive to evidence. I've learned that in practice, in order to make or anticipate decisions, what matters is the (lack of) elasticity of people's opinions.

Second, of course economists have expertise on a variety of topics. A lot of bright people have spent decades studying particular issues. The trouble is that subject expertise is always very domain-specific. Krugman himself says he is confident he knows his way around macro data, but not health economics data or labor data.

It is clear that academic experts often stray well outside their domain in giving advice. They take the postage stamp of in-depth knowledge of their subfield and stretch it to cover an area the size of Manhattan. Krugman talks about labor markets all the time.

Academic economists also tend to be very good at developing publishable constrained-optimization models. That skill, however, is not transferable to most other domains, including the policy domain.

And unlike the structure of disciplines and academic departments, most serious policy problems and decisions involve multiple domains and systems, which require different areas of expertise.

This is why any reasonably serious decision requires multiple perspectives from different areas of expertise. Those perspectives have to be weighed and integrated by skill in making decisions, not by sub-disciplinary knowledge alone. This is why most organizations tend to want generalists, or people with a variety of experience, to make the most important decisions.

In practice, as I was arguing the other day, most important decisions are not made primarily using theory, but by successive small steps and induction. Actual decision-makers tend to be fox-like, because they have little choice otherwise. Hedgehog-like expertise is less relevant than hedgehogs like Krugman think.

Krugman is right that expecting data to speak for itself is naive, because there will be some implicit theory behind it. But being explicit about assumptions usually doesn't help much in economics, either, because the assumptions are mostly made for mathematical tractability or elegance. The most important assumptions are usually the ones that are so obvious people barely realize they are making them, and it requires a different, outside perspective to even see them. The assumptions you make explicit are not the ones you have to worry about.

Third, there is a deeper issue with expertise involving the relationship of the general and the particular. People tend to find it very easy to go from particular instances to general features (the moral of the story, the lessons of the crisis) but harder to go from the general to the particular (statistical frequencies to the chance an airliner will disappear). This is the problem of base rates, which I've talked a lot about before. People most often tend to ignore general features of a situation.

That also applies surprisingly often to experts, who usually do know the general features. The reason experts usually perform so badly in prediction and practical decisions is that they ignore much of their knowledge of general features and show off their knowledge of the particulars. And even when they do apply general features, they most often do it sloppily. Similarity and analogy and metaphor are subject to all kinds of misperception and misjudgment.

The result? Experts like Krugman usually overapply their knowledge in the wrong ways, and gang together in a guild-like way to attack those outside the tribe. They are usually overconfident and unaware of the limits of their expertise.

The overall upshot is that foxes tend to make better decisions because they are more likely to allow evidence to change their minds. So I'm sympathetic to Nate Silver's position. Krugman, along with the other economists he cites, is largely defending his own turf. A PhD in economics tends to attract people with extreme hedgehog dispositions, and to mark them further afterwards.

How people react to evidence is usually more of a guide to outcomes than the evidence itself.

March 19, 2014 | Assumptions, Base Rates, Decisions, Expertise, Foxes and Hedgehogs, Outside View

How to go over budget by a billion dollars and other planning catastrophes

I was talking in the last post about base rate neglect. People have a tendency to seize on vivid particular features of a situation to the exclusion of general features. Journalists are particularly prone to this problem, because they have to find vivid details to sell newspapers.

Let’s take a (literally) concrete example: infrastructure spending. Over $2 trillion of extra infrastructure stimulus spending has been added since the global crisis broke out in 2008, including massive fiscal injections in the US, China and India. So if there are widespread decision errors of even 1% or 5% in this area, the consequences easily run into tens of billions of dollars.

In fact, infrastructure planners massively underestimate the cost of major projects, often by more than 50%. The average cost overrun on major rail projects is 44%, according to recent research:

The dataset .. shows cost overrun in 258 projects in 20 nations on five continents. All projects for which data were obtainable were included in the study. For rail, average cost overrun is 44.7 per cent measured in constant prices from the build decision. For bridges and tunnels, the equivalent figure is 33.8 per cent, and for roads 20.4 per cent. ..

  • nine out of 10 projects have cost overrun;
  • overrun is found across the 20 nations and five continents covered by the study;
  • overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.

And planners  hugely overestimate the benefits of major projects.

For rail, actual passenger traffic is 51.4 per cent lower than estimated traffic on average. This is equivalent to an average overestimate in rail passenger forecasts of no less than 105.6 per cent. ..

The following observations hold for traffic demand forecasts:

  • 84 per cent of rail passenger forecasts are wrong by more than ±20 per cent
  • nine out of 10 rail projects have overestimated traffic;
  •  50 per cent of road traffic forecasts are wrong by more than ±20 per cent;

The data is from a 2009 paper in the Oxford Review of Economic Policy by Bent Flyvbjerg: Survival of the unfittest: why the worst infrastructure gets built—and what we can do about it.

Just consider for a moment what the data mean for the way policy decisions are taken. These are some of the biggest decisions politicians and planners make, and they are systematically biased and inaccurate. Massive cost overruns are the norm.

The outcome is not just billions of dollars wasted, but, as Flyvbjerg argues, the worst projects with the most serious judgmental errors are the ones that tend to get built. This is pervasive in almost every country in the world.

Promoters overestimate benefits and underestimate costs in order to get approval. Of course they do, you might say. People are often overconfident and oversell things. But this does not get taken into account in the decisions. Recall again: “overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.”

What can be done about it? According to Flyvbjerg,

If project managers genuinely consider it important to get estimates of costs, benefits, and risks right, it is recommended they use a promising new method called ‘reference class forecasting’ to reduce inaccuracy and bias. This method was originally developed to compensate for the type of cognitive bias in human forecasting that Princeton psychologist Daniel Kahneman found in his Nobel prize-winning work on bias in economic forecasting (Kahneman, 1994; Kahneman and Tversky, 1979). Reference class forecasting has proven more accurate than conventional forecasting. It was used in project management in practice for the first time in 2004 (Flyvbjerg and COWI, 2004); in 2005 the method was officially  endorsed by the American Planning Association (2005); and since then it has been used by governments and private companies in the UK, the Netherlands, Denmark, Switzerland, Australia, and South Africa, among others.

I’ve talked about Kahneman’s outside view before. As you can see, it’s just another way of saying: don’t ignore the base rate. Adjust your estimates with reference to the general class of things you are looking at, not just the specific particulars of a situation. It’s seldom easy: another way to describe this is “the planning fallacy”, and it applies to just about any major project (including starting a company!).
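
Mechanically, the adjustment is simple. Here is a rough sketch of the reference-class idea, using a hypothetical distribution of past overruns rather than Flyvbjerg's actual dataset: take your inside-view estimate, then budget at the median, or a higher percentile, of what similar projects actually did.

    # Reference-class sketch: adjust an "inside view" cost estimate using past
    # overruns on comparable projects. The overrun figures are hypothetical.

    import statistics

    past_overruns = [0.10, 0.15, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.60, 0.80]
    inside_view = 1_000  # planner's own estimate, in $ millions

    median_overrun = statistics.median(past_overruns)
    p80_overrun = sorted(past_overruns)[int(0.8 * len(past_overruns)) - 1]  # rough 80th percentile

    print(f"Median-adjusted estimate: {inside_view * (1 + median_overrun):,.0f}")
    print(f"80th-percentile budget:   {inside_view * (1 + p80_overrun):,.0f}")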

Here’s the takeaway: this one blind spot is one of the most serious errors in public policy decision-making, with billions of dollars wasted every year.

But it’s not a matter of having to be pinpoint accurate about future states of the world, or getting a license for prophecy. Instead, it’s a matter of observing how people typically behave and what generally happens already. Observation matters more than speculation about the future.

We need less forecasting, and more recognition. That’s what Alucidate is set up to do.

January 7, 2014 | Base Rates, Decisions, Expertise, Outside View, Risk Management

When More Information Makes You Worse Off

How can more information make you worse off? The human mind has a tendency to jump to conclusions based on even very small amounts of irrelevant evidence. In fact, giving people obviously worthless but innocent additional information can throw their judgment off.

Let’s take an experimental example. One of the most important things ever written about decisions and blind spots is the collection of papers Judgment under Uncertainty: Heuristics and Biases, edited by Kahneman, Slovic and Tversky in 1982. You can’t get further than the second page of the first article (p. 4) before you come to this example.

Kahneman and Tversky ran an experiment to investigate the effect of base rates on prediction. They showed subjects brief personality descriptions, supposedly chosen at random from a group of 100 engineers and lawyers. In one run, they told subjects the group had 30 engineers and 70 lawyers. In another run, they said it had 70 engineers and 30 lawyers. The experimental victims were asked to estimate the chances that each of the chosen descriptions was of an engineer or a lawyer.

If the subjects were told only that 70% of the group were lawyers, they correctly estimated the chances a person was a lawyer at 70%. But if they were given any information about personality, the subjects simply ignored the information about the proportions of the sample. They judged the case almost completely on whether the personality description matched the stereotype of an engineer or a lawyer. (This is an example of representativeness bias. A correct Bayesian estimate should still incorporate the prior probabilities.)
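
To see what “incorporate the prior” means in this setup, here is a back-of-envelope Bayesian calculation. The likelihood ratio is made up for illustration (it is not from the paper); the point is that a 30% base rate should still pull the answer well below a pure stereotype match.

    # Combine the base rate of engineers with how "engineer-like" the
    # description sounds. The 4x likelihood ratio is a made-up illustration.

    def posterior(prior_engineer, likelihood_ratio):
        """P(engineer | description), via prior odds times the likelihood ratio."""
        prior_odds = prior_engineer / (1 - prior_engineer)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Description judged 4x more likely to describe an engineer than a lawyer:
    print(round(posterior(0.30, 4.0), 2))  # ~0.63 when engineers are 30% of the group
    print(round(posterior(0.70, 4.0), 2))  # ~0.90 when engineers are 70% of the group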

But even worse, when people were given obviously worthless information, they still  ignored the base rates altogether. Subjects were told:

“Dick is a 30 year old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues. “

According to the researchers,

This description was intended to convey no information relevant to the question of whether Dick is an engineer or a lawyer. ..The subjects, however, judged the probability of Dick being an engineer to be 0.5 regardless of whether the stated proportion of engineers in the group was .7 or .3. Evidently, people respond differently when given no evidence and when given worthless evidence. When no specific evidence is given, prior probabilities are properly utilized; when worthless evidence is given, prior probabilities are ignored.

What does this tell us? Base rate neglect is one of the most dangerous blind spots in decisions. And people will seize on invalid and misleading specific information given half a chance, especially if it is consistent with stereotypes or prior expectations. Even giving people benign but irrelevant information can make them lose track of base rates or other key features of a situation.

People have very serious blind spots when they have specific, particular information, like when they have just interviewed a primary source.

There’s an additional 515 pages of potential problems following that example in this book alone.

This discipline of looking at heuristics and biases is not the whole story. Some people have come to grief applying it in a formulaic way. Its great virtue – and flaw – is that it is mostly based on lab experiments, where subjects most often have deliberately limited relevant knowledge. The field also has a tendency to want to replace human decision-makers with statistical models altogether. There are plenty of other essentials, like looking at the limits of expertise in real situations. But the Kahneman and Tversky Judgment and Decision-Making (“JDM”) tradition is an essential starting place.


January 5, 2014 | Base Rates, Decisions, Outside View, Psychology

The Trouble with “Primary Sources”

Every profession has its blind spots. Often they arise from practices that are a good thing in moderation, but turn into half-baked mush when overapplied. Journalism is no exception.

The journalistic notion of “primary sources” is one of those things that contain a grain of truth. The tragedy is that it is also surrounded by so many potential sources of error and confusion that reliance on primary sources can actually make you much worse off than before.

Let’s take a very obvious example. If you go to any gym in America in the next week or two, of course you will find dozens of people who have made new year’s resolutions to get more exercise.  Choose one of them.  She tells you she’s going to get fit, and is going to be at the gym three times a week for the rest of the year. Choose three or four more. They say they are determined to lose at least ten or twenty pounds. They are going to stick to the new habit. This time it’s going to be different.

How much do you trust what they say? Let’s say you’re staking your livelihood on getting that call right. Let’s say to be really sure, you interview another hundred people, at length,  and 90% of them say they are completely committed to going to the gym three times a week all year.

You’ve now interviewed not just one, but a hundred and five primary sources, and almost all of them say they are going to get fit and go the gym three times a week. Congratulations! Brilliant work! Based on absolutely hard primary sources, and immense amounts of work, you now know the real story about what is going to happen.

Except you’re not feeling too good about this supposedly trustworthy, solid, eyewitness, consistent evidence at all. Why? If primary sources are the guarantee of accuracy, you ought to be extremely confident in saying America is going to be a lot less obese this time next year.

This is one of the rare cases in which one of the deepest problems in making and predicting decisions is more obvious than usual. Everyone knows that most people tend to slack off going to the gym after an initial burst of enthusiasm. It’s so obvious, it’s a cliche at this time of year. So you’re less inclined to trust what people tell you, even when they are being entirely sincere and not trying to mislead you.

In other words, the right way to call this case is not just to meekly listen to what people say. Instead, you need to pay more attention to the base rate – the prior probability for the class of things you are looking at.

 

Dog Bites Man

Base-rate neglect is one of the most pervasive and serious blind spots of all in decision-making. People find it extremely difficult to get it right in cases which are less obvious than new year’s resolutions. It’s at the very root of Kahneman and Tversky’s research, which later led to behavioral economics, and is closely related to other major sources of error, such as representativeness bias and sample bias.

The trouble is that journalists are particularly inclined by their training and experience to ignore base rates. In fact, journalistic professional beliefs about “primary sources” make many of them overconfident and prone to error. Everyone wants the special “Deep Throat” source that exposes the next Watergate. Base rates – the usual, non-newsworthy case – do not get your byline on the front page. By definition, a good story is “man bites dog.” “Dog bites man” is the base rate.

It’s not that “primary sourcing” is bad in itself. Everyone knows second- or third-hand information is usually more prone to error. I’ve talked to hundreds of very senior primary sources in a range of countries over the years. Discussions are highly useful, but for different reasons than many reporters would recognize.

A single-minded focus on primary sourcing often completely overshadows all the other potential problems with making and calling decisions. And many journalists are also very prone to falling for some of the worst of them, especially representativeness bias and availability bias. That is why senior decision-makers do not stake their careers on what they read in the newspapers.

This is also just one of the extremely dangerous blind spots that surround “primary sources.” I’ll look at some of the others in coming posts.


January 3, 2014 | Assumptions, Base Rates, Decisions, Outside View

Raise the champagne (but not until you learn the lessons of 2013)

We’re all one year older as we reach the end of 2013, but are we one year wiser?  The end of the year is a time to look back and see if there are any ways to do things better next year.

For the hedge fund industry, there just have to be new approaches to sustain long-term survival. The industry had a very bad year in 2013, at least measured by investment returns. The average “smart money” hedge fund made 8.2% and charged 200bp for that positive performance. But if you put your money in the dumbest, cheapest global equity index fund, you made almost 21%, and got charged less than a tenth of the fees for it.

So it is essential to learn from the experience of a rough year for the industry. Clearly, it’s difficult to improve or turn things round without trying to learn from outcomes and mistakes.

The problem is that it is also often extremely difficult to learn from experience, for a range of different reasons. There are multiple serious blind spots in individual and organizational learning.

One bedrock theme for me is that insight about macro policy, like the Fed, or about market behavior and opportunities for outperformance, is basically about sensitivity to evidence. People’s assimilation of and “stretchiness” in response to evidence and events is actually more important than the objective underlying evidence itself. So paying very close attention to how and when and why people learn from experience is essential.

The most obvious issue is that people often just prefer to move on to the next thing. They are reluctant to review outcomes or try to learn from them at all. Raise the champagne anyway. 2013 is already history. At least next year might be better. (It ought to be a little bit better, just because of regression to the mean.)
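
As an aside, regression to the mean here is not mysterious. A toy simulation with invented numbers shows it: if returns are mostly luck layered on small true differences in skill, last year’s worst funds will on average look better next year without anyone having learned anything.

    # Toy regression-to-the-mean simulation (all numbers invented).
    import random

    random.seed(1)
    n = 500
    skill = [random.gauss(0.05, 0.01) for _ in range(n)]       # small true differences
    year1 = [s + random.gauss(0.0, 0.10) for s in skill]       # skill plus a lot of luck
    year2 = [s + random.gauss(0.0, 0.10) for s in skill]       # fresh luck, same skill

    bottom = sorted(range(n), key=lambda i: year1[i])[: n // 10]   # worst decile in year 1
    avg1 = sum(year1[i] for i in bottom) / len(bottom)
    avg2 = sum(year2[i] for i in bottom) / len(bottom)
    print(f"Worst decile: year 1 average {avg1:.1%}, year 2 average {avg2:.1%}")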

Markets have remarkably short memories. Policymakers have remarkably selective ones.

The Person and the Situation

But here’s one deeper problem. Who or what do you blame for bad outcomes? If something goes wrong (like horrendous investment returns or policy errors), do you blame the person or the situation? There are some frequent deep distortions here, so much so that social psychologists call it the “fundamental attribution error” (see the classic analysis by Nisbett and Ross in Human Inference: Strategies and Shortcomings of Social Judgment).

People pay too much attention to the constraints and headwinds in their situation when it comes to explaining their  own success or failure.

So if a fund manager has a bad year, it’s because of the situation. Blame the Fed and Congress and the tough challenges in the markets. You can expect to read variations on that in a lot of fund manager reports to their clients on 2013. This also means you don’t have to think much about potential mistakes or errors you might have made.  If you don’t recognize mistakes, you can’t correct them.

But if you had a good year, it’s all because of your own talent and skill and effort. This also means you don’t have to think much about the drivers of the situation or the prior likelihood of success.

In sharp contrast, people also believe that other people’s success or failure is almost entirely due to their personal qualities or dispositions, and not their situation. We pay too little attention to situational factors in other people’s decisions. You will read plenty of variations on that in comments on Bernanke’s tenure at the Fed when he steps down in a few weeks. It will be all about Bernanke’s skills and judgments, with less attention to the situation in which he found himself. Hedge fund clients will be much less sympathetic to claims that poor performance was because of a tough general situation.

It pays to be aware of when and why people invoke the person or the situation as explanations for outcomes. And it is critical to observe signs of how they learn and adapt.


When looking at “bias” is not enough

How do people miss the crucial factors that can destroy their companies, or projects, or policies?  One important answer is bias, which includes problems like the availability heuristic  or anchoring. Those are analyzed by the Judgment and Decision-Making (JDM) field in psychology, and its offshoot Behavioral Economics.  Most often, these disciplines look at departures from normative rationality in lab experiments.

But if you look around your own company or organization, it takes you less than three seconds to realize that culture is just as important, and it does not fit into behavioral economics.  Culture is the basic fabric of how organizations think and act. It is why mergers and acquisitions so often go wrong, or why companies fail to adapt to change.

In fact, there is a very different field of research into company culture and its many potential blindspots. The most important classic work is Edgar Schein’s Organizational Culture and Leadership.

Culture, says Schein, is the accumulated shared learning of a group. It is created as the group confronts and initially solves its basic problems of survival. As those courses of action

.. continue to be successful in solving the group’s internal and external problems, they come to be taken for granted and the assumptions underlying them cease to be questioned or debated. A group has a culture when it has had enough of a shared history to have formed such a set of shared assumptions.

 

Then it  becomes so taken for granted that any attempt to change it can create high anxiety.

Rather than tolerating such anxiety levels we tend to want to perceive the events around us as congruent with our assumptions, even if that means distorting, denying, projecting, or in other ways falsifying to ourselves what may be going on around us. It is in this psychological process that culture has its ultimate power. Culture as a set of basic assumptions defines for us what to pay attention to, what things mean, how to react emotionally to what is going on, and what actions to take in various kinds of situations.

Once we have developed an integrated set of such assumptions, what might be called a thought world or mental map, we will be maximally comfortable with others who share the same set of assumptions and very uncomfortable and vulnerable in situations where different assumptions operate, either because we will not understand what is going on, or, worse, misperceive and misinterpret the actions of others.

[my bold]

Just think about the potential for damage and error, and how often you have likely seen problems like this in your own experience.

The problem is major decisions most often go wrong not because you get the information or calculations wrong, but because you get the assumptions wrong. National intelligence is a prime example.  The news is full of NSA success in gathering data. But it is making sense of the data which is the real challenge.

Culture is one of the most important reasons  decision-makers frequently fail to ask the critical questions in the first place. Schein discusses which assumptions are usually most important, how new hires absorb culture, how leaders try to alter it, and how culture changes through the evolution of firms from start-ups to declining corporate monoliths.

You can’t escape shared assumptions and culture, he says, nor would you want to. Without shared assumptions there can be no group, just a collection of people. But almost by definition, it also means people are incapable of seeing many of the most critical problems from within a particular company culture.  Organizations find it hard to learn.

 

Looking for the Outside View

We need better ways to look at economic decisions. You can get better information: but these days most decisions go wrong despite having all the right information to hand. You can ask experts, or seek better forecasters: but the evidence is that expert prediction is deeply flawed, as we saw in the last post, and almost always outperformed by simple models for weighing evidence.

In fact, the reason the models work better than most expert judgment is more basic. Simple models are just one way to ensure that decisions take the outside view into account. I’ve talked about this before, for example here. The concept comes from a 1993 paper by Daniel Kahneman and Dan Lovallo, Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking. They say:

The inside and outside views draw on different sources of information, and apply different rules to its use. An inside view forecast draws on the specifics of the case, the details of the plan that exists, some ideas about likely obstacles and how they may be overcome. In an extreme form, the inside view involves an attempt to sketch a representative scenario that captures the essential elements of the history of the future. In contrast, the outside view is essentially statistical and comparative, and involves no attempt to divine the future at any level of detail.

It should be obvious that when both methods are applied with equal intelligence and skill, the outside view is much more likely to yield a realistic estimate.

Kahneman often goes too far, however, in thinking only statistical models have reliability and in dismissing human decision-makers. (I’ll come back to this.) Instead, the essence of the outside view is not so much quantitative as being able to see what is typical, to recognize what is usual and unusual. It is having the right degree of skepticism about claims that “this time it’s different.” It’s being sensitive to base rates. It’s having the right degree of detachment. It’s expecting regression to the mean. It’s having a feel for regularities as well as change. That most often means expressing things in a common numerical measure. But it can mean analogy and history and contrast as well.

September 27, 2013 | Decisions, Outside View

Awareness can’t be modeled

One of the fundamental business needs is awareness, an ability to look at a situation in a fresh and perceptive way, to retest assumptions and look for anomalies. Models and forecasts often trap people into elaborate mechanisms that grow remote from reality. You can have the best analytics in the world, but if you’re asking the wrong question you are also asking for disaster.

Alucidate looks for crucial information. But we are much more about using it to ask the right questions, by comparing different perspectives.

Here’s the former head of market risk at Merrill, interviewed in the NYT. He has a PhD in physics and was a leading early figure in the quant influx onto Wall St. He eventually learned to look for the human factors.

But the numbers more often disguise risk than reveal it. “I went down the statistical path,” he said. He built one of the first value-at-risk models, or VaR, a mathematical formula that is supposed to distill how much risk a firm is running at any given point. …

Instead of fixating on models, risk managers need to develop what spies call “humint” — human intelligence from flesh and blood sources. They need to build networks of people who will trust them enough to report when things seem off, before they become spectacular problems.

Like Emmanuel Derman, the head quant at Goldman we looked at here, he thinks people trust too much in the math in isolation. As for the VaR measures he helped pioneer?

In Mr. Breit’s view, Wall Street firms, encouraged by regulators, are on a fool’s mission to enhance their models to more reliably detect risky trades. Mr. Breit finds VaR, a commonly used measure, useful only as a contrary indicator. If VaR isn’t flashing a warning signal for a profitable trade, that may well mean there is a hidden bomb.
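
For readers who haven’t seen the mechanics, this is roughly all a basic VaR number boils down to. The sketch below uses the simplest historical-simulation flavor with invented daily P&L figures; it illustrates the concept, not Merrill’s actual model.

    # Historical-simulation VaR sketch: sort recent daily P&L and read off a
    # low percentile. All figures are invented.

    daily_pnl = [-1.2, 0.4, 0.9, -0.3, 2.1, -2.5, 0.7, 1.1, -0.8, 0.2,
                 1.4, -1.9, 0.3, 0.6, -0.1, 2.4, -3.0, 0.5, 1.0, -0.6]  # $ millions

    confidence = 0.95
    worst_first = sorted(daily_pnl)                     # most negative outcomes first
    tail_index = int((1 - confidence) * len(worst_first))
    var_95 = -worst_first[tail_index]
    print(f"1-day 95% VaR of roughly ${var_95:.1f}m")

A number like this only “sees” what happened inside its window, which is part of why, as the quote suggests, it can look most reassuring just before trouble.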

The best way to get awareness is to talk to someone with an informed but outside, different view. In real decisions, human factors like discerning someone else’s motivations and intentions, and how they will react in a crisis, are the essentials.

The President is without a doubt sitting in the White House this morning wishing he had more intel on what the North Korean leaders’ intentions are. Hundreds of billions of dollars of satellites and radar are of only limited help in that.


Never ignore Base Rates (and Come out of the Bunker)

I mentioned the “base rate” at the end of the last post. It’s one of the most important revelations to come out of recent decision research, and it’s relevant to those Stockman predictions of imminent disaster for the economy. It means that if you are already inside your bunker in Montana with three years’ supply of canned food because you think the economy is finished, you can come out again.

The base rate is how often something occurs for the general class of things you’re looking at, rather than looking only at the specific situation.

Like many good ideas, it is also obvious common sense when you think about it.  If you see someone parading around with a placard saying “The End of The World Is Nigh”, you are (probably) not going to panic. You’ve heard of plenty of cults and lunatics who regularly proclaim the world is going to end. It hasn’t come true so far.

So you calmly take the dispassionate “outside” view of how often such things typically occur, and forget about it. You don’t instantly take the “inside” view of nervously scurrying over to ask your new friend with the placard all the specific reasons why he thinks the oceans are going to boil and the sky is going to fall down. Someday he may be right. But unless Mr. Placard turns out to be the Director of the Hubble Space Telescope with a photo of a large approaching asteroid, you’re not going to lose sleep over it.

In the same way, everyone with an IQ higher than a turnip knows from recent experience that massive financial crises occur sometimes. But the base case is that the equity market returns 8-10% a year over long stretches of time, despite massive crashes that happen once a generation. Most times when people advise you to sell everything and buy gold bars, they are going to be wrong, even if we have some recent experience that makes us more jumpy.
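
The arithmetic behind that base case is worth spelling out. A back-of-envelope sketch, using the 8% figure from above and a hypothetical one-off 40% crash:

    # Compounding at the base-case rate, with and without one large crash.
    # The single 40% crash year is a hypothetical illustration.

    steady = 1.08 ** 30                 # 30 years at 8% a year
    with_crash = (1.08 ** 29) * 0.60    # 29 years at 8%, one year down 40%

    print(f"Steady 8%/yr for 30 years:  {steady:.1f}x your money")
    print(f"Same, with one 40% crash:   {with_crash:.1f}x your money")

Even with a once-a-generation crash in the middle, the compounding base case leaves you far ahead of the bunker.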

Extrapolating the base case is usually more accurate than most expert predictions. You have to be careful about “this time it’s different” arguments.

Careful, yes, but not wholly dismissive either.

Things can be different. You just have to be more skeptical about the possibility, and dig into the factors that may explain the change.

A great deal has been written about base rates, including the (very good) newly published book by the Heath brothers, Decisive: How to Make Better Choices in Life and Work. They advocate “zooming out” when you make a decision, to see the bigger picture and typical outcomes, as well as then “zooming in” to consider any idiosyncratic factors. There’s also extensive discussion in Thinking, Fast and Slow.

So what does this mean for worrying about Stockman’s predictions of doom? Sure, it’s worth thinking through his arguments to see if there is something important we’ve overlooked. The biggest single mistake people make in decisions is to get their premises wrong. And when a prediction is so colorful and vivid and well-written, it’s irresistible to the press and bound to get attention.

But a highly colorful, vivid story is itself a red flag. Stockman has been making much the same case since 1981. US real GDP has doubled in the meantime.

Media enthusiasm and big stories are usually a contrary indicator, like Business Week’s infamous Death of Equities story in August 1979.

The bond market is bound to have some difficult days ahead, as I’ve argued. But that does not mean the Fed has “gone rogue” and the David Stockman/Ron Paul view of the world is correct.

And if you’ve been in a bunker with canned food and invested in cash since 1981 or 1979.. well, you might as well hide back in the bunker.

 

April 2, 2013 | Crisis Management, Decisions, Financial Crisis, Outside View, Risk Management