
Website Redesign is live

The other pages on the website have been simplified and redesigned. The six separate explanatory pages have been reduced to two, to make the site clearer. There are also other major projects in progress, which the redesign will ultimately feed into.

May 24, 2017 | Uncategorized

From elation to catastrophe in Downing St

As a drama it is extraordinary. Cameron had thought he was going to win the Brexit referendum by ten points, because his pollster told him so on the afternoon of the referendum. Seventeen hours later he was on the steps outside giving his resignation speech.

At around 3pm, Cameron’s team took a phone call that convinced them victory was in the bag.

Lord Cooper, a co-founder of the Populus polling company and the architect of the PM’s policy on gay marriage, called to say he thought the margin of victory for Remain would be 60/40. A few hours later, Populus published its final poll of the campaign – giving Remain a commanding ten-point lead.

The result, of course, was 52:48 the other way. It's like a Greek tragedy: hubris followed by peripeteia, a brutal turning of one thing into its opposite, all in the space of a few hours.

 

June 25, 2016 | Uncategorized

Experts “adamantly retain their views” even when proved wrong

I claimed in the last post that people find it very difficult to learn from mistakes. If anything, they tend to hold on to their opinions even more strongly when challenged. So what happens when forecasters are confronted with evidence like Tetlock’s explosive study, which shows that expert prediction in economics, politics and foreign policy is no better than that of dart-throwing chimps?

When Tetlock confronted the experts with evidence for their inaccurate predictions, they were unfazed. They explained away the prediction failures rather than trying to improve their mental models. They showed the typical symptoms of fixation. Experts attributed their successes to their own skilled judgment, whereas they blamed their failures on bad luck or task difficulty. The political scientists who predicted a resounding victory for Al Gore in 2000, with confidence estimates that ranged from 85 percent to 97 percent, found no reason to doubt their quantitative equations. The problem, they claimed, was that their models had been fed some misleading economic data. Other modelers blamed the Bill Clinton-Monica Lewinsky scandal and pointed out that their models didn’t include variables for promiscuous presidents. In short, the experts not only failed to demonstrate forecasting accuracy but adamantly retained their views in the face of contrary evidence. Tetlock and others have shown that we can’t trust the so-called experts.

– Gary Klein, Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making

This is a variety of the fundamental attribution error, of course. It shouldn’t really surprise us. We all know examples of people who refuse to change their minds in daily life, and organizations which discount every threat until it is too late. Much the same happens even in natural science.  It’s everywhere. People mostly don’t learn from getting predictions wrong. They make excuses or double down.

It is also only compatible with survival if there is no downside risk to being wrong: it is a strategy of hoping people forget.
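Incidentally, the standard yardstick in work like Tetlock’s is to score probability forecasts against outcomes. Here is a minimal sketch of Brier scoring in Python, just to make the point concrete; the forecasts are invented for illustration, not taken from the study.

```python
# Minimal sketch of Brier scoring for probability forecasts (lower is better).
# A pure 50/50 guess scores 0.25; the forecasts below are invented examples.
def brier(prob_forecast, outcome):
    """prob_forecast: stated probability the event happens; outcome: 1 if it did, else 0."""
    return (prob_forecast - outcome) ** 2

print(brier(0.50, 0))  # 0.25   the dart-throwing baseline
print(brier(0.95, 0))  # 0.9025 a 95%-confident prediction that failed
print(brier(0.95, 1))  # 0.0025 the same confidence when right
```

An expert who is highly confident and frequently wrong ends up with a worse score than someone who simply guessed, which is what the chimp comparison is getting at.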

May 14, 2015 | Uncategorized

The Difference between Puzzles and Mysteries

Companies and investment firms go extinct when they fail to understand the key problems they face. And right now, the fundamental nature of the problem many corporations, investors and policymakers face has changed. But mindsets have not caught up.

Ironically, current enthusiasms like big data can compound the problem. Big data, as I’ve argued before, is highly valuable for tackling some kinds of problems, where you have very large amounts of data on essentially similar, replicable events. Simple algorithms and linear models also beat almost every expert in many situations, largely because they are more consistent.
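To make the linear-model point concrete, here is a rough sketch of the kind of “improper”, equal-weight model the judgment literature (Dawes and others) has in mind. The cues, candidates and numbers are invented purely for illustration.

```python
# Sketch of an "improper" equal-weight linear model: standardize each cue,
# give every cue the same weight, and rank options by the sum. The cues and
# numbers below are invented for illustration.
from statistics import mean, stdev

candidates = {
    "A": {"test_score": 78, "experience_yrs": 4, "interview": 6},
    "B": {"test_score": 85, "experience_yrs": 2, "interview": 8},
    "C": {"test_score": 70, "experience_yrs": 9, "interview": 5},
}
cues = ["test_score", "experience_yrs", "interview"]

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(candidates)
standardized = {c: zscores([candidates[n][c] for n in names]) for c in cues}
scores = {n: sum(standardized[c][i] for c in cues) for i, n in enumerate(names)}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")
```

The consistency is the point: the same inputs always produce the same ranking, which is exactly what human judges fail to do.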

The trouble is that many of the hardest problems are qualitatively different. Here’s Gregory Treverton, who argues there is a fundamental difference between ‘puzzles’ and ‘mysteries’:

There’s a reason millions of people try to solve crossword puzzles each day. Amid the well-ordered combat between a puzzler’s mind and the blank boxes waiting to be filled, there is satisfaction along with frustration. Even when you can’t find the right answer, you know it exists. Puzzles can be solved; they have answers.

But a mystery offers no such comfort. It poses a question that has no definitive answer because the answer is contingent; it depends on a future interaction of many factors, known and unknown. A mystery cannot be answered; it can only be framed, by identifying the critical factors and applying some sense of how they have interacted in the past and might interact in the future. A mystery is an attempt to define ambiguities.

Puzzles may be more satisfying, but the world increasingly offers us mysteries. Treating them as puzzles is like trying to solve the unsolvable — an impossible challenge. But approaching them as mysteries may make us more comfortable with the uncertainties of our age.

Here’s the interesting thing: Treverton is former Head of the Intelligence Policy Center at RAND, the fabled national security-oriented think tank based in Santa Monica, CA, and before that Vice Chair of the National Intelligence Council.  RAND was also arguably the richly funded original home of the movement to inject mathematical and quantitative rigor into economics and social science, as well as one of the citadels of actual “rocket science” and operations research after WW2.  RAND stands for hard-headed rigor, and equally hard-headed national security thinking.

So to find RAND arguing for the limits of “puzzle-solving” is a little like finding the Pope advocating Buddhism.

The intelligence community was focused on puzzles during the Cold War, Treverton says. But current challenges fall into the mystery category.

Puzzle-solving is frustrated by a lack of information. Given Washington’s need to find out how many warheads Moscow’s missiles carried, the United States spent billions of dollars on satellites and other data-collection systems. But puzzles are relatively stable. If a critical piece is missing one day, it usually remains valuable the next.

By contrast, mysteries often grow out of too much information. Until the 9/11 hijackers actually boarded their airplanes, their plan was a mystery, the clues to which were buried in too much “noise” — too many threat scenarios.

The same applies to financial market and business decisions. We have too much information. Attention and sensitivity to evidence are now the prime challenge facing many decision-makers. Indeed, that has always been the source of the biggest failures in national policy and intelligence.

It’s partly a consequence of the success of many analytical techniques and information-gathering exercises. The easy puzzles, the ones susceptible to more information, linear models and algorithms, have been solved, or at least automated. That means it’s the mysteries, and how you approach them, that move the needle on performance.

 

October 10, 2014 | Assumptions, Big Data, Decisions, Lens Model, Security, Uncategorized

How “decision science” often fails

Most of the value in the economy now comes not from making things, but from making decisions. Most managers, professionals and policymakers are in essence paid to exercise judgment and choose the best option in ambiguous situations.

It's also very clear that decisions by smart, trained, able people often turn out extremely badly. Corporations go bankrupt. Professionals miss or ignore debacles like Enron, Lehman and Madoff. Policymakers loaded down with advanced credentials manage to produce the worst financial crisis since the Great Depression, or spend a trillion dollars on a futile war to reshape the Middle East.

Some of that ocean of error is because of bad luck. Some of it is because of complexity. Some of it is an understandable failure to develop a gift for prophecy.

But most of it is because people simply don't notice contrary information or test assumptions. They misperceive, misunderstand, and fail to communicate. They fail to learn.

So how do we do better? There could hardly be a more important question. A necessary starting point is the research on decision-making.

One problem is that for many decades there was surprisingly little attention to how people make decisions in practice. In fact, until WW2 there was little research into decision-making at all, at least outside military strategy. Companies thought about administration and organizational charts, rather than decisions themselves.

Then two major developments transformed that situation. One was personal experience, and the other a major theoretical breakthrough.

The experience was the huge success of planning and mathematical approaches during World War 2. Many of the leading theorists and decision-makers who dominated thinking in the 1950s and 1960s worked in logistics or military planning during the war, especially units devoted to the new discipline of “Operational Research.”

Many officers also came back from the war convinced that efficient planning and organizational hierarchy had produced victory. That conviction carried over into corporate America in the postwar world.

The theoretical breakthrough was von Neumann and Morgenstern's formalization of expected utility theory in The Theory of Games and Economic Behavior in 1947. They produced a consistent framework built on a small set of axioms. Around the same time Paul Samuelson was formalizing much of economics. So economists had a powerful theory which specified how decisions ought to be made. If you've been trained in economics, you have probably had little exposure to any way of analyzing decisions other than expected utility.
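For readers who have never seen it written down, the expected-utility rule itself is simple: weight the utility of each outcome by its probability and choose the option with the highest sum. A minimal sketch, with invented payoffs and a standard log utility function:

```python
# Minimal sketch of an expected-utility comparison between two options.
# The payoffs, probabilities and the log utility function are illustrative.
import math

def expected_utility(lottery, utility):
    """lottery: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * utility(x) for p, x in lottery)

log_utility = math.log  # a standard risk-averse utility function

sure_thing = [(1.0, 100_000)]                 # $100k for certain
gamble     = [(0.5, 40_000), (0.5, 200_000)]  # expected value $120k

print(expected_utility(sure_thing, log_utility))  # ~11.51
print(expected_utility(gamble, log_utility))      # ~11.40

# Despite the gamble's higher expected value, the risk-averse (log) agent
# prefers the sure thing: expected utility, not expected payoff, drives the choice.
```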

This microeconomic approach also developed into the sub-discipline which calls itself “decision science”. Researchers developed increasingly complex multi-attribute utility analysis models for use in corporations, government and the military. Immense resources of time and money were devoted to translating problems into the computer models and processing them.

The rational choice approach reached its maximum scholarly and cultural dominance in the 1960s. People believed that the major challenges of society would quickly yield to expert analysis, from the Vietnam War to the Great Society.

The 1960s hyper-rational Harvard Business School approach also produced the culture and thinking of the major consulting firms like McKinsey, whose business was based on the idea that a super-smart, rational MBA graduate could solve in a few weeks business problems that had defeated dumber CEOs with forty years' practical experience. Economics and finance are still centered on the expected utility model.

But the 1960s also produced major studies which showed that people didn't make decisions in a rational-choice way, they couldn't make decisions that way, and if they tried it often produced worse outcomes. Planning fell from favor. (I'll come back to this.)

In fact, there is no evidence that the elaborate expected utility approach produces better decisions. One of the primary books in the rational choice tradition is Reid Hastie and Robyn Dawes' Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making (which is an excellent book). They are disturbed by research (most specifically in 2006) which apparently shows unconscious or intuitive judgment outperforming formal rational choice.

These results are troubling to people, like the authors of this book, who believe that there are important advantages to systematic, controlled, deliberate and relatively conscious choice strategies. .. In fact, we cannot cite any research-based proofs that deliberate choice habits in practical affairs are better than going with your gut intuitions. (p232).

This most specifically refers to the recent “adaptive” or “fast and frugal” alternative decision-making research. But there are limits to the applicability of “gut feelings” and intuition as well, which have to do with the limits of expertise and the relative advantages of “System 1” and “System 2” thinking.

Ever since the 1960s, different disciplines keep rediscovering that people do not make decisions in a rational-choice, expected-utility way in practice, often for good reasons. But much of economics, finance and risk management still either assumes they do, or they ought to.

 

 

March 6, 2014 | Big Data, Decisions, Uncategorized

Why is inflation so weak?

I was talking last week about how economic change may be affecting inflation dynamics. It’s not just the US. Here’s more evidence: “What killed Canada’s Inflation?” asks this article in the Globe and Mail, as analysts puzzle over why inflation there remains so subdued.

It’s the great conundrum of the past year. Canada’s annual rate of inflation has sat below the 1.5-per-cent mark for 18 months in a row, the longest such stretch since the late 1990s. Month after month, it’s confounded forecasts and is now running at 1.2 per cent.

And though we may have hit a trough in inflation, most economists don’t expect it to come roaring back any time soon.

The Bank of Canada is a bit puzzled too, in its Monetary Policy Report (MPR) published on 22 January.

Our analysis suggests that the weakness in total CPI inflation in advanced economies has a large, common international component that is likely associated with declines in global prices for food and energy. The weakness in core inflation, on the other hand, is only partly explained by a common global factor, despite widespread excess capacity in the global economy since 2009. In this regard, it is somewhat puzzling that the rate of inflation has been declining only recently.

It may be that inflation responds more to persistent output gaps, and with a considerable lag. Country-specific factors may also be important in explaining the recent movements in core inflation. These factors vary across countries and include greater retail competition, the waning impact of previous increases in taxes or regulated prices, and low wage pressures.

The same disinflationary forces appear to be at play in the Eurozone.

There is widespread puzzlement among many central bankers – and much hangs on that uncertainty. For the time being, it means the Fed and other central banks can be relaxed about raising interest rates: there is no obvious inflation pressure at all, and the Fed is tapering largely as a first tiny step towards normalization. That reduces the chance of a major shock to fixed-income markets.

But lack of understanding of the current dynamics of inflation also makes surprises more likely, both on the upside and downside. If, as the Bank of Canada suggests, inflation is still responding with a lag to the depth of the previous output gap, that may wear off at an unknown rate. Or perhaps there may be permanent hysteresis effects. We don’t know yet.

The most worrying thing is the lingering downside possibility of deflation and a Japan-style decades-long slump. In practical terms, data and explanations for disinflationary pressure will be a more important driver for the Fed and other majors than labor market factors.  Despite recent volatility, I am moderately optimistic about the US outlook. Significant deflationary pressure would change that.

February 4, 2014 | Central Banks, Europe, Federal Reserve, Inflation, Monetary Policy, Uncategorized

Lack of Fall-back Plans and Inertia

One of the most important traps that afflicts decision makers is a failure to generate enough alternatives. People often see things in purely binary terms – do X or don’t do X – and ignore other options which may solve the problem much better. They fail to look for alternative perspectives.

This is one kind of knock-on effect from the tendency of policymakers to ignore trade-offs that I mentioned in this post on intelligence failure last week. To continue the point, one consequence of ignoring trade-offs is that leaders frequently fail to develop any fallback options. And that can lead to trillion-dollar catastrophes.

The same factors that lead decision makers to underestimate trade-offs make them reluctant to develop fallback plans and to resist information that their policy is failing. The latter more than the former causes conflicts with intelligence, although the two are closely linked. There are several reasons why leaders are reluctant to develop fallback plans. It is hard enough to develop one policy, and the burden of thinking through a second is often simply too great. Probably more important, if others learn of the existence of Plan B, they may give less support to Plan A. .. The most obvious and consequential recent case of a lack of Plan B is Iraq. (from Why Intelligence Fails)

The need to sell a chosen option often blinds people to alternatives, and the chosen policy develops a life of its own. Policy choices pick up their own inertia and get steadily harder to change.

Leaders tend to stay with their first choice for as long as possible. Lord Salisbury, the famous British statesman of the end of the nineteenth century, noted that “the commonest error in politics is sticking to the carcasses of dead policies.” Leaders are heavily invested in their policies. To change their basic objectives will be to incur very high costs, including, in some cases, losing their offices if not their lives. Indeed the resistance to seeing that a policy is failing is roughly proportional to the costs that are expected if it does.

Decision problems are pervasive, and you can’t really make sense of events unless you are alert to them.

February 2, 2014 | Decisions, Irrational Consistency, Perception, Security, Uncategorized

How to go over budget by a billion dollars and other planning catastrophes

I was talking in the last post about base rate neglect. People have a tendency to seize on vivid particular features of a situation to the exclusion of general features. Journalists are particularly prone to this problem, because they have to find vivid details to sell newspapers.

Let’s take a (literally) concrete example: infrastructure spending. Over $2 trillion of extra infrastructure stimulus spending has been committed since the global crisis broke out in 2008, including massive fiscal injections in the US, China and India. So if there are widespread decision errors of even just 1% or 5% in this area, the consequences easily run into tens of billions of dollars.

In fact, infrastructure planners massively underestimate the cost of major projects, often by more than 50%. The average cost overrun on major rail projects is 44%, according to recent research:

The dataset .. shows cost overrun in 258 projects in 20 nations on five continents. All projects for which data were obtainable were included in the study. For rail, average cost overrun is 44.7 per cent measured in constant prices from the build decision. For bridges and tunnels, the equivalent figure is 33.8 per cent, and for roads 20.4 per cent. ..

  • nine out of 10 projects have cost overrun;
  • overrun is found across the 20 nations and five continents covered by the study;
  • overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.

And planners hugely overestimate the benefits of major projects.

For rail, actual passenger traffic is 51.4 per cent lower than estimated traffic on average. This is equivalent to an average overestimate in rail passenger forecasts of no less than 105.6 per cent. ..

The following observations hold for traffic demand forecasts:

  • 84 per cent of rail passenger forecasts are wrong by more than ±20 per cent
  • nine out of 10 rail projects have overestimated traffic;
  •  50 per cent of road traffic forecasts are wrong by more than ±20 per cent;

The data is from a 2009 paper in the Oxford Review of Economic Policy by Bent Flyvbjerg: Survival of the unfittest: why the worst infrastructure gets built—and what we can do about it.
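The 51.4 per cent shortfall and the 105.6 per cent overestimate are the same fact measured against different baselines (forecast versus actual traffic). A quick check of the arithmetic, mine rather than the paper's:

```python
# Check: a 51.4% traffic shortfall relative to forecast is the same thing as
# roughly a 106% overestimate relative to actual traffic.
forecast = 100.0
actual = forecast * (1 - 0.514)           # traffic came in 51.4% below forecast
overestimate = (forecast - actual) / actual
print(f"{overestimate:.1%}")              # 105.8% - close to the paper's 105.6%,
                                          # the gap being rounding in the averages
```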

Just consider for a moment what the data mean for the way policy decisions are taken. These are some of the biggest decisions politicians and planners make, and they are systemically biased and inaccurate. Massive cost overruns are the norm.

The outcome is not just billions of dollars wasted, but, as Flyvbjerg argues, the worst projects with the most serious judgmental errors are the ones that tend to get built. This is pervasive in almost every country in the world.

Promoters overestimate benefits and underestimate costs in order to get approval. Of course they do, you might say. People are often overconfident and oversell things. But this does not get taken into account in the decisions. Recall again: “overrun is constant for the 70-year period covered by the study; cost estimates have not improved over time.”

What can be done about it? According to Flyvbjerg,

If project managers genuinely consider it important to get estimates of costs, benefits, and risks right, it is recommended they use a promising new method called ‘reference class forecasting’ to reduce inaccuracy and bias. This method was originally developed to compensate for the type of cognitive bias in human forecasting that Princeton psychologist Daniel Kahneman found in his Nobel prize-winning work on bias in economic forecasting (Kahneman, 1994; Kahneman and Tversky, 1979). Reference class forecasting has proven more accurate than conventional forecasting. It was used in project management in practice for the first time in 2004 (Flyvbjerg and COWI, 2004); in 2005 the method was officially  endorsed by the American Planning Association (2005); and since then it has been used by governments and private companies in the UK, the Netherlands, Denmark, Switzerland, Australia, and South Africa, among others.

I’ve talked about Kahneman’s outside view before. As you can see, it’s just another way of saying: don’t ignore the base rate. Adjust your estimates with reference to the general class of things you are looking at, not just the specific particulars of a situation. It’s seldom easy: another way to describe this is “the planning fallacy” and it applies to just about any major project (including starting a company!).
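In practice, reference class forecasting means not taking the project's own estimate at face value, but uplifting it by the distribution of overruns observed in similar past projects. Here is a minimal sketch of the idea; the overrun figures are invented, and real applications use empirical datasets and published uplift curves rather than this toy percentile rule.

```python
# Sketch of reference class forecasting: correct the "inside view" estimate
# using the distribution of cost overruns in comparable past projects.
# The overrun sample below is invented for illustration.
import statistics

inside_view_estimate = 500.0  # the project team's own cost estimate, in $m

# Overruns (actual/estimate - 1) from a hypothetical reference class of rail projects.
reference_class_overruns = [0.10, 0.25, 0.30, 0.45, 0.50, 0.60, 0.90, 1.20]

def reference_class_forecast(estimate, overruns, percentile=0.8):
    """Uplift the estimate so it would have covered `percentile` of past outcomes."""
    k = sorted(overruns)[int(percentile * (len(overruns) - 1))]
    return estimate * (1 + k)

print("Median-adjusted estimate:", inside_view_estimate * (1 + statistics.median(reference_class_overruns)))
print("80th-percentile budget  :", reference_class_forecast(inside_view_estimate, reference_class_overruns))
```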

Here’s what to take away from this: this one particular blind spot is one of the most serious errors in public policy decision-making, with billions of dollars wasted every year.

But it’s not a matter of having to be pinpoint accurate about future states of the world, or getting a license for prophecy. Instead, it’s a matter of observing how people typically behave and what generally happens already. Observation matters more than speculation about the future.

We need less forecasting, and more recognition. That’s what Alucidate is set up to do.

January 7, 2014 | Base Rates, Decisions, Expertise, Outside View, Risk Management, Uncategorized

The “What you see is all there is” fallacy

People have a tendency to jump to conclusions based on limited perspectives.

We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity.

– Daniel Kahneman, Nobel Laureate in Economics 2002

But at least we can try to watch out for the traps, and look for the perspectives we are missing. The human mind is very good at finding spurious coherence for a wrong point of view. If you let it.

February 15, 2013 | Confirmation bias, Irrational Consistency, Perception, Uncategorized

Credulity and Research

Zerohedge linked to this collection of Soc Gen pieces by Dylan Grice a few days ago. I’ve been skimming through it over the weekend. It is vastly more vivid and thoughtful than most bank research, even if I don’t agree with everything he says. It is historically informed, well-written and sometimes ingenious.

His approach is skeptical: predicting short-term market moves is almost impossible, he argues, even for those who think they are good at it. He has learnt humility. The investment community knows less than it would like. The smart money, including people like Buffett, knows the way to make money is not to bet the whole portfolio on one or two scenarios.

Instead, you need to look for the outlier risks that are foreseeable, among the many that are not. And then you look for insurance or ways to gain resilience against those risks you can do something about.

I think this makes sense. I’ve seen plenty of people come to grief in the last few years making huge bets on whether Europe was completely saved or completely damned, and get sideswiped by cycles of sentiment and timing. Big directional bets are extremely hard, even before trying to get the timing right.

 

Dylan Grice Full

 

February 4, 2013 | Uncategorized