
Big Data

How the most common prescription doesn’t cure bias

One of the main messages of most psychological research into bias, and of the hundreds of popular books that have followed, is that most people just aren’t intuitively good at thinking statistically. We make mistakes about the influence of sample size, and about representativeness and likelihood. We fail to understand regression to the mean, and often make mistakes about causation.

However, prescribing statistical competence as a universal cure can lead to a new set of problems. An emphasis on statistical knowledge and tests can introduce its own blind spots, some of which can be devastating.

The discipline of psychology is itself going through a “crisis” over the reproducibility of results, as this Bloomberg View article from the other week discusses. One recent paper found that only 39 out of a sample of 100 psychological experiments could be replicated. That is potentially disastrous for psychology’s standing as a science: if results cannot be replicated by other teams, their validity must be in doubt. The p-value test of statistical significance is overused as a marketing tool, or as a way to get published. Naturally, some vigorous rebuttals are in process.
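The arithmetic behind a replication rate like that is worth a quick illustration. Here is a toy simulation in Python, with assumed numbers of my own rather than anything from the replication project, showing how studies with modest statistical power, filtered through a p < 0.05 publication threshold, yield a literature in which many “findings” fail to replicate:

```python
# Toy simulation: why a literature built on "p < 0.05" can replicate poorly.
# The prior, power and alpha below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_studies = 100_000
prior_true = 0.3   # assumed share of tested hypotheses that are real effects
power = 0.5        # assumed chance a study detects a real effect
alpha = 0.05       # conventional false-positive rate

is_true = rng.random(n_studies) < prior_true
# A study comes out "significant" with probability `power` if the effect is
# real, and probability `alpha` (a false positive) if it is not.
significant = np.where(is_true,
                       rng.random(n_studies) < power,
                       rng.random(n_studies) < alpha)

# Independent replication attempt with the same power and alpha.
replicates = np.where(is_true,
                      rng.random(n_studies) < power,
                      rng.random(n_studies) < alpha)

published = significant
print("published 'significant' findings:", int(published.sum()))
print(f"share that replicate: {replicates[published].mean():.0%}")
# With these assumptions, only around 40% of the published results replicate,
# even though nobody cheated: modest power plus a significance filter is enough.
```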

It is, however, a problem for other disciplines as well, which suggests the issues are genuine, deeper, and more pervasive. John Ioannidis has been arguing the same about medical research for some time.

He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

The same applies to economics, where many of the most prominent academics apparently do not understand some of the statistical measures they use. A paper (admittedly from the 1990s) found that 70% of the empirical papers in the American Economic Review, the most prestigious journal in the field, “did not distinguish statistical significance from economic, policy, or scientific significance.” The conclusion:

We would not assert that every economist misunderstands statistical significance, only that most do, and these some of the best economic scientists.
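The distinction is easy to demonstrate. The sketch below, using invented numbers and scipy’s linregress, produces an effect that is statistically significant at any conventional threshold yet far too small to matter for policy or profit:

```python
# Toy example: statistical significance vs. economic significance.
# Sample size and effect size are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 1_000_000                 # a huge sample
x = rng.normal(size=n)
true_effect = 0.01            # one-hundredth of a standard deviation
y = true_effect * x + rng.normal(size=n)

slope, intercept, r, p_value, stderr = stats.linregress(x, y)
print(f"estimated effect = {slope:.4f}, p-value = {p_value:.1e}")
# The p-value is vanishingly small ("highly significant"), but an effect of
# about 0.01 standard deviations is economically negligible. The test answers
# "is the effect exactly zero?", not "is it large enough to matter?"
```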

Of course, the problems and flaws in the statistical models in the lead-up to the great crash of 2008 are also multiple and by now famous. If bank management and traders do not understand the “black box” models they are using, and their limits, tears and pain are the usual result.

The takeaway is not to impugn statistics. It is that people are nonetheless very good at making a whole new set of mistakes when they tidy up one aspect of their approach. More statistical rigor can also mean more blind spots to other issues or considerations, and the use of technique in isolation from common sense.

The more technically proficient and rigorous you believe yourself to be, the more vulnerable you often become to wishful thinking or blind spots. Technicians have a remarkable ability to miss the forest for the trees, or even for the twigs on the trees.

It also means there are (even more) grounds for mild skepticism about the value of many academic studies to practitioners.

March 30, 2016 | Big Data, Economics, Expertise, Psychology, Quants and Models

The Difference between Puzzles and Mysteries

Companies and investment firms go extinct when they fail to understand the key problems they face. And right now, the fundamental nature of the problem many corporations and investors and policymakers face has changed. But mindsets have not caught up.

Ironically, current enthusiasms like big data can compound the problem. Big data, as I’ve argued before, is highly valuable for tackling some kinds of problems: those where you have very large amounts of data on essentially similar, replicable events. Simple algorithms and linear models also beat almost every expert in many situations, largely because they are more consistent.

The trouble is that many of the most advanced problems are qualitatively different. Here’s Gregory Treverton, who argues there is a fundamental difference between ‘puzzles’ and ‘mysteries.’

There’s a reason millions of people try to solve crossword puzzles each day. Amid the well-ordered combat between a puzzler’s mind and the blank boxes waiting to be filled, there is satisfaction along with frustration. Even when you can’t find the right answer, you know it exists. Puzzles can be solved; they have answers.

But a mystery offers no such comfort. It poses a question that has no definitive answer because the answer is contingent; it depends on a future interaction of many factors, known and unknown. A mystery cannot be answered; it can only be framed, by identifying the critical factors and applying some sense of how they have interacted in the past and might interact in the future. A mystery is an attempt to define ambiguities.

Puzzles may be more satisfying, but the world increasingly offers us mysteries. Treating them as puzzles is like trying to solve the unsolvable — an impossible challenge. But approaching them as mysteries may make us more comfortable with the uncertainties of our age.

Here’s the interesting thing: Treverton is former Head of the Intelligence Policy Center at RAND, the fabled national security-oriented think tank based in Santa Monica, CA, and before that Vice Chair of the National Intelligence Council.  RAND was also arguably the richly funded original home of the movement to inject mathematical and quantitative rigor into economics and social science, as well as one of the citadels of actual “rocket science” and operations research after WW2.  RAND stands for hard-headed rigor, and equally hard-headed national security thinking.

So to find RAND arguing for the limits of “puzzle-solving” is a little like finding the Pope advocating Buddhism.

The intelligence community was focused on puzzles during the Cold War, Treverton says. But current challenges fall into the mystery category.

Puzzle-solving is frustrated by a lack of information. Given Washington’s need to find out how many warheads Moscow’s missiles carried, the United States spent billions of dollars on satellites and other data-collection systems. But puzzles are relatively stable. If a critical piece is missing one day, it usually remains valuable the next.

By contrast, mysteries often grow out of too much information. Until the 9/11 hijackers actually boarded their airplanes, their plan was a mystery, the clues to which were buried in too much “noise” — too many threat scenarios.

The same applies to financial market and business decisions. We have too much information. Attention and sensitivity to evidence are now the prime challenge facing many decision-makers. Indeed, that has always been the source of the biggest failures in national policy and intelligence.

It’s partly a consequence of the success of many analytical techniques and information-gathering exercises. The easy puzzles, the ones susceptible to more information and to linear models and algorithms, have been solved, or at least automated. That means it’s the mysteries, and how you approach them, that move the needle on performance.

 

October 10, 2014 | Assumptions, Big Data, Decisions, Lens Model, Security, Uncategorized

Two Kinds of Error (part 3)

I’ve been talking about the difference between variable or random error, and systemic or constant error. Another way to put this is the difference between precision and accuracy. As business measurement expert Douglas Hubbard explains in How to Measure Anything: Finding the Value of Intangibles in Business,

“Precision” refers to the reproducibility and conformity of measurements, while “accuracy” refers to how close a measurement is to its “true” value. … To put it another way, precision is low random error, regardless of the amount of systemic error. Accuracy is low systemic error, regardless of the amount of random error. … I find that, in business, people often choose precision with unknown systematic error over a highly imprecise measurement with random error.
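Hubbard’s distinction is easy to simulate. A minimal sketch, with illustrative numbers of my own, comparing an imaginary measurement process that is precise but biased with one that is noisy but unbiased:

```python
# Precision vs. accuracy: two imaginary measurement processes aiming at a
# true value of 100. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_value = 100.0

precise_but_biased = rng.normal(loc=108.0, scale=0.5, size=1000)  # tight, but high
noisy_but_accurate = rng.normal(loc=100.0, scale=5.0, size=1000)  # sloppy, but centered

for name, m in [("precise but biased", precise_but_biased),
                ("noisy but accurate", noisy_but_accurate)]:
    print(f"{name}: mean = {m.mean():6.2f}, spread (std) = {m.std():.2f}, "
          f"error of the mean vs truth = {m.mean() - true_value:+.2f}")

# The first process looks impressive (tiny spread) yet is wrong by about 8 on
# average; the second looks sloppy yet centers on the truth. Taking more
# readings fixes the second problem but never fixes the first.
```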

Systemic error is also, he says, another way of saying “bias”: especially expectancy bias (another term for confirmation bias, seeing what we want to see) and selection bias (inadvertent non-randomness in samples).

Observers and subjects sometimes, consciously or not, see what they want. We are gullible and tend to be self-deluding.

That brings us back to the problems on which Alucidate sets its sights. Algorithms can eliminate most random or variable error, and bring much more consistency. But systemic error is then the main source of problems or differential performance. And businesses are usually knee-deep in it, partly because the approaches which reduce variable error often increase systemic error in practice. There’s often a trade-off between dealing with the two kinds of error, and that trade-off may need to be set differently in different environments.

I like most of Hubbard’s book, which I’ll come back to another time. It falls into the practical, observational school of quantification rather than the math department approach, as Herbert Simon would put it.

But one thing he doesn’t focus on enough is learning ability and iteration: the ability to change your model over time. If you shoot at a target and see that you hit slightly off center, you can adjust the next time you fire. Sensitivity to evidence and the ability to learn are the most important things to watch in macro and market decision-making. In fact, the most interesting thing in the recent enthusiasm about big data is not the size of datasets or finding correlations. It’s the improved ability of computer algorithms to test and adjust models – Bayesian inversion. But that has limits and pitfalls as well.
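Here is a minimal sketch of what that test-and-adjust loop looks like in the simplest possible setting, a beta-binomial update with invented numbers:

```python
# Minimal Bayesian updating: revise an estimated hit rate as evidence arrives,
# like adjusting your aim after seeing where the shots land.
# The prior and the batches of evidence are invented for illustration.

a, b = 1.0, 1.0                              # Beta(a, b) prior: roughly uninformed

batches = [(3, 7), (2, 8), (4, 6), (1, 9)]   # observed (hits, misses) per batch

for hits, misses in batches:
    a += hits                                # conjugate update: successes raise a
    b += misses                              # failures raise b
    estimate = a / (a + b)
    print(f"after {hits} hits / {misses} misses -> estimated rate {estimate:.2f}")

# The point is not the formula but the loop: the model is revised every time
# new evidence arrives, rather than being fixed once and defended forever.
```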

“Strategies grow like weeds in a garden”. So do trades.

How much should you trust “gut feel” or “market instincts” when it comes to making decisions or trades or investments? How much should you make decisions through a rigorous, formal process using hard, quantified data instead? What can move the needle on performance?

In financial markets more mathematical approaches have been in the ascendant for the last twenty years, with older “gut feel” styles of trading increasingly left aside. Algorithms and linear models are much better at optimizing in specific situations than the most credentialed people are (as we’ve seen.) Since the 1940s business leaders have been content to have operational researchers (later known as quants) make decisions on things like inventory control or scheduling, or other well-defined problems.

But rigorous large-scale planning to make major decisions has generally turned out to be a disaster whenever it has been tried. It has been about as successful in large corporations as planning turned out to be in the Soviet Union (for many of the same reasons). As one example, General Electric originated one of the main formal planning processes in the 1960s. The stock price then languished for a decade, and one of the very first things Jack Welch did was to slash the planning process and planning staff. Quantitative models (on the whole) performed extremely badly during the Great Financial Crisis. And hedge funds have increasing difficulty even matching market averages, let alone beating them.

What explains this? Why do careful modeling and rigor often work very well on the small scale, yet fail catastrophically on large questions or over longer runs of time? This obviously has massive application in financial markets as well, from understanding what “market instinct” is to seeing how central bank formal forecasting processes and risk management can fail.

Something has clearly been wrong with formalization. It may have worked wonders on the highly structured, repetitive tasks of the factory and clerical pool, but whatever that was got lost on its way to the executive suite.

I talked about Henry Mintzberg the other day. He pointed out that, contrary to myth, most successful senior decision-makers are not rigorous or hyper-rational in planning. Quite the opposite. In the 1990s he wrote a book, The Rise and Fall of Strategic Planning, which tore into formal planning and strategic consulting (and is where the quote above comes from).

There were three huge problems, he said. First, planners assumed that analysis can provide synthesis, insight or creativity. Second, that hard quantitative data alone ought to be the heart of the planning process. Third, that the context for plans is stable, or at least predictable. All three assumptions were just wrong. For example,

For data to be “hard” means that they can be documented unambiguously, which usually means that they have already been quantified. That way planners and managers can sit in their offices and be informed. No need to go out and meet the troops, or the customers, to find out how the products get bought or the wars get fought or what connects those strategies to that stock price; all that just wastes time.

The difficulty, he says, is that hard information is often limited in scope, “lacking richness and often failing to encompass important noneconomic and non-quantitative factors.” Often hard information is too aggregated for effective use. It often arrives too late to be useful. And it is often surprisingly unreliable, concealing numerous biases and inaccuracies.

The hard data drive out the soft, while that holy ‘bottom line’ destroys people’s ability to think strategically. The Economist described this as “playing tennis by watching the scoreboard instead of the ball.” … Fed only abstractions, managers can construct nothing but hazy images, poorly focused snapshots that clarify nothing.

The performance of forecasting was also woeful, little better than the ancient Greek belief in the magic of the Delphic Oracle, and “done for superstitious reasons, and because of an obsession with control that becomes the illusion of control.”

Of course, to create a new vision requires more than just soft data and commitment: it requires a mental capacity for synthesis, with imagination. Some managers simply lack these qualities – in our experience, often the very ones most inclined to rely on planning, as if the formal process will somehow make up for their own inadequacies. … Strategies grow initially like weeds in a garden: they are not cultivated like tomatoes in a hothouse.

Highly analytical approaches often suffered from “premature closure.”

… the analyst tends to want to get on with the more structured step of evaluating alternatives and so tends to give scant attention to the less structured, more difficult, but generally more important step of diagnosing the issue and generating possible alternatives in the first place.

So what does strategy require?

We know that it must draw on all kinds of informational inputs, many of them non-quantifiable and accessible only to strategists who are connected to the details rather than detached from them. We know that the dynamics of the context have repeatedly defied any efforts to force the process into a predetermined schedule or onto a predetermined track. Strategies inevitably exhibit some emergent qualities, and even when largely deliberate, often appear less formally planned than informally visionary. And learning, in the form of fits and starts as well as discoveries based on serendipitous events and the recognition of unexpected patterns, inevitably plays a role, if not the key role in the development of all strategies that are novel. Accordingly, we know that the process requires insight, creativity and synthesis, the very thing that formalization discourages.

[my bold]

If all this is true (and there is plenty of evidence to back it up), what does it mean for formal analytic processes? How can it be reconciled with the claims of Meehl and Kahneman that statistical models hugely outperform human experts? I’ll look at that next.

How big data still needs common sense

It turns out that one of the major advertised achievements of big data, Google Flu Trends (GFT), doesn't work, according to new research. Google claimed it could detect flu outbreaks with unparalleled real-time accuracy by monitoring related search terms. In fact, the numbers were 50% or more off.

Just because companies like Google can amass an astounding amount of information about the world doesn’t mean they’re always capable of processing that information to produce an accurate picture of what’s going on—especially if it turns out they’re gathering the wrong information. Not only did the search terms picked by GFT often not reflect incidences of actual illness—thus repeatedly overestimating just how sick the American public was—it also completely missed unexpected events like the nonseasonal 2009 H1N1-A flu pandemic.

If you wanted to project current flu prevalence, you would have done much better basing your models off of 3-week-old data on cases from the CDC than you would have been using GFT’s sophisticated big data methods.
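That comparison is easy to make concrete. Below is a toy sketch with made-up numbers, not the study’s data, just to illustrate the kind of head-to-head test involved: a simple lagged baseline against a noisy predictor that overestimates.

```python
# Toy comparison: a noisy, overestimating predictor vs. a baseline that just
# reuses three-week-old official case counts. All numbers are made up.
import numpy as np

rng = np.random.default_rng(3)
weeks = 60
true_flu = 100 + 50 * np.sin(np.arange(weeks) / 6.0)        # pretend prevalence

lagged_baseline = np.roll(true_flu, 3)                       # "3-week-old CDC data"
fancy_predictor = true_flu * 1.5 + rng.normal(0, 20, weeks)  # biased high plus noise

def mean_abs_error(pred, actual):
    return np.mean(np.abs(pred[3:] - actual[3:]))            # skip the warm-up weeks

print("lagged baseline MAE:", round(float(mean_abs_error(lagged_baseline, true_flu)), 1))
print("fancy predictor MAE:", round(float(mean_abs_error(fancy_predictor, true_flu)), 1))
# Under these assumptions the dull lagged series wins, which is roughly the
# shape of the published critique: systematic overestimation plus noise loses
# to a boring baseline.
```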

One additional problem is the actual Google methods (including the search terms and the underlying algorithm) are opaque, proprietary and difficult to replicate. That also makes it much harder for outside scientists to work out what went wrong, or improve the techniques over time.

It doesn't mean there isn't serious value in big data. But it is often overhyped and overgeneralized for commercial reasons by big tech, like Google and IBM and Facebook.

Most of the value isn't the bigness or the statistical sophistication, but the fact it is often new data, observations that we did not have before. I was at a conference last year where some data researchers talked about how they had improved ambulance response times in New York. They had a look at GPS data on where ambulances waited, and could move some of them closer to likely cases.

It is marvellous. But the key element, of course, is the fact that GPS receivers have become so cheap and omnipresent that we can put them in ambulances and smartphones. In essence, it's the same kind of advance as Galileo turning his telescope to the skies for the first time and observing the moons of Jupiter. It's something we've been doing for four hundred years: using new instruments to get better observations.

You still need to sift and weigh the evidence, and you still have the usual problems and serious risks involved with that. You need to be aware of assumptions, and ask the right questions, and look in the right places, and avoid cherry-picking data which confirms what you already think.

Big data techniques work well for some kinds of problems. There is genuine innovative value in new Bayesian inference techniques in particular.

But it can also lead to specific kinds of carelessness and blind spots, and to misapplication to the wrong problems. Overemphasizing correlation is one of them. Financial market players have repeatedly had to find the limits of similar quantitative techniques the hard way. Just ask Long-Term Capital or the mortgage finance industry.

And most interesting commercial and social problems are not very large aggregations of homogenous data, but smaller dynamic systems, which need very different techniques. There are some very important issues here, and specific kinds of mistakes which I will return to.

March 14, 2014 | Big Data, Quants and Models

How “decision science” often fails

Most of the value in the economy now comes not from making things, but from making decisions. Most managers, professionals and policymakers are in essence paid to exercise judgment and choose the best option in ambiguous situations.

It's also very clear that decisions by smart, trained, able people often turn out extremely badly. Corporations go bankrupt. Professionals miss or ignore debacles like Enron, Lehman and Madoff. Policymakers loaded down with advanced credentials manage to produce the worst financial crisis since the Great Depression, or spend a trillion dollars on a futile war to reshape the Middle East.

Some of that ocean of error is because of bad luck. Some of it is because of complexity. Some of it is an understandable failure to develop a gift for prophecy.

But most of it is because people simply don't notice contrary information or test assumptions. They misperceive, misunderstand, and fail to communicate. They fail to learn.

So how do we do better? There could hardly be a more important question. A necessary starting point is the research on decision-making.

One problem is that for many decades there was surprisingly little attention to how people make decisions in practice. In fact, there was little attention or research into decision-making at all, at least outside military strategy, until WW2. Companies thought about administration and organizational charts rather than about decisions themselves.

Then two major developments transformed that situation. One was personal experience, and the other a major theoretical breakthrough.

The experience was the huge success of planning and mathematical approaches during World War 2. Many of the leading theorists and decision-makers who dominated thinking in the 1950s and 1960s worked in logistics or military planning during the war, especially units devoted to the new discipline of “Operational Research.”

Many officers also came back from the war convinced that efficient planning and organizational hierarchy had produced victory. That conviction carried over into corporate America in the postwar world.

The theoretical breakthrough was von Neumann and Morgenstern's formalization of expected utility theory in The Theory of Games and Economic Behavior in 1947. They produced a consistent framework based on seven axioms. Around the same time Paul Samuelson was formalizing much of economics. So economists had a powerful theory which specified how decisions ought to be made. If you've been trained in economics, you have probably had little exposure to any way of analyzing decisions other than expected utility.
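For anyone who has never seen the machinery, the core of the theory is a short calculation: weight the utility of each outcome by its probability, then choose the option with the highest total. A minimal sketch, with payoffs and probabilities invented purely for illustration:

```python
# Expected utility in one line per option: sum probability * utility of each
# outcome, then pick the maximum. The options below are invented.
import math

def utility(wealth):
    return math.log(wealth)           # a standard risk-averse (log) utility

options = {
    "safe bond":   [(1.00, 105_000)],                     # certain outcome
    "risky stock": [(0.50, 150_000), (0.50, 80_000)],     # 50/50 gamble
    "lottery":     [(0.01, 2_000_000), (0.99, 95_000)],
}

def expected_utility(outcomes):
    return sum(p * utility(w) for p, w in outcomes)

for name, outcomes in options.items():
    print(f"{name:12s} expected utility = {expected_utility(outcomes):.4f}")

print("choose:", max(options, key=lambda k: expected_utility(options[k])))
```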

This microeconomic approach also developed into the sub-discipline which calls itself “decision science”. Researchers developed increasingly complex multi-attribute utility analysis models for use in corporations, government and the military. Immense resources of time and money were devoted to translating problems and processing them in the computer models.

The rational choice approach reached its maximum scholarly and cultural dominance in the 1960s. People believed that the major challenges of society would quickly yield to expert analysis, from the Vietnam War to the Great Society.

The 1960s hyper-rational Harvard Business School approach also produced the culture and thinking of the major consulting firms like McKinsey, whose business was based on the idea that a super-smart and rational MBA graduate could solve, in a few weeks, business problems that had defeated dumber CEOs with forty years' practical experience. Economics and finance are still centered around the expected utility model.

But the 1960s also produced major studies which showed that people didn't make decisions in a rational-choice way, they couldn't make decisions that way, and if they tried it often produced worse outcomes. Planning fell from favor. (I'll come back to this.)

In fact, there is no evidence that the elaborate expected utility approach produces better decisions. One of the primary books in the rational choice tradition is Reid Hastie and Robyn Dawes' Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making (which is an excellent book). They are disturbed by research (most specifically from 2006) which apparently shows unconscious or intuitive judgment outperforming formal rational choice.

These results are troubling to people, like the authors of this book, who believe that there are important advantages to systematic, controlled, deliberate and relatively conscious choice strategies. … In fact, we cannot cite any research-based proofs that deliberate choice habits in practical affairs are better than going with your gut intuitions. (p. 232)

This most specifically refers to the recent “adaptive” or “fast and frugal” alternative decision-making research. But there are limits to the applicability of “gut feelings” and intuition as well, which have to do with the limits of expertise and the relative advantages of “System 1” and “System 2” thinking.

Ever since the 1960s, different disciplines keep rediscovering that people do not make decisions in a rational-choice, expected-utility way in practice, often for good reasons. But much of economics, finance and risk management still either assumes that they do, or insists that they ought to.

 

 

March 6, 2014 | Big Data, Decisions, Uncategorized

Information is no longer an advantage

Information has become either commoditized, or potential bait for insider trading charges. That’s the reality the Street is coming to grips with.

This article on CNBC argues the SAC indictment is a sign “human-driven information advantages” are in deep trouble.

 

In pursuing and winning an indictment of SAC Capital Advisors LP, federal prosecutors are intent on shuttering one of the largest and most successful stock-trading hedge-fund firms in Wall Street history.

If they succeed, it will also mark the effective demise of a whole mode of hedge fund investing, that which hunts for returns based on fleeting, human-driven information advantages – legal or, allegedly, not – from analysts, industry executives and brokerage-house trading contacts.

Of course, this is a sign markets are working, in the broader sense. If you have a profitable edge, it attracts competition, which eliminates your advantage.

That applies even more when the distribution costs of information have plunged and technology has changed the environment almost beyond recognition. The Internet has melted barriers to entry.

Strong information niches used to exist. A few (like seeing client order flow) will continue to linger. Selling aggregate personal information is lucrative, at least for now. Material nonpublic information will still be temporarily valuable to those who like the hospitality in federal prisons.

But most durable information niches are gone like the buggy whip, or so commoditized there is no profit or advantage.

Competitive niches are always disappearing, and with them the firms that relied on them. “Information wants to be free,” say the tech people.

The “better information” niche in markets, surprise surprise, is disappearing too. Finance is not exempt from those trends.

Twenty years ago the Fed was barely announcing that it had decided to change fed funds at all. Now it puts out fifteen page PDFs of the minutes of its discussions and announces its policy through 2015 and beyond. Twenty years ago, if you wanted expert analysis of this morning’s economic data, you needed to have a rich relationship with an investment bank. Now you can read instant reaction and expert analysis on a dozen free blogs.

Does that mean, as the article implies, the only source of investment advantage in the future will be search bots roaming Twitter feeds?

No. That niche is almost impossible to sustain, too. Algorithms are extremely useful for certain kinds of problems. The problem is they are much more easily replicated than human judgment.

This is the major problem with the “big data” idea: the assumption that you will be able to sustain a durable advantage over the thousand other firms with Russian physics PhDs armed with copies of Mathematica and R doing Markov Chain Monte Carlo. If a hundred thousand bots programmed by a thousand quants are scanning data, maybe a few will have a technical advantage for weeks or months. But most are going to have no advantage over other similar algorithms. The niche will rapidly get competed away. Efficiency will eliminate returns. 990 of the thousand quants will be in trouble, especially when they have to explain to their investors the nature of their distinctive advantage.

High tech products typically get commoditized at an especially frightening pace – ask semiconductor or hard disk or plasma tv manufacturers. Today’s smartphone is tomorrow’s doorstop.

So what can you do? Information is generally only useful as an input into decisions. Decisions are where the value lies, because information is usually inherently ambiguous and contradictory. There is a wide “moat” around genuine skill at judgment, because it is so rare.

People find it very difficult to make good decisions, as any look at the research on decision traps or expert prediction will tell you. But that only means there’s a lot of upside opportunity for improvements.

Interestingly, algorithms – the simpler the better – are much better at combining information than human experts, as we’ve known for the fifty years since Paul Meehl discovered that simple linear models outperformed physicians at medical judgment. (Yes, computers were outperforming experts as far back as 1954. It’s not new.) But some people have an edge on recognition and cues and identifying assumptions and defining the problem. I’ll come back to this another time.
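The Meehl result is less mysterious once you see how little machinery is involved. Here is a sketch of the kind of model he meant; the cues and weights are invented for illustration, not taken from any of his studies:

```python
# A Meehl-style linear model: combine a handful of standardized cues with
# fixed weights, applied identically to every case. Cue names and weights
# are invented for illustration.

WEIGHTS = {
    "test_score":      0.4,
    "prior_incidents": -0.3,
    "age_factor":      0.2,
    "referral_rating": 0.1,
}

def linear_prediction(case):
    """Weighted sum of the cues; the same rule for every case, every time."""
    return sum(WEIGHTS[cue] * case[cue] for cue in WEIGHTS)

cases = [
    {"test_score": 1.2, "prior_incidents": -0.5, "age_factor": 0.3, "referral_rating": 0.8},
    {"test_score": -0.4, "prior_incidents": 1.6, "age_factor": -1.0, "referral_rating": 0.1},
]

for i, case in enumerate(cases, 1):
    print(f"case {i}: score = {linear_prediction(case):+.2f}")

# The model never gets tired and never anchors on yesterday's vivid case.
# That consistency is most of why such models match or beat expert judgment
# in the studies Meehl reviewed.
```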

So that’s the future. Information wants to be commoditized. But small, marginal improvements in the quality of decisions are less replicable and more sustainable than algorithmic niches. If you’re a human trader, this is what will keep you in business. (And it’s our business).

 

 

 

July 30, 2013 | Big Data, Decisions, Industry Trends, Market Behavior

The Tylenol Test

There is a certain cast of mind that finds it very difficult to grasp or accept that something may be a very good idea up to a point – but not beyond it.

Some people yearn for universal rules. If the tools of economics or quantitative methods or Marxism or sociology or Kantian ethics or government regulation have any usefulness, they believe, they ought to apply to everything, always. If there is a problem, the answer is always even more of the same.

But common sense says that’s not actually true. Two Tylenol pills will cure a headache. That does not mean taking forty pills at once will be even better.

Call that the Tylenol Test. What is good in small doses can kill you in excess.

You’ll see I often talk about the limits of things here, whether it’s big data or academic economics or the impact of monetary policy. That does not mean those things – and others like them – are not useful, or valuable, or sometimes essential. I support all of them extremely strongly in the right circumstances. They can clear confusion and cure serious headaches.

It’s not a binary good/bad distinction.

Instead, it’s the right medicine at the right time in the right amount that keeps you healthy and makes you ready to seize opportunities. (That echoes the ancient Aristotelian idea of the golden mean, of course.)

If you overdose, you’re in trouble. Maybe that’s a universal rule….

The most famous Excel error in the world

It isn’t often a storm on economic blogs spills over into mainstream debate, but the Reinhart-Rogoff controversy I tweeted about last week has become very colorful.

The story is this. The two prominent Harvard economists studied hundreds of historical examples, and suggested that once public debt reaches 90% of GDP, median growth rates fall 1%. Stripped of jargon, they argue that at some point if you have too much debt it will harm growth.

It now turns out that there was an Excel error in their calculations, and correcting it weakens the result. Because the politics of austerity and government debt are so hot, this has rapidly become one of the most famous spreadsheet mistakes in history. The graduate student who discovered the error is being credited with “shaking the global austerity movement.”
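The mechanics of the mistake are mundane. Here is a toy sketch, with made-up figures rather than the actual Reinhart-Rogoff dataset, of how a bucketed median moves when a spreadsheet formula’s range quietly leaves rows out:

```python
# Illustration of how a spreadsheet-style range error changes a bucketed
# median. The debt and growth figures are made up; this is not the actual
# Reinhart-Rogoff data.
import statistics

rows = [
    # (country, debt_to_gdp_pct, avg_growth_pct)
    ("A", 95, -0.5), ("B", 110, 0.8), ("C", 130, 2.6),
    ("D", 92, 1.9),  ("E", 105, 2.2), ("F", 98, 3.1),
]

high_debt = [growth for _, debt, growth in rows if debt >= 90]
print("median growth, all high-debt rows:", statistics.median(high_debt))

# A formula range that stops a few rows short (the Excel equivalent of
# =MEDIAN(B2:B5) instead of =MEDIAN(B2:B7)) silently drops observations:
truncated = [growth for _, debt, growth in rows[:4] if debt >= 90]
print("median growth, range stops early:", statistics.median(truncated))
```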

But the controversy is missing the point.

First, the 90% threshold isn’t the most important Reinhart-Rogoff finding. I was completely uninterested in it personally, because I found another of their findings so immediately useful in discussing the evolution of the economy in recent years. In their book This Time Is Different: Eight Centuries of Financial Folly, Reinhart and Rogoff argued that after a major financial crisis, it can take six to seven years, or even longer, for the economy to get back on its feet. That is very different to the normal recovery in postwar years which kicks in after a few quarters, and if true can dramatically transform how policymakers should respond to the recession.

It was particularly important when the Fed was talking about exit strategy and how it would raise interest rates in 2009, less than one year after the crisis. I got some particularly long pauses when I brought up this point about seven dry years with very senior officials. One even told me that it might not apply this time because policy would be much better than in those other instances.

So that Reinhart-Rogoff result was fundamental in trying to perceive the key features of the situation. The claim about the potential length of a recession was the single most useful piece of economic research I read at the time, and has been a very good pointer to the evolution of the economy.

The value of getting your hands dirty

Second, I was uninterested in the 90% result because I just don’t believe in sharp thresholds.  Instead, what is interesting and admirable in Reinhart-Rogoff is not the spreadsheet, but the fact that they have attempted to grapple more directly with history and seek out common threads. I have much more respect for economic history than macroeconomic modeling, because history is more likely to force you to confront actual situations and data in a concrete way. Insofar as Ben Bernanke has been successful, it’s been because he knows the history of the Great Depression so thoroughly, as one of his central academic areas of knowledge.

Macroeconomic modellers tend to dismiss history as mere anecdote. History, however, forces you to confront something outside your own head, not cherry-pick a few general features that “explain” things. Many economists are just lazy and rely on the few main government statistical series and regressions. They are not interested in actual policy dilemmas or the origin of the data, but in the elegance of the mathematics. Reinhart and Rogoff are a major exception. They have crawled through the details of old Belgian bank crashes and 19th-century depressions. They have added useful data to the debate. Studying complex, messy history is more likely to force you to test your preconceptions.

Modern economic history is still a small sample, however. When big data works well, it can examine events that can be precisely replicated millions of times a day, such as Google searches or credit card transactions. But there are simply not enough major crises or depressions in modern economic history, nor enough data about them, to establish any kind of iron law that would enable anyone to claim 90% debt/GDP is a kind of cliff for growth. So it should not be seen as a prediction, so much as a potential risk to watch out for.

The Murky Austerity Debate

And that leads to the risks of austerity. The whole “austerity” debate is mostly people talking at cross-purposes. It is mostly a difference in frames and focus. One side believes it is essential to boost aggregate demand, and hence growth, as much as possible in the short run. They focus on the damage caused by unemployment, and in particular on the risk that there could be long-lasting, persistent effects on both individuals and the labor market as a whole. Rising government spending is a small price to pay for stopping this damage, and once growth resumes the debt will be steadily less of a problem. Paul Krugman and, in a less shrill way, Janet Yellen, the Vice-Chair of the Federal Reserve, would be examples of people who think this way.

The other side looks at rising government debt levels and fears that they will lead to permanently higher taxes, “crowded-out” private investment, and, most importantly, the risk of a catastrophic bond market crisis if markets lose confidence in buying the debt. They focus on the need to avoid future crises, rather than the current problem of insufficient demand or unemployment. This view is more popular on the right, which is suspicious of government spending in the first place.  The 90% controversy only relates to a small part of this.

There is no question that high levels of debt make you more vulnerable if something goes wrong. Ask anyone who has lost their home to foreclosure: debt is often dangerous.

Debt means you lose flexibility. Already, if there is another downturn at some point, there is very little scope for governments to launch massive new stimulus spending initiatives, because debt levels are already so high and legislative patience has worn thin. Lenders are often willing to lend you too much when you don’t need it, and nothing at all when you do. Interest rates can rise just when you can least afford it. If you lever up with debt, you can get stellar returns when things are going well, but small dips can wipe out your equity and make you bankrupt. Ask many hedge funds, or banks, or homeowners. You become much more exposed to small dips or surprises.
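The arithmetic of that vulnerability is worth spelling out. A minimal sketch with round, illustrative numbers:

```python
# Leverage arithmetic: the same move in asset prices, with and without debt.
# Round, illustrative numbers; interest costs are ignored for simplicity.

def equity_return(asset_move_pct, leverage):
    """Return on equity when assets = leverage * equity."""
    return asset_move_pct * leverage

for move in (+10, -10, -20):
    unlevered = equity_return(move, leverage=1)
    levered = equity_return(move, leverage=5)
    print(f"assets move {move:+d}%: unlevered equity {unlevered:+d}%, "
          f"5x levered equity {levered:+d}%")

# At 5x leverage a +10% year looks like genius (+50% on equity), but a -20%
# dip wipes out the entire equity cushion (-100%).
```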

So what should we do?

Think of it this way. Your house has sprung a leak in the roof. You’d be crazy not to spend money immediately to repair it, before the water rots the rafters and you ultimately have to spend far more to fix a much larger problem. But that argument does not mean you should take out a second mortgage, load up all your credit cards, and buy a new car as well. If you don’t watch your overall debt, the next time you need to repair the roof the credit card will be maxed out, your income may be sucked away by interest payments, and you will be reduced to living on beans and rice while the rain pours into your kitchen.

The austerity debate has become completely stuck, particularly in Europe.  It is not a binary question. The clear answer is to spend whatever money is necessary right now to fix the roof. Short-term cuts are a mistake when the economy is weak and fragile. But you ALSO have to make sure you don’t leave yourself vulnerable to losing your house altogether to foreclosure in the medium term because of reckless spending. And that means being prepared to cut back on popular medium-term objectives, most obviously the welfare and entitlement spending which dominate budgets.

There is a trade-off: higher stimulus spending now for lower pensions in the future, before the market loses confidence.

So this current Reinhart-Rogoff storm is largely about trying to evade a more sensible debate about the trade-off between shoring up the economy in the short-run and making sure we do not get an even worse crisis in the medium-run. That’s much more important than a spreadsheet error.

Krugman in particular is notorious for dismissing worries about government debt. He is certainly right that US Treasury yields are still near historic lows, despite many predictions that “bond market vigilantes” would take revenge on excess government spending. But the example of Italy, Spain, and Greece shows that bond yields can rise substantially over short periods of time.

Krugman is a little like the famous metaphor of the man who jumps off the top of the Empire State Building, and as he passes the thirtieth floor shouts to a friend “those fools told me this was dangerous, but look, I’m fine!”

Soros and patterns of error

David Brooks talks here about the limits of big data. Big data looks for correlation, rather than causation. The trouble is people are prone to “gigantic and unpredictable changes in taste and behavior”, and those are hard to anticipate in short runs of recent data.

Even more importantly,

Another limit is that the world is error-prone and dynamic. I recently interviewed George Soros about his financial decision-making. While big data looks for patterns of preferences, Soros often looks for patterns of error. People will misinterpret reality, and those misinterpretations will sometimes create a self-reinforcing feedback loop. Housing prices skyrocket to unsustainable levels.

If you are relying just on data, you will have a tendency to trust preferences and anticipate a continuation of what is happening right now. Soros makes money by exploiting other people’s misinterpretations and anticipating when they will become unsustainable.

Of course, Soros has been talking about “reflexivity” and people's beliefs for twenty years, since The Alchemy of Finance.

This is one reason why Alucidate is so focused on perception. People misinterpret reality, and they do so in patterns.
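A toy version of that self-reinforcing loop, with purely illustrative parameters (this is not a model Soros uses):

```python
# Toy self-reinforcing feedback loop: rising prices strengthen beliefs,
# stronger beliefs drive more buying, which raises prices further, until the
# gap between price and underlying value becomes unsustainable.
# All parameters are illustrative.

fundamental_value = 100.0
price = 100.0
belief = 0.0     # how convinced the crowd is that prices only go up

for period in range(1, 13):
    belief = 0.8 * belief + 0.2 * (price - fundamental_value) / fundamental_value
    price *= 1 + 0.02 + 0.5 * belief      # baseline drift plus belief-driven buying
    gap = price / fundamental_value - 1
    print(f"period {period:2d}: price {price:7.1f}, gap over value {gap:+.0%}")

# Nothing in the loop refers to new information about value; the price is
# feeding on beliefs about the price. Extrapolating "preferences" from the
# data alone would happily ride that pattern of error over the cliff.
```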

 

April 16, 2013 | Big Data, Investment, Market Behavior, Perception