
Foxes and Hedgehogs

Looking for leverage instead of predictions

I’ve been throwing stones at prediction in the last few posts. Here’s another angle. I’ve talked about the distinction between hedgehogs and foxes before. These differences in mindset go very deep and surface in all kinds of ways. Another way to put the distinction is as the difference between an academic, analytic approach to decisions and a practitioner’s or leader’s approach.

As Richard Rumelt puts it in his book Good Strategy Bad Strategy: The Difference and Why It Matters:

Whereas a social scientist seeks a diagnosis that best predicts outcomes, good strategy tends to be based on the diagnosis providing leverage over outcomes.

Strategy is about doing something, he says,  not passively predicting or forecasting things. That means if you can’t take action, or shape the situation, you are very vulnerable. And the first thing to do is usually have a plan B.

True enough, a failed prediction may reveal a wrong assumption or a mistake. But by that point it’s usually too late. Practitioners have to survive another day, not develop parsimonious explanations and general theories that are true for all time.

That means you have to think about and test your assumptions before a big failure.  But most people find it extremely hard to see their assumptions, let alone test them and adjust their view.

In theory people should learn from their mistakes and failed predictions. In practice they most often don’t. It’s an anomaly. Or an exception. Or they knew all along anyway (i.e. hindsight bias).  The reality is people resist changing their minds, predictions or no predictions. They get too entrenched in a view. That’s the fundamental problem that needs solving.

May 12, 2015 | Adaptation, Assumptions, Decisions, Foxes and Hedgehogs

Look for the balances

One of the most intractable policy problems is that people have a strong tendency to ignore or deny any downsides or side-effects of their actions. Instead, they prefer to talk in terms of unitary goals, universal rights, and clear, consistent principles. Much of the educational training of elites, especially economists, inclines them to deal in generalizations. Hedgehogs are temperamentally inclined to look for single overarching explanations.

In actual practice, however,

The need to adapt to particular circumstances … runs counter to our tendency to generalize and form abstract plans of action.

Dörner is the brilliant German psychologist I mentioned last week, who studies the “logic of failure” and the typical patterns of errors people make in decisions. Many of the essentials of better decision-making come down to threading your way between two opposite errors, and so maintaining a balance.

One problem is ignoring trade-offs and incompatibilities between goals:

Contradictory goals are the rule, not the exception, in complex situations.

It is also difficult to judge the right amount of information-gathering:

We combat our uncertainty either by acting hastily on the basis of minimal information, or by gathering excessive information, which inhibits action and may even increase our uncertainty. Which of these patterns we follow depends on time pressure, or the lack of it.

It is also difficult to judge the right balance of specific versus general considerations: i.e. how unique a situation is. Experts typically overestimate the unique factors (“this time it’s different”) at the expense of the general class of outcomes, or what Kahneman calls the “outside view” (“what usually happens in these situations?”). The necessities of political rhetoric and of motivating people to take action also often require politicians to ignore or downplay conflicting goals.
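One simple way to picture the outside view is as a weighted blend of the case-specific estimate and the reference-class average. The sketch below is my own illustration, not a formula from Kahneman or Dörner; the numbers and the weight are made-up assumptions.

```python
# A minimal sketch of blending the "inside view" (case-specific estimate) with the
# "outside view" (what usually happens in the reference class). Hypothetical numbers.

def blended_forecast(inside_estimate: float, class_average: float,
                     weight_on_outside: float = 0.5) -> float:
    """Shrink a case-specific estimate toward the reference-class average."""
    return weight_on_outside * class_average + (1 - weight_on_outside) * inside_estimate

# Example: a team expects its project to take 12 months ("this time it's different"),
# but comparable projects have historically taken 24 months.
print(blended_forecast(inside_estimate=12, class_average=24, weight_on_outside=0.6))  # -> 19.2
```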

So in any complicated situation, judgment consists of striking many balances. One of the most useful ways to look for blind spots is, therefore, to look for the hidden or ignored balances, trade-offs, and conflicts. And one sign of that, according to Dörner, is that good decision-makers and problem solvers tend to use qualified language: “sometimes” versus “every time”, “frequently” instead of “constantly” or “only.”

Everything at its proper time and with proper attention to existing conditions. There is no universally applicable rule, no magic wand, that we can apply to every situation and to all the structures we find in the real world. Our job is to think of, and then do, the right things at the right times in the right way. There may be rules for accomplishing this, but the rules are local – they are to a large extent dictated by specific circumstances. And that means in turn that there are a great many rules. (p192)

And that means in turn that any competent or effective policy institution, like a central bank, cannot easily describe its reaction function in terms of clear consistent principles or rules. When it tries, markets and audiences are going to be confused and disappointed.

 

September 11, 2014 | Base Rates, Expertise, Foxes and Hedgehogs

System Blindness

Good news: GDP grew at 4% and the winter surprise has faded. As usual, there is endless analysis available for free. These days we swim in a bottomless ocean of economic commentary.

Let’s turn to something that might give people an edge in making decisions instead.  One of the main reasons people and companies get into trouble is that they don’t think in terms of systems. I noted one major source of this approach was Jay Forrester’s work at MIT beginning in the 1960s. His successor at MIT is John Sterman, who calls it system blindness.

In his best-known shorter paper, Sterman documents the multiple problems decision-makers have in dealing with dynamic complexity. We haven’t evolved to deal with complex systems, he says. Instead, we are much quicker to deal with things with obvious, direct, immediate, local causes. (See bear. Run.)

So people have inherent deep difficulties with feedback loops, for example.

Like organisms, social systems contain intricate networks of feedback processes, both self-reinforcing (positive) and self-correcting (negative) loops. However, studies show that people recognize few feedbacks; rather, people usually think in short, causal chains, tend to assume each effect has a single cause, and often cease their search for explanations when the first sufficient cause is found. Failure to focus on feedback in policy design has critical consequences.

As a result, policies and decisions often become actively counterproductive, producing unexpected side-effects and counter-reactions. Such ‘policy resistance’ means major decisions frequently have the opposite effect to that intended (such as building major roads but producing even more congestion, or suppressing forest fires and producing much bigger blazes).

People also have serious problems understanding time and delays, which often leads to oversteering at the wrong times and to wild oscillations and swings. They have difficulty with short-term actions that produce long-term effects. They assume causes must be proportionate to effects. (Think of the long and variable lags in monetary policy, and the tendency to oversteer.)
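Here is a minimal sketch of the delay problem, a toy model of my own rather than anything from Sterman’s paper: a decision-maker steers toward a target, but every correction only takes effect several periods later, so corrections keep arriving after the gap they were meant to close has already changed.

```python
# Toy illustration (assumed parameters) of oversteer caused by delay:
# react to today's gap while ignoring the corrections already in the pipeline.

DELAY = 4      # periods before an action has any effect
GAIN = 0.5     # how aggressively the decision-maker reacts to the current gap
TARGET = 100.0

state = 50.0
pipeline = [0.0] * DELAY   # actions taken but not yet felt

for period in range(30):
    action = GAIN * (TARGET - state)   # respond to the visible gap only
    pipeline.append(action)
    state += pipeline.pop(0)           # the action from DELAY periods ago arrives now
    print(f"period {period:2d}: state = {state:7.1f}")

# Instead of settling at 100, the state overshoots and swings around the target
# (with these parameters the swings actually grow), purely because of the delay.
```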

Decision-makers have problems with  stocks and flows. In essence, a stock is the water in the bath. A flow is the water running from the tap.

People have poor intuitive understanding of the process of accumulation. Most people assume that system inputs and outputs are correlated (e.g., the higher the federal budget deficit, the greater the national debt will be). However, stocks integrate (accumulate) their net inflows. A stock rises even as its net inflow falls, as long as the net inflow is positive: the national debt rises even as the deficit falls—debt falls only when the government runs a surplus; the number of people living with HIV continues to rise even as incidence falls—prevalence falls only when infection falls below mortality. Poor understanding of accumulation has significant consequences for public health and economic welfare.
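The deficit-and-debt example can be reduced to a few lines of arithmetic, with invented numbers, just to show the stock rising while its inflow falls:

```python
# A stock integrates its net inflow: the debt keeps rising as long as the
# deficit is positive, even while the deficit shrinks. Numbers are made up.

debt = 1000.0
deficits = [100, 80, 60, 40, 20, 0, -20]   # falling deficits, then a surplus

for year, deficit in enumerate(deficits, start=1):
    debt += deficit
    print(f"year {year}: deficit = {deficit:4d}, debt = {debt:6.0f}")

# Debt climbs from 1,000 to 1,300 while the deficit falls every single year;
# it only starts to decline in the final year, when the flow turns negative.
```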

People also fail to learn from experience, especially in groups. They don’t test beliefs. Instead, they see what they believe, and believe what they see. They use defensive routines to save face. They avoid testing their beliefs, especially in public.

Note that these are not the problems that are getting prime attention in behavioral economics, let alone mainstream economics. Why don’t system ideas get more attention? Sterman notes that, more generally, people often fail to learn from hard evidence.

More than 2 and one-half centuries passed from the first demonstration that citrus fruits prevent scurvy until citrus use was mandated in the British merchant marine, despite the importance of the problem and unambiguous evidence supplied by controlled experiments.

For me, one additional major reason might be that we are so used to the analytic approach: break things down into their component parts and examine each separately. This has worked extremely well for decades in science and business when applied to things which don’t change and adapt all the time. Systems thinking, by contrast, is about looking at the interaction between elements. It is synthesis, “joining the dots”, putting the pieces together and seeing how they work and interrelate in practice.

And that might be an additional explanation for the hedgehog versus fox distinction. You recall the fundamentally important research that finds that foxes, “who know many things”, outperform hedgehogs, “who know one big thing”, at prediction and decision. Hedgehogs are drawn more to analysis and universal explanation; foxes are drawn more to synthesis and observation.

As a result, hedgehogs have much greater difficulty with systems thinking. Foxes are more likely to recognize and deal with system effects. If you confront a complex adaptive system (like the economy or financial markets), that gives foxes an edge.

 

 

Two Kinds of Error (part 4)

It’s sometimes very satisfying to see deeper linkages and symmetries in how people approach issues. It can also be highly useful.  I’ve been talking about a string of related oppositions, which circle around some deeper features of how people make mistakes. You can look for variable error versus systemic error, precision versus accuracy, fragile versus antifragile, or (less appropriately) analytic versus intuitive. People often reinvent the same distinction using different names in different disciplines.

And if you think about it for a moment, this also maps onto the deeper distinction between foxes and hedgehogs that I’ve talked so much about. Hedgehogs are attuned to variable error and consistency. Foxes are attuned to systemic error and change.

Philip Tetlock explains the difference between foxes and hedgehogs in a different way in his famous book Expert Political Judgment: How Good Is It? How Can We Know? He largely relates it to a motivated need for cognitive closure. He cites earlier work by Kruglanski and Webster suggesting that people differ in their tolerance for ambiguity and their need to achieve resolution. This makes the fox:hedgehog distinction largely a matter of personality: hedgehogs “seize” on a view and then “freeze” it. Says Tetlock (in a footnote on p75):

High need-for-closure, integratively simple individuals are like Berlin’s hedgehogs: they dislike ambiguity and dissonance in their personal and political lives, place a premium on parsimony, and prefer speedy resolutions of uncertainty that keep prior opinions intact.  Low need-for-closure, integratively complex individuals are like Berlin’s foxes: they are tolerant of ambiguity and dissonance, curious about other people’s points of view, and open to the possibility they are wrong.

But it is more than a personality difference or appetite for ambiguity. It reflects fundamental differences in how you deal with two inherently different kinds of error, which objectively exist regardless of personality. Success depends on which kinds of errors are most likely in a particular environment, and on the costs of being wrong. Careful, methodical hedgehogs will often do better in static environments, because they steadily eliminate variable error. Foxes will do better in dynamic, changeable environments, because they are more likely to recognize change and reduce systemic error.

Indeed, Tetlock argues this point about the relevance of environment towards the end of his book. There is a trade-off, he says, between theory-driven (hedgehog) and imagination-driven (fox) thinking (p214).

Hedgehogs put more faith in theory-driven judgments and keep their imaginations on tighter leashes than do foxes. Foxes are more inclined to entertain dissonant scenarios that undercut their own beliefs and preferences. .. Foxes were better equipped to survive in rapidly changing environments in which those who abandoned bad ideas quickly held the advantage. Hedgehogs were better equipped to survive in static environments that rewarded persisting with tried-and-true formulas.

It is not either-or. Most people, most of the time, tend to the hedgehog end of the scale, he says. Everyone is reluctant to confront disagreeable evidence that counts against their views, but hedgehogs are “less apologetic” about doing so.

What does this mean? There are patterns in how people make mistakes. One of them is that if you observe hedgehog-type thinking in practice, you can anticipate certain kinds of errors and blind spots, especially lack of attention to dynamics and boundary conditions. And hedgehogs are omnipresent making decisions in business, finance and policymaking.

It also means the more volatility and structural change and disturbance in the environment, the more mistakes the hedgehogs will make.  Spotting such current observable patterns of error is more practical and tractable than making prophecies and forecasts about the future.

June 9, 2014 | Adaptation, Foxes and Hedgehogs

Two Kinds of Error (part 3)

I’ve been talking about the difference between variable or random error, and systemic or constant error. Another way to put this is the difference between precision and accuracy. As business measurement expert Douglas Hubbard explains in How to Measure Anything: Finding the Value of Intangibles in Business,

“Precision” refers to the reproducibility and conformity of measurements, while “accuracy” refers to how close a measurement is to its “true” value. .. To put it another way, precision is low random error, regardless of the amount of systemic error. Accuracy is low systemic error, regardless of the amount of random error. … I find that, in business, people often choose precision with unknown systematic error over a highly imprecise measurement with random error.
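To make the distinction concrete, here is a minimal sketch with made-up numbers (my own illustration, not an example from Hubbard’s book): one simulated instrument is precise but biased, the other accurate but noisy.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0

# Low random error, constant systematic error (precise but inaccurate).
precise_but_biased = [TRUE_VALUE + 5 + random.gauss(0, 0.5) for _ in range(1000)]
# High random error, no systematic error (imprecise but accurate).
accurate_but_noisy = [TRUE_VALUE + random.gauss(0, 10) for _ in range(1000)]

for name, readings in [("precise but biased", precise_but_biased),
                       ("accurate but noisy", accurate_but_noisy)]:
    print(f"{name}: mean = {statistics.mean(readings):6.1f}, "
          f"stdev = {statistics.stdev(readings):5.1f}")

# The first series looks reassuringly consistent (tiny spread) but its average is
# off by 5 no matter how many readings you take; the messy second series, once
# averaged, converges on the true value.
```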

Systemic error is also, he says, another way of saying “bias”, especially expectancy bias – another term for confirmation bias, seeing what we want to see –  and selection bias – inadvertent non-randomness in samples.

Observers and subjects sometimes, consciously or not, see what they want. We are gullible and tend to be self-deluding.

That brings us back to the problems on which Alucidate sets its sights. Algorithms can eliminate most random or variable error, and bring much more consistency. But systemic error is then the main source of problems or differential performance. And businesses are usually knee-deep in it, partly because the approaches which reduce variable error often increase systemic error in practice. There’s often a trade-off between dealing with the two kinds of error, and that trade-off may need to be set differently in different environments.

I  like most of Hubbard’s book, which I’ll come back to another time. It falls into the practical, observational school of quantification rather than the math department approach, as Herbert Simon would put it.

But one thing he doesn’t focus on enough is learning ability and iteration – the ability to change your model over time. If you shoot at the target and observe you hit slightly off center, you can adjust the next time you fire. Sensitivity to evidence and the ability to learn is the most important thing to watch in macro and market decision-making. In fact, the most interesting thing in the recent enthusiasm about big data is not the size of datasets or finding correlations. It’s the improved ability of computer algorithms to test and adjust models – Bayesian inversion. But that has limits and pitfalls as well.
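The shooting-at-the-target idea can be sketched as a simple updating loop, my own toy illustration of iterative adjustment rather than anything from Hubbard: keep a running estimate of your systematic error and revise it after every observation.

```python
import random

random.seed(0)

TRUE_BIAS = 4.0        # the rifle's real (unknown) systematic offset
LEARNING_RATE = 0.3    # how much weight each new observation gets (assumed)
estimate = 0.0         # our current belief about the offset

for shot in range(1, 11):
    # We aim after correcting for our current estimate, so the observed miss
    # reflects whatever bias we have not yet learned, plus random scatter.
    observed_miss = TRUE_BIAS - estimate + random.gauss(0, 1)
    estimate += LEARNING_RATE * observed_miss
    print(f"shot {shot:2d}: miss = {observed_miss:5.2f}, bias estimate = {estimate:5.2f}")

# The estimate converges toward the true offset of 4 and the misses shrink:
# the systematic error gets learned away, leaving only the random scatter.
```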

Two kinds of error (part 1)

How do we explain why rigorous, formal processes can be very successful in some cases, and disastrous in others? I was asking this in reference to Henry Mintzberg’s research on the disastrous performance of formal planning. Mintzberg cites earlier research on different kinds of errors in this chart (from Mintzberg, 1994, p327).

 
[Figure: Mintzberg’s chart of the two kinds of error, analytic versus intuitive (Mintzberg, 1994, p327)]

.. the analytic approach to problem solving produced the precise answer more often, but its distribution of errors was quite large. Intuition, in contrast, was less frequently precise but more consistently close. In other words, informally, people get certain kinds of problems more or less right, while formally, their errors, however infrequent, can be bizarre.

This is important, because it lies underneath a similar distinction that can be found in many other places. And because the field of decision-making research is so fragmented, the similar answers usually stand alone and isolated.

Consider, for example, how this relates to Nassim Nicholas Taleb’s distinction between Fragile and Antifragile approaches and trading strategies. Think of exposure, he says, and the size and risk of the errors you may make.

A lot depends on whether you want to rigorously eliminate small errors, or watch out for really big errors.
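As a toy illustration of that trade-off (my own invented numbers, not Mintzberg’s or Taleb’s data), compare a “formal” process that is usually exactly right but occasionally bizarrely wrong with an “informal” one that is never exact but always close:

```python
import random
import statistics

random.seed(42)

def formal_error():
    # Exactly right 95% of the time, wildly wrong the other 5%.
    return 0.0 if random.random() < 0.95 else random.choice([-50.0, 50.0])

def informal_error():
    # Never exactly right, but always within a modest band.
    return random.uniform(-5.0, 5.0)

formal = [formal_error() for _ in range(10_000)]
informal = [informal_error() for _ in range(10_000)]

for name, errors in [("formal", formal), ("informal", informal)]:
    print(f"{name:9s}: exactly right = {sum(e == 0 for e in errors) / len(errors):.0%}, "
          f"worst miss = {max(abs(e) for e in errors):5.1f}, "
          f"mean abs error = {statistics.mean(abs(e) for e in errors):4.2f}")

# Which profile you prefer depends on how costly the rare bizarre miss is
# relative to living with constant small imprecision.
```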

“Strategies grow like weeds in a garden”. So do trades.

How much should you trust “gut feel” or “market instincts” when it comes to making decisions or trades or investments? How much should you make decisions through a rigorous, formal process using hard, quantified data instead? What can move the needle on performance?

In financial markets more mathematical approaches have been in the ascendant for the last twenty years, with older “gut feel” styles of trading increasingly left aside. Algorithms and linear models are much better at optimizing in specific situations than the most credentialed people are (as we’ve seen.) Since the 1940s business leaders have been content to have operational researchers (later known as quants) make decisions on things like inventory control or scheduling, or other well-defined problems.

But rigorous large-scale planning to make major decisions has generally turned out to be a disaster whenever it has been tried. It has generally been about as successful in large corporations as planning also turned out to be in the Soviet Union (for many of the same reasons). As one example, General Electric originated one of the main formal planning processes in the 1960s. The stock price then languished for a decade. One of the very first things Jack Welch did was to slash the planning process and planning staff.  Quantitative models (on the whole) performed extremely badly during the Great Financial Crisis. And hedge funds have increasing difficulty even matching market averages, let alone beating them.

What explains this? Why does careful modeling and rigor often work very well on the small scale, and catastrophically on large questions or longer runs of time? This obviously has massive application in financial markets as well, from understanding what “market instinct” is to seeing how central bank formal forecasting processes and risk management can fail.

Something has clearly been wrong with formalization. It may have worked wonders on the highly structured, repetitive tasks of the factory and clerical pool, but whatever that was got lost on its way to the executive suite.

I talked about Henry Mintzberg the other day. He pointed out that, contrary to myth, most successful senior decision-makers are not rigorous or hyper-rational in planning. Quite the opposite. In the 1990s he wrote a book, The Rise and Fall of Strategic Planning, which tore into formal planning and strategic consulting (and which is where the quote above comes from).

There were three huge problems, he said. First, planners assumed that analysis can provide synthesis or insight or creativity. Second, that hard quantitative data alone ought to be the heart of the planning process. Third, that the context for plans is stable, or predictable. All of these assumptions were just wrong. For example,

For data to be “hard” means that they can be documented unambiguously, which usually means that they have already been quantified. That way planners and managers can sit in their offices and be informed. No need to go out and meet the troops, or the customers, to find out how the products get bought or the wars get fought, or what connects those strategies to that stock price; all that just wastes time.

The difficulty, he says, is that hard information is often limited in scope, “lacking richness and often failing to encompass important noneconomic and non-quantitative factors.” Often hard information is too aggregated for effective use. It often arrives too late to be useful. And it is often surprisingly unreliable, concealing numerous biases and inaccuracies.

The hard data drive out the soft, while that holy ‘bottom line’ destroys people’s ability to think strategically. The Economist described this as “playing tennis by watching the scoreboard instead of the ball.” ..  Fed only abstractions, managers can construct nothing but hazy images, poorly focused snapshots that clarify nothing.

The performance of forecasting was also woeful, little better than the ancient Greek belief in the magic of the Delphic Oracle, and “done for superstitious reasons, and because of an obsession with control that becomes the illusion of control. ”

Of course, to create a new vision requires more than just soft data and commitment: it requires a mental capacity for synthesis, with imagination. Some managers simply lack these qualities – in our experience, often the very ones most inclined to rely on planning, as if the formal process will somehow make up for their own inadequacies. … Strategies grow initially like weeds in a garden: they are not cultivated like tomatoes in a hothouse.

Highly analytical approaches often suffered from “premature closure.”

.. the analyst tends to want to get on with the more structured step of evaluating alternatives and so tends to give scant attention to the less structured, more difficult, but generally more important step of diagnosing the issue and generating possible alternatives in the first place.

So what does strategy require?

We know that it must draw on all kinds of informational inputs, many of them non-quantifiable and accessible only to strategists who are connected to the details rather than detached from them. We know that the dynamics of the context have repeatedly defied any efforts to force the process into a predetermined schedule or onto a predetermined track. Strategies inevitably exhibit some emergent qualities, and even when largely deliberate, often appear less formally planned than informally visionary. And learning, in the form of fits and starts as well as discoveries based on serendipitous events and the recognition of unexpected patterns, inevitably plays a role, if not the key role in the development of all strategies that are novel. Accordingly, we know that the process requires insight, creativity and synthesis, the very thing that formalization discourages.

[my bold]

If all this is true (and there is plenty of evidence to back it up), what does it mean for formal analytic processes? How can it be reconciled with the claims of Meehl and Kahneman that statistical models hugely outperform human experts? I’ll look at that next.

Deeper differences on Ukraine

This is an important observation from Timothy Garton Ash the other day on Ukraine:

Russia’s strongman garners tacit support, and even some quiet plaudits, from some of the world’s most important emerging powers, starting with China and India.

What explains that?

What the west faces here is the uncoiling of two giant springs. One, which has been extensively commented upon, is the coiled spring of Mother Russia’s resentment at the way her empire has shrunk over the past 25 years – all the way back from the heart of Germany to the heart of Kievan Rus.

The other is the coiled spring of resentment at centuries of western colonial domination. This takes very different forms in different Brics countries and members of the G20. They certainly don’t all have China’s monolithic, relentless narrative of national humiliation since Britain’s opium wars. But one way or another, they do share a strong and prickly concern for their own sovereignty, a resistance to North Americans and Europeans telling them what is good for them, and a certain instinctive glee, or schadenfreude, at seeing Uncle Sam (not to mention little John Bull) being poked in the eye by that pugnacious Russian. Viva Putinismo!

This is a quite different matter than accusations that Obama or the EU have lost credibility. Western elites often fail to grasp that other powers take a very different view of events, regardless of our own current actions, and may work to counteract some of our preferred legal and political values. Oh sure, you might say, we know that, it’s obvious in principle … except the evidence shows we frequently forget it.

For example, consider Merkel’s assertion that Putin has “lost his grip on reality.” It’s not that we misunderstand his view or perceptions or motivations, you see, he’s clearly just gone nuts. Loo-la. With tanks. Or has he? It’s particularly hard to understand for many EU elites, whose entire project for three generations has been to dilute or pool sovereignty.

There are two lessons: 1) People actually find it extremely hard to see events from different viewpoints, all the more so when they have prior commitments, or confront evidence that their own policy hasn’t worked, or when important values and taboos are at stake. There are countless examples of foreign policy crises worsened by miscommunication and wrong assumptions. It happens to the most brilliant statesmen and accomplished leaders. You have to take this into account in crises. Indeed, it’s no different from central bank officials trying to understand bond traders, and vice versa.

To take just a few pieces of evidence, fifty years of work in social psychology since Leon Festinger has shown that people have a remarkable ability to ignore information which is dissonant with their current view. Philip Tetlock’s more recent work also shows the most prominent experts are most often hedgehog thinkers who know “one thing” and one perspective – and that the track record of most country experts and intelligence agencies (and markets) on foreign crises is woeful.

It’s not that alternative views are necessarily justified, or right, or moral: but ignoring their existence rarely helps. The most difficult thing to get right in crises is usually not the facts on the ground so much as the prior facts in your head.

2) The international system is just that: a system, with both balancing and amplifying feedback loops. But the human mind has a natural tendency to want to see things in a straightforward, linear way. I’ll come back to issues of system dynamics soon, as another major alternative to the simplistic ideas about decision-making that regularly lead people towards failure.

 

The continuing rediscovery of what decision-makers actually do

Decision-makers should assemble all the evidence, reflect deeply on all the relevant data, calculate the most advantageous option using a rigorous model,  and then move decisively. Right?

No. In practice it often leads to ignoring crucial factors, and actually explains many of our policy and business disasters. I’ve been looking at how the rational-choice ideal of decision-making started cracking up in the 1970s, although it still dominates most theoretical understanding of decisions in economics, finance and big consultancy shops (mantra: “mutually exclusive, collectively exhaustive”) today.

One major problem is that it turns out most actual decision-makers – managers, political leaders, officers – don’t act this way.  People seem to keep rediscovering this fact every few years.

One of those major “discoveries” came in an article by a Canadian management researcher, Henry Mintzberg, in 1971, and in subsequent books and Harvard Business Review articles. Fifty years of management study had gone wrong, he said, because it did not look at what managers (or Presidents) actually do with their day. Managers are not systematic, reflective planners. Instead, they mostly don’t attend to any one issue for more than nine minutes at a time. They do not want aggregated, general information coming from formal management information systems. Instead, they almost always want verbal information, often highly lacking in rigor. Formal reports most often get tossed aside.

Managers seem to cherish “soft” information, especially gossip, hearsay, and speculation. Why? The reason is its timeliness; today’s gossip may be tomorrow’s fact. The manager who is not accessible for the telephone call informing him that his biggest customer was seen golfing with his main competitor may read about a dramatic drop in sales in the next quarterly report. But then it’s too late. (HBR, 1975)

Because most of the information is verbal and never written down, it never gets entered into formal company systems in the first place. It is all locked inside managers’ brains. Because it is often scattershot and fragmented, it is also often superficial.

If there is a single theme that runs through this article, it is that the pressures of his job drive the manager to be superficial in his actions – to overload himself with work, encourage interruption, respond quickly to every stimulus, seek the tangible and avoid the abstract, make decisions in small increments, and do everything abruptly.

Note how this matches the “muddling through” style of decision-making I looked at here.  Actual decision-makers are forced by the pressures of the job to be inductive, rather than theoretical and deductive.  But they can be too superficial and wrapped up in the moment.

Of course, things have changed since the 1970s. There is more information than ever before, and more of it is available regardless of internal status and position.  The pace of decision-making has sped up. The ranks of middle management have been drastically culled, largely because of less need for bureaucratic layers to transmit communication from the pinnacle to the proles.

But, as we’ll see next, attempts to reform this messy inductive approach by stressing “rigor” and “planning” and “scientific methods” usually led to business catastrophe.

One thing remains constant: good managers and decision-makers are curious and outward-looking. I’ve found that the best national policymakers I’ve talked to are the actively curious ones, who want to  test their ideas, hear other perspectives, and figure out why people see things differently. It’s a matter of sensitivity to change, rather than meeting the budget plan numbers or formal model.

 

April 18, 2014 | Adaptation, Decisions, Foxes and Hedgehogs

Two kinds of quantification – and mindsets

Paul Krugman was calling for “hard thinking” the other day. Discussion about decisions can sometimes break down into a shouting match between those who want “hard”, quantified, consistent, rigorous approaches, and those who want “soft” attention to context, social influence and limits. (You can also see it reflected in wider debates about big data, or the merits of journalism, or the role of science.)

Modern economics is nothing if not highly mathematical, and there are always broader demands for more “rigor” and “quantification.”

But there is a big difference between “rigorous” and “quantitative.”  I was also just talking about Herbert Simon, the towering intellectual pioneer who won the Nobel Prize for Economics in 1978, and also founded much of modern cognitive psychology and software engineering. Simon was a brilliant mathematician, but he rejected the demands for “rigor.”

For me, mathematics had always been a language for thought. Mathematics – this sort of non-verbal thinking – is my language of discovery. It is the tool I use to arrive at new ideas. This kind of mathematics is relatively unrigorous, loose, heuristic. Solutions reached with its help have to be checked for correctness. It is physicist’s mathematics or engineer’s mathematics, rather than mathematician’s mathematics.

Economics as a discipline has largely adopted mathematicians’ mathematics, with its aesthetic belief in elegance and rigor and consistency. Simon argued the point repeatedly with some of the giants of mainstream economics, like Tjalling Koopmans and Kenneth Arrow.

For Tjalling Koopmans, it appeared, mathematics was a language of proof. It was a safeguard to guarantee that conclusions were correct, that they could be derived rigorously. Rigor was essential. (I heard the same views, in even more extreme form, expressed by Gerard Debreu, and Kenneth Arrow seems mainly to share them.) I could never persuade Tjalling that ideas are to be arrived at before their correctness can be guaranteed – that the logic of discovery is quite different to the logic of verification. It is his view, of course, that prevails in economics today, and to my mind it is a great pity for economics and the world that it does.

This brings up a deeper point about different mindsets. I argued here that “hedgehogs” are deductive by nature, in the same way Euclid was as far back as 290 BC. But modern science did not really begin until a more inductive “fox-like” experimental approach was adopted in the 17th century. Science is inductive. Formal mathematics is deductive.

Quantification straddles the divide between foxes and hedgehogs, on a different axis. You can be mathematical and quantitative – but interested in observation and approximation, rather than proof and consistency.  You can adopt engineer’s math, and make the bridge stand up. Quantification does not necessarily entail “rigor” in the math department sense. Instead, you might want to go measure something in the first place.

When hedgehogs become policymakers, elegance and rigor become serious problems because they inhibit inductive learning.

March 28, 2014 | Expertise, Foxes and Hedgehogs, Quants and Models