
Systems and Complexity

The right kind of excitement about AI

We are currently at peak excitement about Artificial Intelligence (AI), machine learning and data science. Can the new techniques fix the rotten state of Economics, with its baroque models that failed during the financial crisis? Some people suggest that the economics profession needs to step aside in favor of those who actually have real math skills. Maybe Artificial Intelligence can finally fix economic policy, and we can replace the Federal Reserve (and most of financial markets) with a black box or three?

 

It’s not that easy, unfortunately. There is a hype cycle to these things. Hopes soar (as expectations for AI did as far back as the late 1960s). Inflated early expectations get dashed. There is a trough of disillusionment, and people dismiss the whole thing as a fad that entranced the naive and foolish. But eventually the long-term results are substantive and real – and not always what you expected.

 

The developments in data science are very important, but it’s just as important to recognize the limits and the lags involved. The field will be damaged if people overhype it, or expect too much too soon. That has often happened with artificial intelligence (see Nils Nilsson’s history of the field, The Quest for Artificial Intelligence).

People usually get overconfident about new techniques. As one Stanford expert put it in the MIT Technology Review a few weeks ago,

For the most part, the AI achievements touted in the media aren’t evidence of great improvements in the field. The AI program from Google that won a Go contest last year was not a refined version of the one from IBM that beat the world’s chess champion in 1997; the car feature that beeps when you stray out of your lane works quite differently than the one that plans your route. Instead, the accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that’s not happening.

 

I think if AI improves economic models, it won’t be because of machine learning as such – like pulling a kernel trick on a support vector machine – so much as a side-effect of new kinds of data becoming cheap and easy to collect. The gain will come from more granular, near-instantaneous data availability, not from applying hidden Markov models or directed acyclic graphs. Nonlinear dynamics is still too hard, such models are inherently unpredictable, and the economy has a tendency to change when observed (Goodhart’s law with a miaow from Heisenberg’s cat).

 

None of that new data will help, either, if people don’t notice it because they are overcommitted to their existing views. Data notoriously does not change people’s minds as much as statisticians think.

 

AI has made huge strides on regular, repetitive tasks with millions or billions of repeatable events. I think it’s fascinating stuff. But general AI is as far away as ever. The field has a cycle of getting damaged when hopes run ahead of reality in the near term, which then produces an AI winter. Understanding a multi-trillion-dollar continental economy is much, much harder than machine translation or driving.

 

There’s also a much deeper reason why machine learning isn’t a panacea for understanding the economy. A modern economy is a complex dynamic system, and such systems are not predictable, even in principle. Used correctly, machine learning can help you adapt, or change your model. But it is perhaps much more likely to be misused because of an overenthusiastic belief that it is a failsafe answer. Silicon Valley may be spending billions on machine learning research, with great success in some fields like machine translation. But there’s far less effort to spot gaps, missing assumptions and the lack of error-control loops – and that’s what interests me.
May 31, 2017 | AI and Computing, Forecasting, Quants and Models, Systems and Complexity

How side-effects drive history (and Brexit)

It’s often the side-effects of decisions, mostly overlooked at the time, which turn out to be most significant. Polls are showing a significant lead for Brexit this morning, which would be one of the biggest geopolitical shocks of the decade. Of course, trusting polls has been a bad idea in recent times, and there are many who think the UK will draw back at the last moment. But let’s say there is at least some chance Britain may exit. How did this happen? It’s a chain of side-effects.

In October 1973, Syria and Egypt launched a surprise attack against Israel. The US supplied arms to Israel to defend itself.

One side-effect was an oil embargo by Arab oil producers against Western countries, which led to the first oil shock and a quadrupling in the price of oil. That naturally led to a huge transfer of wealth to the oil producers, including Saudi Arabia.

One side-effect was a huge increase in the influence and power of the Saudis, one of the most backward and retrogressive parts of the Islamic world, with “kings” allied to perhaps the most puritanical, backward religious sect in all of Islam. It is as if Bo and Luke Duke had suddenly become multibillionaire monarchs in the United States, kept in power by paying billions to the Ku Klux Klan every year.

One side-effect was many billions of dollars were paid by the Saudis to promote the least tolerant, most aggressive forms of Islam all around the world.

 

One side-effect was the rise of Islamic terrorists like the (Saudi) Osama Bin Laden, who attacked American targets, culminating in 9/11.

A side-effect of 9/11 was that the US attacked Iraq, a secular dictatorship which had not been directly involved in the strike on the US. The US won a decisive victory and overthrew Saddam Hussein much faster and with fewer losses than detractors forecast. Overconfident US officials removed Ba’ath party officials from the Iraqi government, and Sunnis feared they would lose their traditional dominance of the country.

One side-effect was that Iraq was destabilized and slid into a civil war that trapped the US for a decade, costing thousands of US military casualties and several trillion dollars that had not been anticipated.

One side-effect was that the US public became wholly averse to more boots on the ground in the Middle East. Another side-effect was that the turmoil in Iraq eventually spread to Syria. But public resistance meant the US refused to commit military forces, as did the UK and other EU countries.

One side-effect was a breakdown of order in Syria, and a huge wave of refugees heading towards Europe. Angela Merkel believed setting no limits on refugee numbers was a moral choice, and over a million refugees flooded into Germany.

One side-effect was that European public opinion became agitated and alarmed about the migrant influx, which appeared to many to be accelerated by the EU’s open borders under the Schengen agreement. Populists who had already made advances over the previous decade suddenly benefitted from a resurgence of public support. Meanwhile, Merkel dealt with concerns about Syrian migration by promising visa-free entry to Europe for Turks. It appeared to many that she and the EU had lost control of the borders.

One side-effect was that immigration began to dominate over economic consequences in the UK Brexit debate, a focus that boosted the “Leave” side in the final weeks before the vote. The British, already dealing with heavy migration from within the EU,  feared they could not control their borders.

So a “Leave” vote in Britain is now possible, partly caused by a civil war thousands of miles away in Damascus and an Egyptian attack across the Suez Canal into the Sinai forty years ago.

And one side-effect may be similar referendums in other countries and a partial break-up of the EU itself. In retrospect, Merkel’s decision to admit refugees without limit may, as a side-effect, have unintentionally wrecked seven decades of German promotion of EU integration.

Of course, you can dispute the exact causation, and many other factors were involved too. You could easily construct other chains of unintended side-effects and argue it different ways; it’s partly a game. The point, however, is that direct choices and intentions and calculations are only a small part of what happens in the international economy and international affairs. It’s often the side-effects that matter most. Chains of cause and consequence quickly get too involved and intricate for anyone to figure out, often even in retrospect, let alone when predicting the future.

I’ve often argued that overconfident prediction is usually a sign of self-delusion. In fact, it’s often the things that don’t even occur to us to predict that matter most, not just the things we recognize we get wrong.

So it’s not your models or forecasts or ideologies that matter. What matters is being on the lookout for side-effects and unintended consequences, especially those you’d prefer not to see at all. If you see them, you can at least try to do something about them. Most elites instead blunder forward blindly, clinging to their preferred models and plans.

 

 

 

June 11, 2016 | Assumptions, Current Events, Decisions, Systems and Complexity

Volkswagen: What were they thinking?

It may turn into one of the most spectacular corporate disasters in history. What were Volkswagen thinking? Even after it became apparent that outsiders had noticed a discrepancy in emissions performance in on-the-road tests, the company still kept stonewalling and continued to sell cars with the shady software routines.

We won't know the murky, pathological details for a while. But understanding how this happens is urgent. If you ignore this kind of insidious problem, billion-dollar losses and criminal prosecutions can occur.

In fact, it's usually not just one or two “bad apples,” unethical criminals who actively choose stupid courses of action, although it often suits politicians and media to believe so. It's a system phenomenon, according to some of the classic studies (often Scandinavian) like Rasmussen and Svedung.

… court reports from several accidents such as Bhopal, Flixborough, Zeebrügge, and Chernobyl demonstrate that they have not been caused by a coincidence of independent failures and human errors. They were the effects of a systematic migration of organizational behavior toward accident under the influence of pressure toward cost-effectiveness in an aggressive, competitive environment.

It's not likely anyone formally sat down and did an expected utility calculation, weighing the financial and other benefits of installing cheat software against the chance of being found out times the consequent losses. So the usual way of thinking formally about decisions doesn't easily apply.
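To make the point concrete, here is a minimal sketch, in Python, of what such an expected-utility comparison would have looked like. All the figures are purely illustrative assumptions, not estimates of Volkswagen's actual costs, fines, or detection risk.

```python
# A sketch of the calculation nobody appears to have made. All figures are
# illustrative assumptions, not estimates of Volkswagen's actual numbers.

def expected_value_of_cheating(benefit, loss_if_caught, p_caught):
    """Benefit weighted by the chance of getting away with it, minus the
    loss weighted by the chance of being found out."""
    return (1 - p_caught) * benefit - p_caught * loss_if_caught

benefit = 1e9          # hypothetical: $1bn saved by skipping real compliance
loss_if_caught = 20e9  # hypothetical: $20bn in fines, recalls, lost reputation

for p in (0.01, 0.05, 0.20, 0.50):
    ev = expected_value_of_cheating(benefit, loss_if_caught, p)
    print(f"P(caught) = {p:4.0%}   expected value = ${ev / 1e9:+6.2f}bn")

# Even at a 5% chance of detection the expected value is already negative,
# which is exactly the kind of framing that never happened.
```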

It's much more likely that it didn't occur to anyone in the company to step back and think it through. They didn't see the full dimensions of the problem. They denied there was a problem. They had blind spots.

It can often be hard to even find any point at which decisions were formally made. They just … happen. Rasmussen & co again:

In traditional decision research ‘decisions’ have been perceived as discrete processes that can be separated from the context and studied as an isolated phenomenon. However, in a familiar work environment actors are immersed in the work context for extended periods; they know by heart the normal flow of activities and the action alternatives available. During familiar situations, therefore, knowledge-based, analytical reasoning and planning are replaced by a simple skill- and rule-based choice among familiar action alternatives, that is, on practice and know-how.

Instead, the problem is likely to be a combination of the following:

  • Ignoring trade-offs at the top. Major accidents happen all the time in corporations because the benefits of cutting corners are tangible, quantifiable and immediate, while the costs are longer-term, diffuse and less directly accountable. They will be someone else's problem. The result is that longer-term, more important goals get ignored in practice. Indeed, defining something as a technical problem, or setting strict metrics, often builds in a decision to ignore a set of trade-offs. So people never think about it and don't see problems coming.
  • Trade-offs can also happen because general orders come from the top – make it better, faster, cheaper and also cut costs – and reality has to be confronted lower down the line, without formally acknowledging choices have to be made. Subordinates have to break the formal rules to make it work. Violating policies in some way is a de facto requirement to keep your job, and then it is deemed “human error” when something goes wrong. The top decision-maker perhaps didn't formally order a deviation: but he made it inevitable. The system migrates to the boundaries of acceptable performance as lots of local, contextual decisions and non-decisions accumulate.
  • People make faulty assumptions, usually without realizing it. For example, did anyone think through how easy it is to conduct independent on-the-road tests? That was a critical assumption bearing on whether they would be found out.
  • If problems occur, it can become taboo to even mention them, particularly when bosses are implicated. Organizations are extremely good at not discussing things and avoiding clearly obvious contrary information. People lack the courage to speak up. There is no feedback loop.
  • Finally, if things do go wrong, leaders have a tendency to escalate, to go for double-or-quits. And lose.

There scarcely seems to be a profession or industry or country without problems like this. The Pope was just in New York apologizing for years of Church neglect of the child abuse problem, for example.

But that does not mean that people are not culpable and accountable and liable for things they should have seen and dealt with. Nor is it confined to ethics or regulation. It is also a matter of seeing opportunity. You should see things. But how? That's what I'm interested in.

It's essential for organizational survival to confront these problems of misperceptions and myopia. They're system problems. And they are everywhere. Who blows up next?

Don’t use a tame approach to “wicked” problems

People have a pervasive tendency to substitute simple questions for difficult ones, as Kahneman points out in Thinking, Fast and Slow. Even worse, they most often don’t realize they are doing it. Our fast, instinctive thinking “finds a related question that is easier and will answer it,” he says, without our even being aware of it.

This is one reason policy outcomes and market returns and business decisions often turn out very badly. It is very difficult to get the right answer if you’re solving the wrong problem.

So part of any sensible approach to a decision is to stop, think, and recognize what kind of problem you are dealing with. Here’s one good way to look at it, introduced in a famous 1973 paper by Horst Rittel and Melvin Webber. Up until the 1960s, they say, people had a great deal of confidence that experts and scientists could solve almost all the problems of society. But once the easy problems had been solved, the more difficult and stubborn ones remained. They argue:

The problems that scientists and engineers have usually focused upon are mostly “tame” or “benign” ones. As an example, consider a problem of mathematics, such as solving an equation; or the task of an organic chemist in analyzing the structure of some unknown compound; or that of the chessplayer attempting to accomplish checkmate in five moves. For each the mission is clear. It is clear, in turn, whether or not the problems have been solved.

Wicked problems, in contrast, have neither of these clarifying traits; and they include nearly all public policy issues–whether the question concerns the location of a freeway, the adjustment of a tax rate, the modification of school curricula, or the confrontation of crime.

They identify ten characteristics of a wicked problem, including:

  1. There is no definitive formulation of a wicked problem.
  2. Wicked problems have no “stopping rule”, i.e. you can’t be sure when you have actually found a perfect solution. You stop when you have run out of time or money, or when you decide an approach is “good enough.” In other words, you can’t “solve” the bond market once and for all.
  3. Solutions to wicked problems are not true-or-false, but good-or-bad.
  4. There is no immediate and no ultimate test of a solution to a wicked problem, i.e. there are so many possible repercussions and connections that “you can never trace all the waves through all the affected lives ahead of time or within a limited time span.”
  5. Every solution to a wicked problem is a “one-shot operation”; there is no opportunity to learn by trial-and-error, so every attempt counts significantly.

Skipping over a few,

8. Every wicked problem can be considered to be a symptom of another problem.

10. “The planner has no right to be wrong.” By this they mean the decision-maker is going to be held liable for the consequences, and pay a price if it turns out badly.

Are you starting to see the similarities here?  Financial markets have reached the same point.  Public policy reached that point decades ago, which helps to explain why monetary policy has so often been a tale of alternating complacency and disaster.

The “tame” problems have largely been solved, the ones that are easily quantified and reducible to tractable models or algorithms. Algorithms have automated some of the learning process and squeezed out whatever value there is in the big data sets. The low-hanging fruit has been picked.

Result: there isn’t much profit or alpha left to exploit that way. That leaves the “wicked” problems. Public policy issues, like monetary policy or Greek defaults or Chinese politics, are “wicked” issues.  And central banks stumble when they try to apply tame approaches to their wicked problems as well.

Compare the similar distinction between puzzles and mysteries, or linear and nonlinear system problems.

The first step in dealing with wicked problems is to realize that a tame approach won’t work.

 

February 27, 2015 | Assumptions, Decisions, Systems and Complexity

How Politics can go Loopy

The midterm elections today will likely just produce the usual cyclical swing against the party in power.  The national debate has been particularly arid this year, largely focused on targeted messages to mobilize the base instead of changing people’s minds.

But much of the difference between people, and points of view, is not about the direct or immediate effects of particular policies, anyway. It’s not about immediate facts, or even always about immediate interests. According to Robert Jervis, in System Effects: Complexity in Political and Social Life,

At the heart of many arguments lie differing beliefs – or intuitions- about the feedbacks that are operating. (my bold)

It’s because, as we saw before, most people find it very hard to think in systems terms. Politicians are aware of indirect effects, to be sure, and often present that awareness as subtlety or nuance. But they usually seize on one particular story about one particular indirect feedback loop, instead of recognizing that in any complex system there are multiple positive and negative loops. Some of those loops improve a situation. Some make it worse, or offset other effects. Feedback effects operate on different timescales and through different channels. Any particular decision is likely to have both positive and negative effects.

The question is not whether one particular story is plausible, but how you net it all together.

Take the example of Ebola again. The core of the administration case was that instituting stricter visa controls or quarantine in the US might have the indirect effect of making it harder to send personnel and supplies to Africa, and containing the disease in Africa was essential.

That is likely true. It is a story which seems coherent and plausible. But there is generally no attempt to identify, let alone quantify or measure, other loops which might operate as well, including ones with a longer lag time. Airlines may stop flying to West Africa in any case if their crews fear infection, for example. Reducing the chance of transmission outside West Africa might enable a greater focus of resources or experienced personnel on the region. More mistakes in handling US cases (as apparently happened in Dallas) might significantly undermine public trust in medical and political authorities. You can imagine many other potential indirect effects.

The underlying point is this: simply identifying one narrative, one loop is usually incomplete.

Here’s another example, at the expense of conservatives this time. Much US foreign policy debate effectively revolves around “domino theory”, and infamously so in the case of the Vietnam war.  The argument from hawks in the 1960s was that if South Vietnam fell, other developing countries would also fall like dominoes. So even though Vietnam was in itself of little or no strategic interest or value to the United States, it was nonetheless essential to fight communism in jungles near the Laos border –  or before long one would be fighting communism in San Francisco. Jervis again:

More central to international politics is the dispute between balance of power and the domino theory: whether (or under what conditions) states balance against a threat rather than climbing on the bandwagon of the stronger and growing side.

You can tell a story either way: a narrative about positive feedback (one victory propels your enemy to even more aggression) or balancing feedback (enemies become overconfident and overstretch, provoke coalitions against them, alienate allies and supporters, or if we act forcefully it will produce rage and desperation and become a “recruiting agent for terrorism”.)

The same applies to the current state of the Middle East, where I have a lively debate going with some conservative friends who believe that the US should commit massive ground forces to contain ISIS in the Middle East, or “small wars will turn into  big wars.”  It’s in essence a belief that positive feedback will dominate negative/balancing feedback, domino-style.

But you can’t just assume such a narrative will play out in reality. South Vietnam did fall, after all. But what happened was that the Soviet Union ended up overreaching in adventures like the invasion of Afghanistan. The other side collapsed.

The lure of a particular narrative, of focusing on one loop in a system, is almost overwhelming for many people, however. It’s related to the tendency to seize on one obvious alternative in decisions, with limited or no search for better or more complete or relevant alternatives.

The answer is not to just cherry-pick particular narratives about feedback loops and indirect effects which happen to correspond with your prior preferences. That usually turns into wishful thinking and confirmation bias. Instead, you need to get a feel for the system as a whole, and have a way to observe and measure and test all (or most of) the loops in operation.

“People court failure in predictable ways.”

The basic driving necessity for me is to ask why so many decisions go wrong – in government, in business, in markets – and what can be done about it.

One answer is definitely wrong. Very few senior decision-makers need more information. They are drowning in it already. They don’t have the time to absorb it. They have to fight multiple urgent fires. They seldom have the chance to step back and look at the bigger picture, and theories about the economy and markets have proved highly unreliable in any case.

Another answer is not to think about how decisions go wrong at all.  People can often take a fatalistic attitude to problems, especially when they look at current events filled with economic policy failure, market misapprehension and instability, and geopolitical tension and war.  But that guarantees failure. And there are positive ways to lift the game.

So what is going to move the needle on performance? Recognizing patterns that lead to flawed decisions and mistakes. Putting markers down which alert you to turning points and thresholds. We need to look at process and how people react to evidence and arguments.

And there’s fifty years of research that has been done on that, very little of which has become more widely known.  One example is Dietrich Dörner’s classic The Logic of Failure.

Failure does not strike like a bolt from the blue; it develops gradually according to its own logic. As we watch individuals attempt to solve problems, we will see that complicated situations seem to elicit habits of thought that set failure in motion from the beginning.  From that point, the complexity of the task and the growing apprehension of failure encourage methods of decision-making that make failure even more likely, and then inevitable. We can learn, however. People court failure in predictable ways.

The starting point for Dörner is not bias. It is complexity, and the predictable ways in which people fail to deal adequately with it. People think in straight lines, but the world works in terms of loops. It is those same dynamic loops that cause problems for many linear algorithms and models, and leave an essential opening for humans to make a difference.

 

September 1, 2014 | Decisions, Human Error, Systems and Complexity

System Blindness

Good news: GDP grew at 4% and the winter surprise has faded. As usual, there is endless analysis available for free. These days we swim in a bottomless ocean of economic commentary.

Let’s turn to something that might give people an edge in making decisions instead.  One of the main reasons people and companies get into trouble is that they don’t think in terms of systems. I noted one major source of this approach was Jay Forrester’s work at MIT beginning in the 1960s. His successor at MIT is John Sterman, who calls it system blindness.

Sterman documents the multiple problems that decision-makers have in dealing with dynamic complexity in his best-known shorter paper. We haven’t evolved to deal with complex systems, he says. Instead, we are much quicker to deal with things that have obvious, direct, immediate, local causes. (See bear. Run.)

So people have inherent deep difficulties with feedback loops, for example.

Like organisms, social systems contain intricate networks of feedback processes, both self-reinforcing (positive) and self-correcting (negative) loops. However, studies show that people recognize few feedbacks; rather, people usually think in short, causal chains, tend to assume each effect has a single cause, and often cease their search for explanations when the first sufficient cause is found. Failure to focus on feedback in policy design has critical consequences.

As a result, policies and decisions often become actively counterproductive, producing unexpected side-effects and counter-reactions. Such ‘policy resistance’ means major decisions frequently have the opposite effect to that intended (such as building major roads but producing even more congestion, or suppressing forest fires and producing much bigger blazes.)

People also have serious problems understanding time and delays, which often leads to oversteer at the wrong times and wild oscillation and swings.  They have difficulty with short-term actions that produce long-term effects. They assume causes must be proportionate to effects. (Think of the long and variable lags in monetary policy, and the tendency to oversteer.)
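A toy simulation makes the delay problem vivid. This is a minimal sketch with arbitrary parameters, not a model of monetary policy or any real system: a controller chases a target, but its actions only take effect after a lag, so correcting against the stale signal produces overshoot and oscillation.

```python
# Toy illustration of oversteer with delays: the controller reacts to the
# current gap, but its actions only land DELAY periods later, so it keeps
# piling on corrections that arrive too late. Parameters are arbitrary.

TARGET = 100.0
DELAY = 3      # periods before an action shows up in the level
GAIN = 0.5     # how aggressively the controller corrects the observed gap

level = 60.0
pipeline = [0.0] * DELAY   # actions taken but not yet felt

for t in range(20):
    gap = TARGET - level             # controller only sees today's level...
    pipeline.append(GAIN * gap)      # ...and ignores corrections already in transit
    level += pipeline.pop(0)         # the action from DELAY periods ago lands now
    print(f"t={t:2d}  level={level:6.1f}")

# The level overshoots past 100, swings back well below it, and keeps
# oscillating, even though each individual correction looked sensible.
```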

Decision-makers have problems with  stocks and flows. In essence, a stock is the water in the bath. A flow is the water running from the tap.

People have poor intuitive understanding of the process of accumulation. Most people assume that system inputs and outputs are correlated (e.g., the higher the federal budget deficit, the greater the national debt will be). However, stocks integrate (accumulate) their net inflows. A stock rises even as its net inflow falls, as long as the net inflow is positive: the national debt rises even as the deficit falls—debt falls only when the government runs a surplus; the number of people living with HIV continues to rise even as incidence falls—prevalence falls only when infection falls below mortality. Poor understanding of accumulation has significant consequences for public health and economic welfare.
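The debt-and-deficit example in that quote can be checked with a few lines of arithmetic; the numbers below are invented purely to show the mechanism.

```python
# Stock-and-flow arithmetic: the stock (debt) keeps rising as long as the
# flow (deficit) is positive, even while the flow itself is shrinking.
# Figures are invented for illustration only.

debt = 1000.0     # the stock: water already in the bath
deficit = 100.0   # the flow: water still running from the tap

for year in range(1, 9):
    deficit *= 0.7          # the deficit falls by 30% a year...
    debt += deficit         # ...yet the debt still grows every year
    print(f"year {year}: deficit = {deficit:6.1f}   debt = {debt:7.1f}")

# Debt only starts to fall once the flow turns negative (a surplus),
# which is the piece of accumulation most intuitions get wrong.
```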

People also fail to learn from experience, especially in groups. They don’t test beliefs. Instead, they see what they believe, and believe what they see. They use defensive routines to save face. They avoid testing their beliefs, especially in public.

Note that these are not the problems that are getting prime attention in behavioral economics, let alone mainstream economics. Why don’t system ideas get more attention? Sterman notes that, more generally, people often fail to learn from hard evidence.

More than 2 and one-half centuries passed from the first demonstration that citrus fruits prevent scurvy until citrus use was mandated in the British merchant marine, despite the importance of the problem and unambiguous evidence supplied by controlled experiments.

For me, one additional major reason might be that we are generally so used to the analytic approach: i.e. break things down into their component parts and examine each separately. This has worked extremely well for decades in science and business, when applied to things which don’t change and adapt all the time. Systems thinking, instead, is about looking at the interaction between elements. It is synthesis, “joining the dots”, putting the pieces together and seeing how they work and interrelate in practice.

And that might be an additional explanation for the hedgehog versus fox distinction. You recall the fundamentally important research that finds that foxes, “who know many things”, outperform hedgehogs “who know one big thing”  at prediction and decision. Hedgehogs are drawn more to analysis and universal explanation; foxes are drawn more to synthesis and observation.

As a result, hedgehogs have much greater difficulty with system thinking. Foxes are more likely to recognize and deal with system effects. If you confront a complex adaptive system (like the economy or financial markets,) that gives foxes an edge.

 

 

Blaming the System: Jay Forrester and System Dynamics

People usually resist changing their view in response to evidence for a very long time, even when they can suffer catastrophic damage as a result. It can sometimes take decades for essential ideas to filter through to the point where people actually use them. It’s taken almost forty years for Kahneman and Tversky’s initial papers in the early 1970s to filter through to wider public acceptance and bestselling books, for example.

No wonder it often takes months for Fed communications to sink in with the market. It’s essential to focus on how people frame issues and how long it takes them to change their mind.

I’ve looked at some other perspectives recently which have still not reached even the degree of awareness that Kahneman currently enjoys. Charles Lindblom argued that in practice most significant policy decisions are made by incremental “muddling through,” not the rational choice approach taught in economics and business courses and mostly believed by markets. Herbert Simon examined the boundary conditions of rational action, and though a Nobel Prize-winner in Economics, disputed much of the way the profession saw the world. Henry Mintzberg (re)discovered that successful managers rarely rely on formalized decision or planning systems, and indeed attempts at rigorous planning have most often led to disaster in corporations and government. It adds up to a whole zoo of blind spots waiting to entrap decision-makers.

And I haven’t even touched yet on perhaps the single most important reason why policymakers and markets frequently make major errors: a lack of systems thinking. Instead of reductively breaking things down into component parts, systems thinking focuses on the connections and relationships and feedback loops of a live system as a whole. It looks at stocks and flows, lags and delays, and adaptation and complexity.

The original impetus came from MIT mathematician Norbert Wiener and Austrian biologist Ludwig von Bertalanffy in the 1930s and 1940s. But one of the most important contributions came from an MIT engineer and management expert, Jay Forrester. In a 1971 paper, The Counterintuitive Behavior of Social Systems, he argued:

The human mind is not adapted to interpreting how social systems behave. Social systems belong to the class called multi-loop nonlinear feedback systems. In the long history of evolution it has not been necessary until very recent historical times for people to understand complex feedback systems. Evolutionary processes have not given us the mental ability to interpret properly the dynamic behavior of those complex systems in which we are now imbedded.

Indeed, current economics almost completely ignores nonlinear dynamic behavior in favor of linear comparative statics. One reason is the math is too hard, and does not produce neat solutions.

But almost all political, market and business systems are complex, and treating them as if they were simple linear systems can produce painful error and blowback. Says Forrester:

…. social systems are inherently insensitive to most policy changes that people choose in an effort to alter the behavior of systems. In fact, social systems draw attention to the very points at which an attempt to intervene will fail. Human intuition develops from exposure to simple systems. In simple systems, the cause of a trouble is close in both time and space to symptoms of the trouble. If one touches a hot stove, the burn occurs here and now; the cause is obvious. However, in complex dynamic systems, causes are often far removed in both time and space from the symptoms.

One result is policy resistance; systems react in unanticipated and often opposite ways to simplistic intervention or expectations.
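As a rough sketch of what policy resistance looks like in miniature, here is the road-building example from the System Blindness post above, with invented parameters: extra capacity relieves congestion at first, but lighter traffic induces more driving, and the balancing loop pulls the system back toward where it started.

```python
# Toy sketch of policy resistance (invented parameters, not a traffic model):
# a capacity boost cuts congestion briefly, but demand grows faster while the
# roads feel empty, and the gain is gradually competed away.

capacity = 100.0
demand = 95.0

def congestion(demand, capacity):
    return demand / capacity      # 1.0 means the roads are full

for year in range(10):
    if year == 3:
        capacity *= 1.3           # the intervention: a 30% road-building program
    # balancing loop: driving grows faster when congestion is low,
    # and growth stalls as congestion creeps back up
    demand *= 1 + 0.3 * (1 - congestion(demand, capacity))
    print(f"year {year}: congestion index = {congestion(demand, capacity):.2f}")

# Within a few years the congestion index is back close to its old level:
# the system has pushed back against the policy.
```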

 

July 22, 2014 | Decisions, Systems and Complexity