
When to dump a leader (Pelosi edition)

Many “leaders” have a tendency to think that they ought to keep doing the same thing, but with more “passion,” or intensity, or resources. As I said in the post below, however, optimizing is not the same as adapting to a changed situation. There are many situations in which more persistence and determination just get you more trapped in doing the wrong thing. It’s essential to recognize them. Most people don’t.

The unfortunate consequence is that change then requires a change in leadership as well. Maybe the Democratic Party is realizing that after multiple defeats: “Pelosi’s Democratic Critics Plot to Replace Her.” If things are persistently not working, try someone with a fresh eye.

That also means that if you’re a leader, it’s better to look for ways to adapt or change your mind before people plot to remove you after a massive setback. The oldest danger of leadership is wooden-headedness. Yet most leaders hire consultants to put a theoretical or quantitative veneer on what they already think.

June 23, 2017 | Adaptation, Decisions, Human Error, Organizational Culture and Learning

How academics and practitioners think differently

Here is an excellent article at The American Interest on the differences between how policymakers and academics think about international relations in the US. Some of these differences carry very important implications for policy. In general, scholars have (not surprisingly) drifted away from practical concerns, which limits their influence, author Hal Brands says.

International relations scholars—particularly political scientists—increasingly emphasize abstruse methodologies and write in impenetrable prose. The professionalization of the disciplines has pushed scholars to focus on filling trivial lacunae in the literature rather than on addressing real-world problems.

But practitioners and scholars also take very different positions on some substantive issues. Practitioners are more concerned with American interests, while academics think more as “global citizens,” or about the stability of the system as a whole. Interestingly, one particular point of difference is attitudes to credibility.

Since the early Cold War, U.S. policymakers have worried that if Washington fails to honor one commitment today, then adversaries and allies will doubt the sanctity of other commitments tomorrow. Such concerns have exerted a profound impact on U.S. policy; America fought major wars in Korea and Vietnam at least in part to avoid undermining the credibility of even more important guarantees in other parts of the globe. Conversely, most scholars argue credibility is a chimera; there is simply no observable connection between a country’s behavior in one crisis and what allies and adversaries expect it will do in the next.

This is clearly extremely important.  I have more sympathy with the scholars on this one: many of the worst policy errors have been caused by “domino theories” of credibility.

It is also interesting that there is a gap at all between practitioners and academics in foreign policy. In economic policy, academics have largely captured policymaking, certainly in the US, over the last two decades. That naturally carries with it a certain style of thinking – and the outcome has been anything but encouraging, with enormous financial crises and volatility.

June 6, 2017 | Decisions, Expertise, Foreign Policy

Let’s ban forecasts in central banks

People should learn from their mistakes, or so we usually all agree. Yet that mostly doesn’t happen. Instead, we get disturbing “serenity” and denial, and we had a prime example of it this week. So it is crucial we develop ways to make learning from mistakes more likely. I’d ban forecasts altogether in central banks if it would make officials pay more attention to what surprises them.

The most powerful institutions in the world economy can’t predict very well. But at least they could learn to adjust to the unexpected.

The Governor of the Bank of England, Mark Carney, testified before Parliament this week to skeptical MPs. The Bank, along with the IMF, Treasury, and other economists, predicted near-disaster if the UK voted for Brexit. So far, however, the UK economy is surprising everyone with its resilience.

So did Carney make a mistake? According to the Telegraph,

If Brexiteers on the Commons Treasury Committee were hoping for some kind of repentance, or at least a show of humility, they were to be sorely disappointed. Mr Carney was having none of it. At no stage had the Bank overstepped the mark or issued unduly alarmist warnings about the consequences of leaving, he insisted. He was “absolutely serene” about it all.

This is manifestly false and it did not go down well, at least with that particular opinion writer.

Arrogant denial is, I suppose, part of the central banker’s stock in trade. If a central bank admits to mistakes, then its authority and mystique is diminished accordingly.

I usually have a lot of regard for Carney, and I worked at the Bank of England in the 1990s. But this response makes no sense. Central banking likes to think of itself as a technical trade, with dynamic stochastic general equilibrium models and optimal control theory. Yet the core of it has increasingly come down to judging subjective qualities like credibility, confidence, and expectations.

Economic techniques are really no use at all for this. Credibility is not a technical matter of commitment, time consistency, and determination, as economists have often thought since Kydland & Prescott. It is much more a matter of whether people believe you are aware of the situation and can balance things appropriately, rather than binding yourself irrevocably to a preexisting strategy or denying mistakes. It is as much a matter of character and honesty as persistence.

The most frequent question hedge funds used to ask me about the Fed or other central banks was “do they see x?”  What happens if you are surprised? Will you ignore or deny it and make a huge mistake?  Markets want to know that central banks are alert, not stuck in a rut.  They want to know if officials are actively testing their views, not pretending to be omniscient. People want to know that officials aren’t too wrapped up in a model or theory or hiding under their desks instead of engaging with the real world.

It might seem as if denial is a good idea, at least in the short term. But it is the single most durable and deadly mistake in policymaking over the centuries. The great historian Barbara Tuchman called it “wooden-headedness,” or persistence in error.

The Bank of England, like other monetary authorities, issues copious Inflation Reports and projections and assessments. But it’s what they don’t know, or where they are most likely to miss something, which is most important. Perhaps the British press is being too harsh on Carney. Yet central banks across the world have hardly distinguished themselves in the last decade.

We need far fewer predictions in public policy, and far more examination of existing policy and how to adjust it in response to feedback. Forget about intentions and forecasts. Tell us what you didn’t expect and didn’t see, and what you’re going to do about it as a result. Instead of feedforward, we need feedback policy, as Herbert Simon suggested about decision-making.  We need to adapt, not predict. That means admitting when things don’t turn out the way you expected.
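To make the feedforward/feedback contrast concrete, here is a deliberately toy sketch in Python. It is my own illustration with made-up rule names and numbers, not anything a central bank actually runs: the feedforward rule fixes its stance from a forecast made up front, while the feedback rule keeps adjusting its stance in proportion to the surprises it actually observes.

```python
# Toy illustration only: hypothetical rules and numbers, not any central bank's model.
# "Feedforward" commits to a stance based on a forecast made in advance;
# "feedback" adjusts the stance each period based on the surprise just observed.

def feedforward_stance(forecast_inflation, target=2.0, weight=1.5):
    """Set the policy stance once, from the forecast alone."""
    return weight * (forecast_inflation - target)

def feedback_stance(previous_stance, observed_inflation, target=2.0, gain=0.5):
    """Nudge the previous stance in proportion to the observed miss."""
    surprise = observed_inflation - target
    return previous_stance + gain * surprise

# The forecast said 2.5% inflation; the outturns keep surprising on the downside.
observed_path = [1.2, 0.9, 1.1, 1.4]      # observed inflation, percent per year
ff = feedforward_stance(2.5)               # never revisited, whatever happens
fb = 0.0
for observed in observed_path:
    fb = feedback_stance(fb, observed)
    print(f"observed={observed:.1f}%  feedforward={ff:+.2f}  feedback={fb:+.2f}")
# The feedforward stance stays tight because the forecast said it should;
# the feedback stance eases, period by period, as the surprises accumulate.
```

The point is not the particular rule, which is invented for the sketch; it is that the second approach has to notice when it is wrong, and the first never does.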

September 10, 2016 | Adaptation, Central Banks, Communication, Decisions, Economics, Forecasting, Time inconsistency

The Toxic Impact of News

One advantage of summer travel is it gives you extra perspective on the media frenzy back home. I was in Kyoto, Japan during the party conventions, and from thousands of miles away the political reporting seemed even more overdramatized and pointless than usual. How much do we know now about the US Presidential race that we didn’t three months ago? How much does today’s political news cycle affect an election still more than two months away?

Very little.

Rolf Dobelli wrote a book, The Art of Thinking Clearly, that examined 99 biases. He saved what he thought was perhaps the most important for last.

We are incredibly well informed, yet we know incredibly little. Why? Because two centuries ago, we invented a toxic form of knowledge called “news.” News is to the mind what sugar is to the body: appetizing, easy to digest— and highly destructive in the long run.

News is irrelevant and a waste of time, he argues.

In the past twelve months, you have probably consumed about ten thousand news snippets— perhaps as many as thirty per day. Be very honest: Name one of them, just one that helped you make a better decision— for your life, your career, or your business— compared with not having this piece of news. No one I have asked has been able to name more than two useful news stories— out of ten thousand.

Of course, in financial markets there are plenty of people who obsessively track every small piece of information, although algorithms react to snippets of news far faster than any trader these days.

So what kind of information is useful? It is information that lets you solve problems, and that usually means information that helps you test your assumptions and approach. But testing assumptions is usually the last thing that people do. All that political reporting usually just confirms what people think they already know.

August 30, 2016 | Decisions

The flaw in international law, and The Chilcot Report

People pay too much attention to their forecasts (which are unreliable) and too little to their assumptions, and that often gets them into serious trouble. I argued in the last post that the assumption driving much EU integration – that international law and international organizations are the foundation of the last seventy years of peace in Europe – is not always true.

So what else may have kept the peace in Europe for the last seventy years? What worked, if international law sometimes doesn't work? Think for a moment.

It isn't the same as the question of whether you think international law is an ideal, a moral aspiration, or a nice idea; the question, again, is what actually works. We all know people who are wonderfully nice, but who should perhaps not be entrusted with arranging your summer trip, or running a company, or handling air traffic control for inbound flights at LaGuardia. You may think it is ideal and moral that everyone should be honest as well. But you probably locked your front door when you left home this morning too. So what actually kept the peace, if not the EU?

Might it have something to do with the US deploying hundreds of thousands of troops in Europe, a chain of air bases from Keflavik in Iceland to Incirlik in Turkey, and the Sixth Fleet in the Mediterranean? Not to mention the threat of thermonuclear escalation if anyone started a war. The US assumed much of the burden of European security, and strongly supported European rebuilding from the Marshall Plan onwards, as well as the EU itself as a bulwark against communism. The Red Army might have been entirely unthreatening and peaceful and full of admiration for European law, but the citizens of Budapest and Prague who saw Soviet tanks on their streets in 1956 and 1968 might disagree. Yet western European countries could afford to reduce defense spending and focus on welfare and economics. In other words, the EU itself is more a symptom of the US stabilizing the security situation than the cause of security.

Let's say you splutter with outrage at the idea. There are definitely some people in Europe and elsewhere who are very uncomfortable with any positive consequence of American foreign policy, ever. Fine. How would you test that? What kind of implications would you expect to see? The explanations lead to very different places and feed different narratives. Seeing the question from different angles and questioning assumptions is usually essential to figuring out the right policy. And the things you feel uncomfortable about are the most likely place for blind spots, because you never look there.

In the same way, consider the reaction to the Chilcot report on British participation in the Iraq war, which was published yesterday. Most of the attention, like this Guardian editorial, is focused on poor prediction of consequences.

Let's agree the war was bad in retrospect. It is also clear that there was not enough effort to question the assumptions underlying intelligence assessments that Saddam Hussein still had weapons of mass destruction, or to prepare for the aftermath.

But the press reaction doesn't really come to grips with a recurrent theme in the executive summary of the report. Why did Blair, a European multilateral liberal, stick so close to Bush, a Texan Republican? Was it to preserve the special relationship? Get invited to delightful Crawford, TX? Be a poodle and get dog biscuits?

Most media reactions lean towards thinking it was because Blair was a pathological liar, a vain foolish potential war criminal who ignored advice. They personalize the issue. But Blair was a highly skilled, highly popular leader before the war, not a cartoon villain, and he clearly had doubts about direct UK interests in Iraq. So what was he thinking?

In fact, Chilcot documents how Blair kept trying to push the US to go the multilateral route, to get UN resolutions, to persuade a coalition of allies rather than take unilateral action.

The report references a 2003 speech by Blair several times.

370. In Mr Blair’s view, the decision to stand alongside the US was in the UK’s long‑term national interests. In his speech of 18 March 2003, he argued that the handling of Iraq would:

“… determine the way in which Britain and the world confront the central security threat of the 21st century, the development of the United Nations, the relationship between Europe and the United States, the relations within the European Union and the way in which the United States engages with the rest of the world. So it could hardly be more important. It will determine the pattern of international politics for the next generation.”

In other words, for Blair it wasn't really about imminent threats from Iraq, or whether it had WMD or supported terrorism. At best, those were fig leaves or PR concerns. It wasn't even primarily about the effect of disagreement on US-UK relations. It was to get the Americans to follow the norms of international law. It was to stop them acting outside the multilateral framework.

So consider this: international law didn't stop the Iraq war, because the Americans felt they couldn't rely on the UN framework. And Blair, as an internationalist progressive, went along to try to make sure the “pattern of international politics for the next generation” was based on international law and multilateral organizations. He tried to rein back American unilateral use of force by participating as a junior partner, to preserve international norms, albeit not enough for domestic opponents or some other EU governments.

So international law did not lead to peace; instead, it was the cause of at least UK participation in the Iraq war. Uncomfortable? Fine. But Blair might have stumbled into huge mistakes because of his assumptions. Forecasts and data and judgements got altered to fit them.

And that happens all the time.

 

July 7, 2016 | Assumptions, Decisions, Europe, Security

How side-effects drive history (and Brexit)

It’s often the side-effects of decisions, mostly overlooked at the time, which turn out to be most significant. Polls are showing a significant lead for Brexit this morning, which would be one of the biggest geopolitical shocks of the decade. Of course, trusting polls has been a bad idea in recent times, and there are many who think the UK will draw back at the last moment. But let’s say there is at least some chance Britain may exit. How did this happen? It’s a chain of side-effects.

In October 1973, Syria and Egypt launched a surprise attack against Israel. The US supplied arms to Israel to defend itself.

One side-effect was an oil embargo by Arab oil producers against Western countries, which led to the first oil shock and a quadrupling in the price of oil. That naturally led to a huge transfer of wealth to the oil producers, including Saudi Arabia.

One side-effect was a huge increase in the influence and power of the Saudis, one of the most backward and retrogressive parts of the Islamic world, with “kings” allied to perhaps the most puritanical, backward religious sect in all of Islam. It is as if Bo and Luke Duke in the United States suddenly became multibillionaire monarchs, kept in power by paying billions to the Ku Klux Klan every year.

One side-effect was that many billions of dollars were paid by the Saudis to promote the least tolerant, most aggressive forms of Islam all around the world.

One side-effect was the rise of Islamic terrorists like the Saudi-born Osama Bin Laden, who attacked American targets, culminating in 9/11.

A side-effect of 9/11 was that the US attacked Iraq, a secular dictatorship which had not been directly involved in the strike on the US. The US won a decisive victory and overthrew Saddam Hussein much faster and with fewer losses than detractors forecast. Overconfident US officials removed Ba’ath party officials from the Iraqi government, and Sunnis feared they would lose their traditional dominance of the country.

One side-effect was that Iraq was destabilized and slid into a civil war that trapped the US for a decade, costing thousands of US military casualties and several trillion dollars, none of which had been anticipated.

One side-effect was that the US public became wholly averse to more boots on the ground in the Middle East. Another side-effect was that the turmoil in Iraq eventually spread to Syria. But public resistance meant the US refused to commit military forces, as did the UK and other EU countries.

One side-effect was a breakdown in Syria, and a huge wave of refugees that headed towards Europe. Angela Merkel believed setting no limits on refugee numbers was a moral choice, and over a million refugees flooded into Germany.

One side-effect was that European public opinion became agitated and alarmed about the migrant influx, which many saw as accelerated by the EU’s open borders under the Schengen agreement. Populists who had already made advances over the previous decade suddenly benefitted from a new resurgence of public support. Meanwhile, Merkel dealt with concerns about Syrian migration by promising Turks visa-free entry to Europe. It appeared to many that she and the EU had lost control of the borders.

One side-effect was that immigration began to dominate the UK Brexit debate, overshadowing economic consequences, a focus that boosted the “Leave” side in the final weeks before the vote. The British, already dealing with heavy migration from within the EU, feared they could not control their borders.

So a “leave” vote in Britain is now possible, partly caused by a civil war thousands of miles away in Damascus and Egyptian attacks on the Sinai forty years ago.

And one side-effect may be similar referendums in other countries and a partial break-up of the EU itself. In retrospect, it is possible that Merkel’s decision to admit refugees without limit may, as an unintended side-effect, have wrecked seven decades of German promotion of EU integration.

Of course, you can dispute the exact causation, and many other factors were involved too. You could easily construct other chains of unintended side-effects and argue it in different ways, and it’s partially a game. The point, however, is that direct choices and intentions and calculations are only a small part of what happens in the international economy and international affairs. It’s often the side-effects that matter most. Chains of cause and consequence quickly get too involved and intricate for anyone to figure out, often even in retrospect, let alone in advance.

I’ve often argued that overconfident prediction is usually a sign of self-delusion. In fact, it’s often the things that don’t even occur to us to predict that matter most, not just the things we recognize we got wrong.

So it’s not your models or forecasts or ideologies that matter. What matters is being on the lookout for side-effects and unintended consequences, especially those you’d prefer not to see at all. If you see them, you can at least try to do something about them. Yet most elites blunder forward blindly, clinging to their preferred models and plans.

June 11, 2016 | Assumptions, Current Events, Decisions, Systems and Complexity

Believing your own propaganda

I argued in the last post that people with strong principles, like much of the GOP establishment, often end up with strong blind spots too. More educated groups frequently like to think in terms of abstract principles. But it often takes a bit of a leap to see how other people think about their ideals. If you’re a liberal, for example, you might find it distasteful to imagine that the GOP establishment has any principles at all, rather than simply manipulating their followers to make themselves rich.

You could grammatically decline the thought pattern like this:

  • I selflessly act according to enlightened, timeless principles which are proven to enhance the common good.
  • You are a rigid ideologue who has got stuck in an interlocking spider web of false (if sincere) beliefs and can’t see the truth.
  • He or she is a hypocrite and cynical manipulator who talks about principles and ideals only to conceal or distract people from their vicious plans and nasty self-interest.

Of course, depending who you ask, these can all refer to the same person. Your own lofty principles may look like the most base self-interest to others. That’s normal politics.

People’s beliefs are mostly sincere.  After all, people can enact beliefs and appeal to moral principles to justify their self-interest without even being aware they are doing so.

This, however, is the problem. Your own principles may bring you comfort and reassurance, and serve as a motivating force for your own side, convincing your allies of their righteousness. They may fit together in a coherent, elegant way, and reinforce each other.

But they are usually not much good for persuading anyone else, because other people appeal to their own, different principles. Nor, partly because of that fact, is intense belief in your own principles much good for seeing potential problems, things that can go wrong, or things that don’t fit into your assumptions.

But much of politics, and sometimes business decision-making, is based on exactly such strong principles and ideologies. Consider “maximize shareholder value” as a business principle, for example.

This makes such belief frameworks fragile and more likely to produce huge mistakes, because believers don’t see things coming that don’t easily fit their neat framework. It also makes it more difficult to find compromises.

To make things work or be successful, you have to be able to see gaps and flaws and potential problems, and do something about them.  It doesn’t mean the other side is right, or your own principles aren’t valuable.  But most of the pressure – to seem confident, in control, to have the right answers and the right moral stance – tends to blind people to the things which sneak up unseen and can wreck their plans and doom their hopes.

You just have to find a way to be alert.

 

March 14, 2016 | Assumptions, Decisions

Woodenheaded Disasters

So we have a nervous start to the New Year, with a plunge in the Chinese stock market and tensions in the Gulf. There is a widespread sense that the establishment in many countries is “out of touch” and that leadership is faltering. I was arguing the other day that there is often a deeper pattern to these problems. The Republican establishment in the US ignored all evidence that didn’t match its preconceptions until an anointed prince like Jeb Bush was running at only 4% in primary polling.

This is in fact one of the deepest patterns in history. The great historian Barbara Tuchman pondered in The March of Folly: From Troy to Vietnam why policymakers and leaders so often do things which seem self-defeating and stupid.

A phenomenon notable throughout history regardless of place or period is the pursuit by governments of policies contrary to their own interests… Why does intelligent mental process seem so often not to function?

She called it Wooden-headedness.

Wooden-headedness, the source of self-deception, is a factor that plays a remarkably large role in government. It consists in assessing a situation in terms of preconceived fixed notions while ignoring or rejecting any contrary signs. It is acting according to wish while not allowing oneself to be deflected by the facts.

It is all too easy to decide a policy was wrong in retrospect, of course. But wooden-headedness means policies or decisions that are self-defeating and ruinous based on things that were clearly apparent at the time. And it is remarkably common. She traces it from ancient Greece through the policies of the Renaissance Popes and Philip II of Spain to the decision of the Japanese government to go to war with the United States in 1941. US intervention in Vietnam, in turn, led by the “best and the brightest” of the Kennedy administration, was beset by folly.

Folly’s appearance is independent of era or locality; it is timeless and universal, although the habits and beliefs of a particular time determine the form it takes.

One of the most interesting examples is the folly of the British government in its policy on its American colonies. The governing elite believed that trade with and possession of the thirteen colonies was utterly essential to Britain’s wealth and future, but insisted on the right to tax without the colonists’ consent. It was, she says, the unworkable pursued at the expense of the possible.

Instead of confronting trade-offs or looking for alternatives, politicians in London were largely diverted by the game of faction: who’s in, who’s out. And here is the most remarkable fact she notes: no British minister visited America between 1763 and 1775, despite the belief that the fate of the empire depended on possession of America.

There is often remarkable reluctance to go and look at the facts on the ground with a fresh eye. And it is not easy to do, whether it is sailing the Atlantic in the 1760s or working out why a business product is withering in the marketplace.  But it is also essential. Instead of prediction, it is a matter of taking a fresh look at what is already there. It is about discovering what you’re not seeing. It is about blind spots.

Most predictions about what will happen in 2016 will turn out to be wrong, of course. But at least we can try to look for contrary evidence and test our assumptions, so we are not wooden-headed like most establishments and bureaucracies.

 

January 4, 2016 | Assumptions, Books, Decisions

There is a deeper pattern in why the GOP Establishment is in trouble

“What went wrong?” is the underlying question in David Frum’s much-talked-about piece in the Atlantic on the “Republican Revolt”. How could an establishment candidate like Jeb Bush, who was expected to be almost irresistible and who has raised more than $100 million, now be running at 3-4% in the polls? How can the GOP primary race have been hijacked by a reality TV star at the expense of experienced Governors and Senators?

Let’s leave aside the betting on who will finally get the nomination, or how good or bad Trump is, as  most of the media focuses on little else and most journalistic speculation is essentially useless. To be sure, despite his consistent lead, the Donald may not be inevitable as the field thins out and the ‘ground game’ of turnout becomes important.

It’s just that, as of now, some of the most powerful, elite, and supposedly expert people in US politics look like losers. This is really not where they want to be.

So let’s coolly step back and look at the pattern. How could the establishment miscalculate so badly? What does this tell us about why decisions go wrong? Could other elites have the same problem?

In a nutshell, people refused to see contrary evidence.

Many establishment policies were not popular with the GOP base, Frum says. Less than 17% favored cuts in Social Security, for example. Most wanted more deportations of illegal immigrants, the exact opposite of a pathway to citizenship.

As a class, big Republican donors could not see any of this, or would not. So neither did the politicians who depend upon them. Against all evidence, both groups interpreted the Tea Party as a mass movement in favor of the agenda of the Wall Street Journal editorial page. One of the more dangerous pleasures of great wealth is that you never have to hear anyone tell you that you are completely wrong.

They could not see things that did not fit in their frame. They could not learn from errors or defeats. The establishment had been shocked at Romney’s loss in 2012, for example.

And yet, within hours of Romney’s defeat, Republican donors, talkers, and officials converged on the maximally self-exculpating explanation.

That meant Republican leaders decided the problem was Romney’s talk of more immigration enforcement alienating Latinos, the very issue where the establishment differed most from its base and where hard evidence of votes to be gained in the center was (Frum says) mostly lacking.

Otherwise, the party yielded on nothing and doubled down on everything. No U-turns. No compromises.

Instead of adjusting to minimize or forestall the chance of a revolt, or finding a smart alternative way forward, the leadership interpreted things in self-serving terms and escalated.

This, of course, is a problem that is extremely widespread and not confined to the GOP. We saw exactly the same thing on all sides in the last midterm elections.

Perhaps the establishment will be able to adapt now that their problem is (you would think) undeniable and it is darkest before dawn for them. Or they can double down again. But serious damage has been done, and some ground rules of US politics – like the importance of raising money – have been rewritten.

Here’s the takeaway. Once again we see in this example that the fundamental problem with decisions is not really bias, or lack of formal rigor, or failure to gather data. It’s that people most often don’t change their minds in response to evidence. Or they fail to adapt until so late in the game that all the choices are bad. That’s what we need to fix, and fixing it would save countless billions of dollars and tens of thousands of companies and careers.

The most brilliant investors intuitively realize this. But as this incident demonstrates, most leaders and managers and policymakers do not. They are surrounded by yes-men. They stick with the familiar. They are clever enough to explain away facts which do not fit their narrative.

People get stuck, and persist too long in self-delusion. They fail to adapt and move when they still have the chance. If you can mitigate that, you can do more than most crystal balls could ever do. After all, if the only thing you see in a crystal ball is your own wishful thinking, what good is it?

 

December 27, 2015 | Confirmation bias, Current Events, Decisions, Politics

Rogues or Blind Spots?

I looked at how Volkswagen could go so wrong the other day. There is almost always a rush to blame human error or subordinates, I said. Some of those subordinates may be genuinely criminal and deserve jail time. But the problem is usually also systemic: management doesn't see, or doesn't want to see, problems coming.

Now here's a piece in Harvard Business Review on the issue. Of course, it was rogue employees, says Volkswagen management.

Testifying unhappily before America’s Congress, Volkswagen of America CEO Michael Horn adamantly and defiantly identified the true authors of his company’s disastrous “defeat device” deception: “This was not a corporate decision. No board meeting or supervisory meeting has authorized this,” Horn declared. “This was a couple of rogue software engineers who put this in for whatever reason.”

Ach, du lieber! Put aside for the moment what this testimony implies about the auto giant’s purported culture of engineering excellence. Look instead at what’s revealed about Wolfsburg’s managerial oversight: utter and abysmal failure. No wonder Chairman and CEO Martin Winterkorn had to resign. His “tone at the top” let roguery take root.

The author is an MIT expert on the software processes at issue.

Always look to the leadership. Where were Volkswagen’s code reviews? Who took pride and ownership in the code that makes Volkswagen and Audi cars run? For digitally-driven innovators, code reviews are integral to healthy software cultures and quality software development.

Good code is integral to how cars work now, he says. And to write good code, the Googles and Facebooks of the world have code review systems with some form of openness, even external advice or review, so that murky code is found out.

As we learned from financial fiascoes and what will be affirmed as Volkswagen’s software saga unwinds, rogues don’t exist in spite of top management oversight, they succeed because of top management oversight.

It can be comforting, in a way, to think that problems or bad decisions occur only because of individual stupidity or bias or error or ignorance. If people, and organizations as a whole, don't even consciously see many problems coming, or ignore trade-offs, it's more disturbing and harder to solve. Most information and analysis will tend to reinforce their point of view. Single-minded mania also often produces short-run financial success.

Until the darkness comes. Leaders have to be held accountable for finding their blind spots. They can't claim ignorance after the fact.

 

October 16, 2015 | Current Events, Decisions, Human Error, Organizational Culture and Learning