People pay too much attention to their forecasts (which are unreliable) and too little to their assumptions, and that often gets them into serious trouble. I argued in the last post that the assumption driving much EU integration – that international law and international organization is the foundation of the last seventy years of peace in Europe – is not always true.
So what else may have kept the peace in Europe for the last seventy years? What worked, if international law sometimes doesn't work? Think for a moment.
It isn't the same as the question of whether you think international law is an ideal, a moral aspiration, or a nice idea. The question, again, is what actually works. We all know people who are wonderfully nice, but maybe should not be entrusted with arranging your summer trip, or running a company, or handling air traffic control for inbound flights at LaGuardia. You may think it ideal and moral that everyone should be honest as well. But you probably locked your front door when you left home this morning too. So what actually kept the peace, if not the EU?
Might it have something to do with the US deploying hundreds of thousands of troops in Europe, a chain of air bases from Keflavik in Iceland to Incirlik in Turkey, and the Sixth Fleet in the Mediterranean? Not to mention the threat of thermonuclear escalation if anyone started a war. The US assumed much of the burden of European security, and strongly supported European rebuilding from the Marshall Plan onwards, as well as the EU itself as a bulwark against communism. The Red Army might have been entirely unthreatening and peaceful and might have admired European law, but the citizens of Budapest and Prague who saw Soviet tanks on their streets in 1956 and 1968 might disagree. Yet western European countries could afford to reduce defense spending and focus on welfare and economics. In other words, the EU itself is more a symptom of the US stabilizing the security situation than the cause of security.
Let's say you splutter with outrage at the idea. There are definitely some people in Europe and elsewhere who are very uncomfortable with any positive consequence of American foreign policy, ever. Fine. How would you test that? What kind of implications would you expect to see? The explanations lead to very different places and feed different narratives. Seeing the question from different angles and questioning assumptions is usually essential to figuring out the right policy. And the things you feel uncomfortable about are the most likely places for blind spots, because you never look there.
In the same way, consider the reaction to the Chilcot report on British participation in the Iraq war, published yesterday. Most of the attention, like this Guardian editorial, is focused on poor prediction of consequences.
Let's agree the war was bad in retrospect. It is also clear that there was not enough effort to question the assumptions underlying intelligence assessments that Saddam Hussein still had weapons of mass destruction, or prepare for the aftermath.
But the press reaction doesn't really come to grips with a recurrent theme in the executive summary of the report. Why did Blair, a European multilateral liberal, stick so close to Bush, a Texan Republican? Was it to preserve the special relationship? Get invited to delightful Crawford, TX? Be a poodle and get dog biscuits?
Most media reactions lean towards thinking it was because Blair was a pathological liar, a vain foolish potential war criminal who ignored advice. They personalize the issue. But Blair was a highly skilled, highly popular leader before the war, not a cartoon villain, and he clearly had doubts about direct UK interests in Iraq. So what was he thinking?
In fact, Chilcot documents how Blair kept trying to push the US to go the multilateral route, to get UN resolutions, to persuade a coalition of allies rather than take unilateral action.
The report references a 2003 speech by Blair several times.
370. In Mr Blair’s view, the decision to stand alongside the US was in the UK’s long‑term national interests. In his speech of 18 March 2003, he argued that the handling of Iraq would:
“… determine the way in which Britain and the world confront the central security threat of the 21st century, the development of the United Nations, the relationship between Europe and the United States, the relations within the European Union and the way in which the United States engages with the rest of the world. So it could hardly be more important. It will determine the pattern of international politics for the next generation.”
In other words, it wasn't really about imminent threats from Iraq or whether it had WMD or supported terrorism for Blair. At best, those were fig leaves or PR concerns. It wasn't even primarily about the effect of disagreement on US-UK relations. It was to get the Americans to follow the norms of international law. It was to stop them acting outside the multilateral framework.
So consider this: international law didn't stop the Iraq war, because the Americans felt they couldn't rely on the UN framework. And Blair, as an internationalist progressive, went along to try to make sure the “pattern of international politics for the next generation” was based on international law and multilateral organizations. He tried to rein back American unilateral use of force by participating as a junior partner, to preserve international norms, albeit not enough for domestic opponents or some other EU governments.
So international law did not lead to peace; it was instead a cause of, at the least, UK participation in the Iraq war. Uncomfortable? Fine. But Blair may have stumbled into huge mistakes because of his assumptions. Forecasts and data and judgements got altered to fit them.
And that happens all the time.
There’s another problem. Major conflicts are often not about particular immediate interests or ideology, but about legitimacy, the acceptability of the ground rules of the game. Europe tore itself apart repeatedly for centuries over things like the divine right of kings or national “aspirations”. Indeed, it’s possible to tell the story of “epochal wars” over the last six centuries as fundamentally about issues of what the nature of the state and the grounds of its legitimacy ought to be, and what state and people owe each other. It also turns out the resolution of one problem or challenge tends to lay the seeds of the next major conflict.
Evidence should be a fundamental part of any discussion of what to do in the wake of the Paris bombings, I said yesterday. Do you agree with that? Instead we most often make assumptions about “what the terrorists want” or discuss things on such an abstract level (“they hate freedom”) that there’s little link to reality at all.
The trouble goes much deeper, though, because even when people use evidence (which is something to be grateful for), they cherry-pick it. It’s riddled with confirmation bias, and it mostly doesn’t prove anything at all.
Remember this in reading all the op-eds from experts on terrorism and the Middle East you’ll see in the next few weeks: the success of expert predictions in this area is about as good as dart-throwing chimps. Predictions from the most learned Syria and ISIS and intelligence experts are likely to be useless, just like most economic and political predictions. People can know almost everything about the issue – and still get things completely wrong.
If gathering information and evidence alone clearly isn’t enough, what do we do?
Here’s the further essential thing to grapple with: the most likely explanation or hypothesis is not the one with the most information lined up for it. It’s the one with the least information against it.
That rule is taken from Richards Heuer’s Psychology of Intelligence Analysis, and lies at the root of his method of Analysis of Competing Hypotheses.
The root of the problem is that most information is consistent with a whole variety of explanations. You can integrate it into a number of completely different, satisfying, and incompatible stories. That means the most genuinely useful information is diagnostic; that is, it is consistent with only one or a few explanations. It helps you choose between different explanations.
Think of it this way. There was plenty of seemingly obvious evidence for thousands of years that the sun went round the earth. The fact that the sun rises and sets can be read as consistent with either the sun or the earth at the center of the solar system. So that evidence doesn’t help very much. You need to find evidence that can’t be read in favor of both. (That’s another story.)
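Heuer’s rule is simple enough to sketch in a few lines of code. The following is a toy illustration of the ACH scoring step – the hypothesis labels and evidence items are invented placeholders, not drawn from any real analysis: each piece of evidence is marked consistent (“C”) or inconsistent (“I”) with each hypothesis, and hypotheses are ranked by the evidence against them.

```python
# Toy sketch of the scoring step in Heuer's Analysis of Competing
# Hypotheses. Hypotheses H1-H3 and the evidence items are invented
# placeholders for illustration.

# For each piece of evidence, mark each hypothesis "C" (consistent)
# or "I" (inconsistent).
matrix = {
    "evidence A": {"H1": "C", "H2": "C", "H3": "I"},
    "evidence B": {"H1": "C", "H2": "I", "H3": "I"},
    "evidence C": {"H1": "C", "H2": "C", "H3": "C"},  # non-diagnostic
}

def inconsistency_scores(matrix):
    """Count the evidence AGAINST each hypothesis; consistent marks
    are ignored, because they usually fit several stories at once."""
    scores = {}
    for judgments in matrix.values():
        for hypothesis, mark in judgments.items():
            scores.setdefault(hypothesis, 0)
            if mark == "I":
                scores[hypothesis] += 1
    return scores

scores = inconsistency_scores(matrix)
best = min(scores, key=scores.get)  # fewest strikes against it survives
print(scores)  # {'H1': 0, 'H2': 1, 'H3': 2}
print(best)    # H1
```

Note that “evidence C” is consistent with every hypothesis and so contributes nothing to the ranking: it is non-diagnostic in exactly the sense above, however much of it you pile up.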
But that essential diagnostic information can be surprisingly difficult to find, especially because people rush to find facts that fit with their existing views.
What happens instead is that the more information people gather, the more (over)confident they become in their point of view, regardless of the validity or reliability of the information. They don’t think about the information. They just more or less weigh the total amount of it.
So what is needed instead is a kind of disconfirmation strategy.
Hold on, you might say. Doesn’t this mean we have to stop and think for a moment before jumping to our favorite recommendation? And isn’t that a pain which we’d rather avoid? Isn’t that uncomfortable? Isn’t this a little austere and unglamorous compared with colorful and vivid stories and breathless reporting?
Yes. Repeat: yes. All the information and opinion and sourcing and satellite photography in the world doesn’t help you if you ignore disconfirmation. And it’s a lot less painful than wasting billions of dollars and potentially thousands of lives, and failing. There are some very practical ways to do it, too.
The midterm elections today will likely just produce the usual cyclical swing against the party in power. The national debate has been particularly arid this year, largely focused on targeted messages to mobilize the base instead of changing people’s minds.
But much of the difference between people, and points of view, is not about the direct or immediate effects of particular policies, anyway. It’s not about immediate facts, or even always about immediate interests. According to Robert Jervis, in System Effects: Complexity in Political and Social Life,
At the heart of many arguments lie differing beliefs – or intuitions – about the feedbacks that are operating. (my bold)
It’s because, as we saw before, most people find it very hard to think in systems terms. Politicians are aware of indirect effects, to be sure, and often present that awareness as subtlety or nuance. But they usually seize on one particular story about one particular indirect feedback loop, instead of recognizing that in any complex system there are multiple positive and negative loops. Some of those loops improve a situation. Some make it worse, or offset other effects. Feedback effects operate on different timescales and through different channels. Any particular decision is likely to have both positive and negative effects.
The question is not whether one particular story is plausible, but how you net it all together.
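To make the netting problem concrete, here is a toy simulation of my own devising (not from Jervis): a single quantity driven by a fast reinforcing loop and a slower, lagged balancing loop. The parameters are arbitrary; the point is only that the early trajectory, where just the positive loop is visible, tells you little about where the system ends up.

```python
# Toy model: one state variable pushed up by a fast positive loop and
# pulled down by a slower, lagged negative loop. All numbers invented.

def simulate(steps, gain=0.10, damping=0.30, lag=5):
    x = [1.0]  # e.g. an adversary's "aggression level"
    for t in range(1, steps):
        reinforcing = gain * x[-1]  # fast: success breeds success
        # the balancing loop only "sees" the state `lag` steps back
        balancing = -damping * x[t - lag] if t > lag else 0.0
        x.append(x[-1] + reinforcing + balancing)
    return x

path = simulate(12)
print(path[5] > path[0])  # True: early growth from the positive loop alone
print(path[6] < path[5])  # True: the lagged balancing loop reverses it
```

Judged on the first five steps, the trend looks unstoppable; one lag later, the balancing loop starts to dominate and the trajectory turns. A single plausible narrative about either loop alone would get the net result wrong.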
Take the example of Ebola again. The core of the administration case was that instituting stricter visa controls or quarantine in the US might have the indirect effect of making it harder to send personnel and supplies to Africa, and containing the disease in Africa was essential.
That is likely true. It is a story which seems coherent and plausible. But there is generally no attempt to identify, let alone quantify or measure, other loops which might operate as well, including ones with a longer lag time. Airlines may stop flying to West Africa in any case if their crews fear infection, for example. Reducing the chance of transmission outside West Africa might enable a greater focus of resources or experienced personnel on the region. More mistakes in handling US cases (as apparently happened in Dallas) might significantly undermine public trust in medical and political authorities. You can imagine many other potential indirect effects.
The underlying point is this: simply identifying one narrative, one loop is usually incomplete.
Here’s another example, at the expense of conservatives this time. Much US foreign policy debate effectively revolves around “domino theory”, and infamously so in the case of the Vietnam war. The argument from hawks in the 1960s was that if South Vietnam fell, other developing countries would also fall like dominoes. So even though Vietnam was in itself of little or no strategic interest or value to the United States, it was nonetheless essential to fight communism in jungles near the Laos border – or before long one would be fighting communism in San Francisco. Jervis again:
More central to international politics is the dispute between balance of power and the domino theory: whether (or under what conditions) states balance against a threat rather than climbing on the bandwagon of the stronger and growing side.
You can tell a story either way: a narrative about positive feedback (one victory propels your enemy to even more aggression) or balancing feedback (enemies become overconfident and overstretch, provoke coalitions against them, alienate allies and supporters, or if we act forcefully it will produce rage and desperation and become a “recruiting agent for terrorism”.)
The same applies to the current state of the Middle East, where I have a lively debate going with some conservative friends who believe that the US should commit massive ground forces to contain ISIS in the Middle East, or “small wars will turn into big wars.” It’s in essence a belief that positive feedback will dominate negative/balancing feedback, domino-style.
But you can’t just assume such a narrative will play out in reality. South Vietnam did fall, after all. But what happened next was that the Soviet Union ended up overreaching in adventures like the invasion of Afghanistan. The other side collapsed.
The lure of a particular narrative, of focusing on one loop in a system, is almost overwhelming for many people, however. It’s related to the tendency to seize on one obvious alternative in decisions, with limited or no search for better or more complete or relevant alternatives.
The answer is not to cherry-pick particular narratives about feedback loops and indirect effects which happen to correspond with your prior preferences. That usually turns into wishful thinking and confirmation bias. Instead, you need to get a feel for the system as a whole, and have a way to observe and measure and test all (or most) of the loops in operation.
Isn’t it strange how emotive and ethically high-strung the debate about Ebola has become? Much of the press is flinging accusations of “hysteria”, and quarantine rules have led to vicious partisan exchanges. I think it’s better to step back here and ask why epidemiology should have become such a moralized partisan issue. There are some obvious blind spots here.
Liberals are enraged at the thought of quarantine and travel restrictions, while conservatives have been much quicker to embrace them. Why? I think it is because of the central importance of the notion of “fairness” in politics. According to Jonathan Haidt’s fascinating research, people are sensitive to different moral considerations in much the way they have different taste buds on the tongue, like sweet or salty. Haidt identifies five (later six) moral taste buds. Liberals perceive issues almost entirely in terms of just two: care-harm and fairness-equality. Conservatives are receptive to those moral “tastes” but also pick up other values – authority, loyalty, and sanctity – which are more adapted to group cohesion. In fact, most people in most global cultures perceive the wider spectrum of moral considerations, perhaps because these have had adaptive value in traditional societies over long spans of time.
This is from an NYT review of Haidt’s research, but you should read his whole book, The Righteous Mind: Why Good People Are Divided by Politics and Religion.
To the question many people ask about politics — Why doesn’t the other side listen to reason? — Haidt replies: We were never designed to listen to reason. When you ask people moral questions, time their responses and scan their brains, their answers and brain activation patterns indicate that they reach conclusions quickly and produce reasons later only to justify what they’ve decided.
Think about what this means for how people make and anticipate policy decisions. Both sides of the partisan divide tend to talk past each other.
Haidt started out as very liberal, but experiences such as living in India persuaded him that different cultures and people saw things in different ways.
The hardest part, Haidt finds, is getting liberals to open their minds. Anecdotally, he reports that when he talks about authority, loyalty and sanctity, many people in the audience spurn these ideas as the seeds of racism, sexism and homophobia. And in a survey of 2,000 Americans, Haidt found that self-described liberals, especially those who called themselves “very liberal,” were worse at predicting the moral judgments of moderates and conservatives than moderates and conservatives were at predicting the moral judgments of liberals. Liberals don’t understand conservative values. And they can’t recognize this failing, because they’re so convinced of their rationality, open-mindedness and enlightenment.
Haidt isn’t just scolding liberals, however. He sees the left and right as yin and yang, each contributing insights to which the other should listen.
So what has this to do with Ebola? The issue could almost be designed to cleave along this moral perception fracture. Liberals perceive quarantine or restrictions on returning medical personnel or West African visa applicants as highly unfair to the individuals concerned. They are not as receptive to considerations of protecting a particular country or territory from the virus, which is the main focal point for conservatives. Furthermore, people of all persuasions have a hard time perceiving or acknowledging trade-offs between different values and objectives. In practice, liberals are unwilling to trade even a small amount of fairness for other values, because they believe they don’t have to make a choice. Hence loosening quarantine restrictions on returning healthcare workers is assumed not to make a disease outbreak in the US more likely, because what is fair must also be effective. That is a big assumption.
There are other problems here I’ll come to, including the nature of expertise and the problem of low-probability, high-impact risks. Conservatives have other problems I’ll return to as well.
But suppose you’re a liberal reading this. Do you have to change your view or concede the other side is right? No. Believe what you want, as ardently as you want, and you can think the other side is dumb. But here’s the real point. If you have to make actual decisions, instead of just taking rhetorical positions, then whatever your partisan convictions, you can’t expect your particular viewpoint to be right every single time. No one is born with an automatic hotline to god-like omniscient truth. So set up a few markers for yourself that help tell you when you should reexamine the evidence or change your mind. Just for yourself, have a few guardrails that help you recognize contrary evidence when it doesn’t fit with your natural instincts or assumptions.
It’s because people tend to instinctively perceive ethical choices in certain ways and then invent reasons to justify their choice that this kind of quasi ethical fight about public policy can get so hard to solve – and hugely dangerous assumptions can get overlooked. So long as you have something to lose if you’re wrong, it helps to understand where the other side is coming from.
Companies and investment firms go extinct when they fail to understand the key problems they face. And right now, the fundamental nature of the problem many corporations and investors and policymakers face has changed. But mindsets have not caught up.
Ironically, current enthusiasms like big data can compound the problem. Big data, as I’ve argued before, is highly valuable for tackling some kinds of problems, when you have very large amounts of data on essentially similar, replicable events. Simple algorithms and linear models also beat almost every expert in many situations, largely because they are more consistent.
The trouble is many of the most advanced problems are qualitatively different. Here’s an argument by Gregory Treverton, who argues there is a fundamental difference between ‘puzzles’ and ‘mysteries.’
There’s a reason millions of people try to solve crossword puzzles each day. Amid the well-ordered combat between a puzzler’s mind and the blank boxes waiting to be filled, there is satisfaction along with frustration. Even when you can’t find the right answer, you know it exists. Puzzles can be solved; they have answers.
But a mystery offers no such comfort. It poses a question that has no definitive answer because the answer is contingent; it depends on a future interaction of many factors, known and unknown. A mystery cannot be answered; it can only be framed, by identifying the critical factors and applying some sense of how they have interacted in the past and might interact in the future. A mystery is an attempt to define ambiguities.
Puzzles may be more satisfying, but the world increasingly offers us mysteries. Treating them as puzzles is like trying to solve the unsolvable — an impossible challenge. But approaching them as mysteries may make us more comfortable with the uncertainties of our age.
Here’s the interesting thing: Treverton is former Head of the Intelligence Policy Center at RAND, the fabled national security-oriented think tank based in Santa Monica, CA, and before that Vice Chair of the National Intelligence Council. RAND was also arguably the richly funded original home of the movement to inject mathematical and quantitative rigor into economics and social science, as well as one of the citadels of actual “rocket science” and operations research after WW2. RAND stands for hard-headed rigor, and equally hard-headed national security thinking.
So to find RAND arguing for the limits of “puzzle-solving” is a little like finding the Pope advocating Buddhism.
The intelligence community was focused on puzzles during the Cold War, Treverton says. But current challenges fall into the mystery category.
Puzzle-solving is frustrated by a lack of information. Given Washington’s need to find out how many warheads Moscow’s missiles carried, the United States spent billions of dollars on satellites and other data-collection systems. But puzzles are relatively stable. If a critical piece is missing one day, it usually remains valuable the next.
By contrast, mysteries often grow out of too much information. Until the 9/11 hijackers actually boarded their airplanes, their plan was a mystery, the clues to which were buried in too much “noise” — too many threat scenarios.
The same applies to financial market and business decisions. We have too much information. Attention and sensitivity to evidence are now the prime challenge facing many decision-makers. Indeed, that has always been the source of the biggest failures in national policy and intelligence.
It’s partly a consequence of the success of many analytical techniques and information-gathering exercises. The easy puzzles, the ones susceptible to more information and linear models and algorithms, have been solved, or at least automated. That means it’s the mysteries, and how you approach them, that move the needle on performance.
Russia is tightening its grip on Crimea. So Obama’s credibility and American foreign policy in general are being completely undermined, if you listen to the increasing chorus of criticism (like this).
In this case I don’t agree – or at least, it is not all Obama’s fault.
The first problem is that capability matters as well as intentions. Obama might have made overoptimistic mistakes. But the biggest problem is that American voters have tired of foreign intervention, after thousands of casualties and trillions of dollars spent in Iraq and Afghanistan. That history can’t simply be forgotten or washed away rapidly. To put it in economic terms, there is hysteresis in this foreign policy system. It is bent out of shape, rather than returning to a previous equilibrium. You can’t put the toothpaste back in the tube.
Some Republicans I speak to ferociously condemn the President. But it is very unlikely that Obama could have won a Congressional vote on intervention in Syria. Conservative leader David Cameron suffered a shock defeat in the UK Parliament on the matter, for example.
Any President would face deep skepticism from voters about deploying American forces abroad or similar forceful action at present. You could put Attila the Hun in the White House right now and he would have difficulty taking aggressive, “credible” action. It took Pearl Harbor for FDR himself to persuade America to enter World War 2. And no one remembers FDR as weak or vacillating.
It will take time – maybe decades – or another 9/11 or Pearl Harbor style attack to convince the American public to commit to foreign intervention in large scale again. Obama has to live with that legacy for now.
As I’ve said before, one primary blind spot afflicting decisions is the relative influence of the person and the situation. One of the most common findings in social psychology is that people – and this means everyone: Democrat, Republican, American, Russian – tend to overstress the influence of personal attributes and qualities (“the President is weak”) in other people’s decisions, and pay far too little attention to situational factors (“Congress will not vote for any strong response”). Incidentally, we naturally do the opposite with our own decisions. It’s not our fault, it was the circumstances.
We’re also seeing the tendency of Western media to turn most issues into a horse race – who is up and who is down (Putin up, Obama down) – to the exclusion of most other angles, because it makes a good domestic story.
Another problem is that people tend to use – and think about – “credibility” very loosely. In practice credibility varies with context. Take the doctrine of nuclear deterrence during the Cold War. Some of the chilliest of cold warriors, like Herman Kahn, in thinking through the likelihood of nuclear war, developed the notion of escalation dominance. As a crisis developed, one side or the other could have the advantage on the ladder of escalation. Soviet forces had overwhelming conventional dominance in Central Europe. No-one pretended otherwise. Hence the importance to US credibility of the presence of tactical nuclear weapons at the next stage of escalation.
In a similar way, there is no reason to think that US weakness on the borders of Crimea signals US weakness about the borders of Poland or California. The US may have complete credibility on some levels of escalation or different contexts, and none at all on others. That is normal. The advantage may shift at different levels of seriousness in a crisis.
If Obama can be faulted, it is for blurring the kind of US national interests which would justify a strong response. This is where universalistic notions of human rights or international legal norms cause problems. It may often be in the US interest to pay lip service to international norms, but much less so to expend blood and treasure on someone else’s behalf to oppose particular violations. (Bill Clinton did not intervene in Rwanda. Lyndon Johnson did not intervene in Biafra.)
It simply invites situations where words and actions diverge radically, causing people to doubt your words.
Liberals in particular want to defend international legal norms verbally, because that is what makes them norms. But there can be a temptation to overreach. The American public is more likely to want to focus on tangible immediate interests and the proportionate cost and benefit to the United States itself, rather than universal legal principle. Presidents have a little room to use the “bully pulpit” to try to persuade voters that important interests are at stake. But there are limits to what speeches alone can do.
That points to a need to contain the shrillness of American rhetoric when we are not actually going to do anything much. It may not feel good. But it may preserve the credibility of words for those times when a shooting war is a genuine risk. One of the prime contributions to credibility is to pick your battles, rather than let the battles pick you.
I’ve been writing about Ukraine in reports. The most frequent and damaging mistake people make in crises is leaping to conclusions based on basic facts, without making any attempt to define or frame the underlying problem. Political leaders want three options on their desk in two hours, without figuring out what they want to or can achieve first.
Not surprisingly, if you’re barely aware of the problem or take it for granted, you’re going to get nasty surprises. This is the most prevalent blind spot: people see what they want to see. It produces confirmation bias, selective interpretation of evidence, and damaging surprises.
The Western political response to the Ukraine so far is largely to define it almost entirely as a problem of abstract principle: territorial integrity and inviolability of borders. The key issue here is whether the Western “territorial integrity” and international law frame gains traction and support from other parties (cf Iraq invasion of Kuwait, 1990).
I doubt it. I think that international law approach will falter, as there is little or no appetite to make this a test case.
In the end Russian actions in the Ukraine will be (unwillingly) accepted internationally, subject to signs or undertakings that no precedent is being set. Defending international norms is too expensive in this case. The West will do just enough to suggest international legal norms remain symbolically important in theory, if not actually in practice.
Ways will eventually be found to call it a one-off, or anomaly, or exception, with limited implications which do not require a full Western response. This time.
One of the most important traps that afflicts decision makers is a failure to generate enough alternatives. People often see things in purely binary terms – do X or don’t do X – and ignore other options which may solve the problem much better. They fail to look for alternative perspectives.
This is one kind of knock-on effect from the tendency of policymakers to ignore trade-offs that I mentioned in this post on intelligence failure last week. To continue the point, one consequence of ignoring trade-offs is that leaders frequently fail to develop any fallback options as well. And that can lead to trillion-dollar catastrophes.
The same factors that lead decision makers to underestimate trade-offs make them reluctant to develop fallback plans and to resist information that their policy is failing. The latter more than the former causes conflicts with intelligence, although the two are closely linked. There are several reasons why leaders are reluctant to develop fallback plans. It is hard enough to develop one policy, and the burden of thinking through a second is often simply too great. Probably more important, if others learn of the existence of Plan B, they may give less support to Plan A. … The most obvious and consequential recent case of a lack of Plan B is Iraq. (from Why Intelligence Fails)
The need to sell a chosen option often blinds people to alternatives, and develops a life of its own. Policy choices pick up their own inertia and get steadily harder to change.
Leaders tend to stay with their first choice for as long as possible. Lord Salisbury, the famous British statesman of the end of the nineteenth century, noted that “the commonest error in politics is sticking to the carcasses of dead policies.” Leaders are heavily invested in their policies. To change their basic objectives will be to incur very high costs, including, in some cases, losing their offices if not their lives. Indeed the resistance to seeing that a policy is failing is roughly proportional to the costs that are expected if it does.
Decision problems are pervasive, and you can’t really make sense of events unless you are alert to them.
Let’s just be glad that Snowden hasn’t leaked the launch codes for US Trident submarines yet. But in the meantime, the continuing revelations about the bugging of foreign leaders are causing severe embarrassment to the US and to Obama personally.
No one should be naive enough to think states don’t use intelligence methods, and the US has dazzling technical capabilities. Still, it is humiliating for foreign leaders to look vulnerable in front of their own domestic public, and deep diplomatic damage is being done.
Is it worth it? I’ve talked before about the difficulties of making sense of intelligence. Despite immense resources, satellite technology, and billions of dollars of spending every year, the US intelligence community has nonetheless missed most of the major turning points of the last seventy years. The problem is that even if you have very valuable intercepts, you still have to understand and act on them.
That’s where you run into the problems and blind spots like confirmation bias and framing on which I focus intensively.
In fact, even in the midst of actual war, it can be hard to make use of intelligence. The US had broken Japanese diplomatic codes before Pearl Harbor, but still saw its Pacific battleship fleet sunk at anchor. Information alone is not enough.
I naturally don’t have access to current intelligence. However, we do have a relatively complete picture of earlier periods. The renowned British military historian John Keegan examined the net value of intelligence in Intelligence in War: Knowledge of the Enemy from Napoleon to Al-Qaeda.
Intelligence is never sufficient, he says, even operational intelligence on enemy deployments. There is little doubt that the Enigma intercepts of German U-boat positions in the Atlantic helped win the war, for example. But just as significant were changes in convoy arrangements and the greater availability of escorts. According to Keegan,
…however good the intelligence available before an encounter may appear to be, the outcome, given equality of force, will still be decided in a fight; and in a fight, determination, again given equality of force, will be the paramount factor.
The Allies had perfect foreknowledge of the German invasion of Crete in May 1941 – and still managed to lose, because of the determination of the German invaders. The glamor of secrecy and subterfuge often makes little practical difference on the ground.
Even the greatest US intelligence triumph of all was a near-run thing. US naval intelligence cunningly confirmed the target of the decisive Japanese strike by sending a false message about a water shortage on Midway Island in June 1942. Nonetheless, five squadrons of American aircraft were destroyed by the Japanese naval forces at Midway. Only sheer luck let a lost sixth dive-bomber squadron, at the limit of its fuel endurance, follow the wake of a Japanese destroyer to the main Japanese carriers and sink them.
Though Midway turned out to be a great American victory, in the making of which the intercept and decryption services played an essential part, it might have been exactly the opposite: a great American defeat, into which the US Pacific fleet had been drawn by the very success of its own intelligence operation.
People still have to make sense of and act on decrypts, and in most cases this is extraordinarily hard. Sometimes decrypts can tempt you into overconfidence, or into neglecting other, more practical measures. Most of the NSA’s data sits unread.
America’s enemies know we have remarkable electronic surveillance capabilities; but they count on lack of US political determination to see fights through.