
A Weakness in Artificial Intelligence

Here’s an interesting paradox. Expertise is mostly about better mental representation, say Ericsson and Pool in the book I was discussing earlier.

What sets expert performers apart from everyone else is the quality and quantity of their mental representations. Through years of practice, they develop highly complex and sophisticated representations of the various situations they are likely to encounter in their fields – such as the vast number of arrangements of chess pieces that can appear during games.  These representations allow them to make faster, more accurate decisions and respond more quickly and effectively in a given situation. This, more than anything else, explains the difference in performance between novices and experts.

This is an extension of Newell and Simon’s concept of chunks, which I talked  about before here.  (Ericsson worked with Simon at one point.)

Importantly, such representations are not at all the same as the parsimonious mathematical models beloved by economists. An expert may hold tens of thousands of specific chunks in their field, yet we still know very little about exactly what those representations consist of, or how they work.

Now consider what this means for artificial intelligence.  AI mostly abandoned attempts to develop better mental representations decades ago.  Marvin Minsky of MIT advocated “frames” in 1974, but the idea never turned into widely used working systems. To be sure, the expert systems of the 1980s encoded many simpler “if-then” rules, but they faltered in practice. And today there is a great deal of attention to semantic networks as a way to represent knowledge about the world. One example is DBpedia, which tries to present Wikipedia in machine-readable form. The Cyc project, commercialized by the company Cycorp, has been trying to hard-code knowledge about the world into a semantic network for decades.
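
To see how thin this kind of representation is, here is a minimal sketch in Python of what a semantic network amounts to: subject-predicate-object triples joined into a graph and queried by pattern matching. (The facts and the query function are invented for illustration; this is not DBpedia’s or Cyc’s actual interface.)

```python
# A semantic network in miniature: facts stored as subject-predicate-object
# triples, queried by simple pattern matching. Illustrative only.

triples = [
    ("Paris", "isCapitalOf", "France"),
    ("France", "isA", "Country"),
    ("Paris", "isA", "City"),
    ("Eiffel_Tower", "locatedIn", "Paris"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(predicate="isA"))         # everything with a type
print(query(subject="Eiffel_Tower"))  # everything known about the tower
```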

But semantic networks are flat and relatively uncomplicated. They are simple webs: graphs of nodes and links without much emergent structure.  As for the vector models used for things like machine translation, there is almost no representation at all: it is brute-force correlation, reliant on massive amounts of data. Recent advances in machine learning owe little to the kind of rich representations that experts build. Chess programs win by pruning search trees efficiently, rather than by representing the game the way a grandmaster does.
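
By way of contrast, here is a bare-bones sketch of the search-and-prune approach: minimax with alpha-beta pruning over a toy game tree. The tree and its leaf scores are made up; a real chess engine adds a move generator and an evaluation function, but the underlying recipe is to search harder, not to represent better.

```python
# Minimax with alpha-beta pruning over a toy game tree.
# The tree and leaf scores are invented; a real engine supplies a move
# generator and a position evaluator, but the principle is the same.

def alphabeta(node, depth, alpha, beta, maximizing):
    children = node.get("children")
    if depth == 0 or not children:
        return node["score"]              # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # prune: opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break                     # prune
        return value

# Two candidate moves, each answered by two replies.
tree = {"children": [
    {"children": [{"score": 3}, {"score": 5}]},
    {"children": [{"score": 2}, {"score": 9}]},
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 3
```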

Meanwhile, the main data structures in machine learning, such as the ndarray in Python’s NumPy scientific library or the DataFrame in the pandas library so widely used by data scientists, are essentially glorified tables or spreadsheets with better indices. They are not sophisticated at all.
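
A quick illustration of just how plain these structures are, using made-up numbers:

```python
# The workhorse structures of applied machine learning: a rectangular
# numeric array and a labeled table. Data here is invented.
import numpy as np
import pandas as pd

# An ndarray is an n-dimensional grid of values of a single dtype.
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(X.shape)  # (2, 3)

# A DataFrame is essentially a spreadsheet: named columns plus an index.
df = pd.DataFrame({"player": ["Alice", "Boris"],
                   "rating": [2100, 2450],
                   "games":  [310, 875]})
print(df.set_index("player").loc["Boris", "rating"])  # 2450
```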

So the most important thing in expertise is something that researchers struggle to reproduce or understand, despite all the hype about machine learning.  Correlation is not the same thing as representation or framing or chunks. That’s not to say AI can’t make much more progress in this area. But there’s an enormous gap here.

Future advances are likely to come from better kinds of data structures.  There’s little sign of that so far.

June 23, 2017 | Expertise, Quants and Models

Two very different kinds of expertise

Faith in experts has been faltering, as populists attack many established political and academic elites. So it is more important than ever to recognize genuine expertise. The trouble is that it is often hard to do, despite how much weight credentials carry in most areas.

Many studies of expertise don’t help that much. Take Anders Ericsson’s new book, Peak: Secrets from the New Science of Expertise. Ericsson is best known for his research into “deliberate practice.” Malcolm Gladwell popularized the idea by talking about the “ten thousand hours” necessary to pick up any advanced skill, but Ericsson is adamant that the kind of hours matters just as much as the amount. Experience alone is not going to improve your skills unless you push yourself outside your comfort zone, he says.  It’s all interesting stuff, and it’s worth reading.

The trouble is this approach applies almost entirely to fields where knowledge is cumulative, and where there are established teachers and teaching methods. You can practice similar situations over and over again. Examples include playing the piano or violin to a very advanced level,  and many games with set rules like chess, golf or soccer.

Most of the most important fields are not like this. As Ericsson notes in a brief aside,

What areas don’t qualify? Pretty much anything in which there is little or no direct competition, such as gardening or other hobbies, and many of the jobs in today’s workplace – business manager, teacher, electrician, engineer, consultant and so on.  These are not areas where you are likely to find accumulated knowledge about deliberate practice, simply because there are no objective criteria for superior performance. (p. 98)

That excludes most of the main areas in which decision-making skill is required. But people have a tendency to ignore the boundary conditions for this kind of research, its limits of applicability.

There’s a deeper problem lurking here as well, if you think about it.  Those other fields are more difficult because the underlying factors that lead to success keep changing, partly because competition in them means people develop new techniques or approaches. What worked in selling computers in 1950 will not work so well now – but the skills required to be a top-notch ballerina or violinist have hardly changed. The rules of the game stay the same, even if competition grows more intense and training techniques alter over time.

The root of the issue is that people generally confuse optimizing with adapting. They are not at all the same thing. Practicing the same skill over and over may optimize performance, but it is much less likely to lead to adaptation.

Where does this leave us? Expertise in an existing field or game is not the same thing as dealing with changes in the rules of the game. That is a wholly different kind of problem.  Indeed, experts may be some of the last people to recognize that the rules are changing, as they have so much invested in existing interpretations. That explains why scientists rarely change their minds, but science slowly evolves without them.

Standard decision science, and most standard economics and economic policy, is about optimizing rather than adapting. But adaptation is necessary for success, and a lack of adaptive skill is one reason why even the most credentialed experts often seem out of touch or wrong. To say so is not to deny expertise or science; instead, it is to advocate a different kind of expertise and science.

There are some other points to make, which I will come to shortly.

 

June 22, 2017 | Adaptation, Expertise

How academics and practitioners think differently

Here is an excellent article at The American Interest on the differences between how policymakers and academics think about international relations in the US. Some of these differences carry very important implications for policy. In general, scholars have (not surprisingly) drifted away from practical concerns, which limits their influence, author Hal Brands says.

International relations scholars—particularly political scientists—increasingly emphasize abstruse methodologies and write in impenetrable prose. The professionalization of the disciplines has pushed scholars to focus on filling trivial lacunae in the literature rather than on addressing real-world problems.

But practitioners and scholars also take very different positions on some substantive issues.  Practitioners are more concerned with American interests, while academics think more as “global citizens” or about the stability of the system as a whole.  Interestingly, one particular point of difference is attitudes to credibility.

Since the early Cold War, U.S. policymakers have worried that if Washington fails to honor one commitment today, then adversaries and allies will doubt the sanctity of other commitments tomorrow. Such concerns have exerted a profound impact on U.S. policy; America fought major wars in Korea and Vietnam at least in part to avoid undermining the credibility of even more important guarantees in other parts of the globe. Conversely, most scholars argue credibility is a chimera; there is simply no observable connection between a country’s behavior in one crisis and what allies and adversaries expect it will do in the next.

This is clearly extremely important.  I have more sympathy with the scholars on this one: many of the worst policy errors have been caused by “domino theories” of credibility.

It is also interesting that there is a gap at all between practitioners and academics in foreign policy. In economics, academics have largely captured policy-making, certainly in the US, over the last two decades. That naturally carries with it a certain style of thinking – and the outcome has been anything but encouraging, with enormous financial crises and volatility.

June 6, 2017 | Decisions, Expertise, Foreign Policy

How the most common prescription doesn’t cure bias

One of the main messages of most psychological research into bias, and the hundreds of  popular books that have followed,  is that most people just aren’t intuitively good at thinking statistically. We make mistakes about the influence of sample size, and about representativeness and likelihood. We fail to understand regression to the mean, and often make mistakes about causation.

However, suggesting statistical competence as a universal cure can lead to new sets of problems. An emphasis on statistical knowledge and tests can introduce its own blind spots, some of which can be devastating.

The discipline of psychology is itself going through a “crisis” over the reproducibility of results, as this Bloomberg View article from the other week discusses. One recent paper found that only 39 out of a sample of 100 psychological experiments could be replicated. That would be disastrous for psychology’s standing as a science: if results cannot be replicated by other teams, their validity must be in doubt. The p-value test of statistical significance is overused as a marketing tool, or as a way to get published. Naturally, there are some vigorous rebuttals in progress.
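
A rough simulation shows the mechanism. Run lots of underpowered studies of a small true effect, treat p < 0.05 as the bar for publication, and then rerun the “significant” ones once: most fail to clear the bar again. (The effect size, sample size and study counts below are invented for illustration, not taken from the replication project.)

```python
# Rough simulation of the replication problem: small true effect, small
# samples, and a p < 0.05 filter for "publication". Numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, studies = 0.2, 30, 5000   # small effect, 30 subjects per group

def one_study():
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

pvals = [one_study() for _ in range(studies)]
published = [p for p in pvals if p < 0.05]       # only "significant" results get noticed
replications = [one_study() for _ in published]  # rerun the same design once
rate = np.mean([p < 0.05 for p in replications])
print(f"{len(published)} 'significant' studies, {rate:.0%} replicate")
```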

It is, however, a problem for other disciplines as well, which suggests the issues are  genuine, deeper and more pervasive. John Ioannidis has been arguing the same about medical research for some time.

He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

The same applies to economics, where many of the most prominent academics apparently do not understand some of the statistical measures they use. A paper (admittedly from the 1990s) found that 70% of the empirical papers in the American Economic Review, the most prestigious journal in the field, “did not distinguish statistical significance from economic, policy, or scientific significance.”  The conclusion:

We would not assert that every economist misunderstands statistical significance, only that most do, and these some of the best economic scientists.
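
The distinction is easy to demonstrate with invented numbers: given a large enough sample, an economically trivial difference sails through a significance test.

```python
# With a big enough sample, a negligible difference becomes "statistically
# significant". All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000
# Two groups of annual returns (%); the true gap is 0.02 percentage points.
a = rng.normal(5.00, 2.0, n)
b = rng.normal(5.02, 2.0, n)

t, p = stats.ttest_ind(a, b)
print(f"p-value: {p:.2g}")                              # tiny, so "significant"
print(f"difference: {b.mean() - a.mean():.3f} points")  # ~0.02, economically trivial
```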

Of course, the problems and flaws in the statistical models used in the lead-up to the great crash of 2008 are also numerous and by now famous. If bank management and traders do not understand the “black box” models they are using, and their limits, tears and pain are the usual result.

The takeaway is not to impugn statistics. It is that people are nonetheless very good at making a whole set of different mistakes when they tidy up one aspect of their approach. More statistical rigor can also mean more blind spots to other issues or considerations, and use of  technique in isolation from common sense.

The more technically proficient and rigorous you believe you are, the more vulnerable you often become to wishful thinking or blind spots. Technicians have a remarkable ability to miss the forest for the trees, or for the twigs on the trees.

It also means there are (even more) grounds for mild skepticism about the value of many academic studies to practitioners.

March 30, 2016 | Big Data, Economics, Expertise, Psychology, Quants and Models

“Everyone was Wrong”

“From the New Yorker to FiveThirtyEight, outlets across the spectrum failed to grasp the Trump phenomenon.” – Politico

 

It’s the morning after Super Tuesday, when Trump “overwhelmed his GOP rivals“.

The most comprehensive losers (after Rubio) were media pundits and columnists, with their decades of experience and supposed ability to spot trends developing. And political reporters, with their primary sources and conversations with campaigns in late night bars. And statisticians with models predicting politics. And anyone in business or markets or diplomacy or politics who was naive enough to believe confident predictions from any of  the experts.

Politico notes how the journalistic eminences at the New Yorker and the Atlantic got it wrong over the last year.

But so did the quantitative people.

Those two mandarins weren’t alone in dismissing Trump’s chances. Washington Post blogger Chris Cillizza wrote in July that “Donald Trump is not going to be the Republican presidential nominee in 2016.” And numbers guru Nate Silver told readers as recently as November to “stop freaking out” about Trump’s poll numbers.

Of course it’s all too easy to spot mistaken predictions after the fact. But the same pattern has been arising after just about every big event in recent years. People make overconfident predictions, based on expertise, or primary sources, or big data, and often wishful thinking about what they want to see happen. They project an insidery air of secret confidences or confessions from the campaigns. Or disinterested quantitative rigor.

Then  they mostly go horribly wrong. Maybe one or two through sheer luck get it right – and then get it even more wrong the next time. Predictions may work temporarily so long as nothing unexpected happens or nothing changes in any substantive way. But that means the forecasts turn out to be worthless just when you need them most.

The point? You remember the old quote (allegedly from Einstein) defining insanity: repeating the same thing over and over and expecting a different result.

Markets and business and political systems are too complex to predict. That means a different strategy is needed. But instead there are immense pressures to keep doing the same things that don’t work – in media, and markets, and business. Over and over and over again.

So recognize and understand the pressures. And get around them. Use them to your advantage. Don’t be naive.

 

March 2, 2016 | Adaptation, Expertise, Forecasting, Politics, Quants and Models

Goldman gets it wrong

One interesting thing about markets (and politics and foreign policy) is that predictions from gurus and self-important experts have a remarkable tendency to turn out wrong. Take this Bloomberg report:

Goldman Sachs to clients: whoops. Just six weeks into 2016, the New York-based bank has abandoned five of six recommended top trades for the year.

The dollar versus a basket of euro and yen; yields on Italian bonds versus their German counterparts; U.S. inflation expectations: Goldman Sachs Group Inc. was wrong on all that and more.

In fact, the best investors tend not to show significantly better predictive ability than anyone else either.  The brutal truth is no-one is reliably good at predicting short-term market movements, although smarter players are more alert to some of the potential mistakes in their perspective.

Instead, they’re mainly better at changing their mind quickly – cutting their losses and getting out of positions that move against them. They manage their risks and survive to fight another day.  They figure out when they are wrong faster.

They don’t make huge overconfident predictions. Instead, they watch their exposure, as Nassim Nicholas Taleb argues.

Of course, it’s unlikely that Goldman was putting its own money behind these predictions to the bitter end. The successful traders at investment banks historically don’t pay much attention to their own economists and research in any case. As they see it, the research is really for the pension fund managers in Cleveland or Edinburgh. And press reporting of market swings is the lowest “dumb money” tier of all – something to bear in mind when the markets are scary like today.

February 11, 2016 | Current Events, Expertise, Market Behavior

“And no one saw it coming.” Again. And again.

Peggy Noonan, writing today about the state of the US GOP primary race:

But really, what a year. Nobody, not the most sophisticated expert watching politics up close all his life, knew or knows what’s going to happen. Does it go to the convention? Do Mr. Trump’s roarers turn out? Does he change history?

And no one saw it coming.

But the press and TV and political and economic research firms will drown you in speculation and commentary and confident predictions. That’s yet another reason to distrust them, as I keep arguing. Instead, look for leverage and resilience.  Don’t get locked into a convenient narrative. It’s what you can do to change your own thinking and position that counts.

December 18, 2015 | Assumptions, Expertise, Forecasting, Politics

The sun doesn’t go round the earth, after Paris

Evidence should be a fundamental part of any discussion of what to do in the wake of the Paris attacks, I said yesterday.  Do you agree with that? Instead we most often make assumptions about “what the terrorists want,” or discuss things on such an abstract level (“they hate freedom”) that there’s little link to reality at all.

The trouble goes much deeper, though, because even when people use evidence (which is something to be grateful for), they cherry-pick it. It’s riddled with confirmation bias, and it mostly doesn’t prove anything at all.

Remember this in reading all the op-eds from experts on terrorism and the Middle East you’ll see in the next few weeks: the track record of expert predictions in this area is about as good as that of dart-throwing chimps.  Predictions from the most learned Syria and ISIS and intelligence experts are likely to be useless, just like most economic and political predictions. People can know almost everything about the issue – and still get things completely wrong.

If gathering information and evidence alone clearly isn’t enough, what do we do?

Here’s the further essential thing to grapple with: the most likely explanation or hypothesis is not the one with the most information lined up for it. It’s the one with the least information against it.

That rule is taken from Richards Heuer’s Psychology of Intelligence Analysis, and lies at the root of his method of Analysis of Competing Hypotheses.

The root of the problem is that most information can be consistent with a whole variety of explanations. You can integrate it into a number of completely different, satisfying, and incompatible stories. That means the most genuinely useful information is diagnostic; that is, it is consistent with only one or a few explanations. It helps you choose between different explanations.

Think of it this way. There was plenty of seemingly obvious evidence for thousands of years that the sun went round the earth.  The fact that the sun rises and sets could be read as consistent with either the sun or the earth at the center of the solar system. So that evidence doesn’t help very much. You need to find evidence that can’t be read in favor of both. (That’s another story.)
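
Heuer turns this into a procedure: lay out the competing hypotheses, score each piece of evidence against each one, and prefer the hypothesis with the fewest pieces of evidence against it rather than the most in its favor. Here is a toy sketch of the idea in Python, using the sun-and-earth example (the scoring labels are my own shorthand, not Heuer’s notation):

```python
# Toy Analysis of Competing Hypotheses: prefer the hypothesis with the
# FEWEST inconsistencies, not the most confirmations.
# "C" = consistent, "I" = inconsistent.

evidence = {
    "The sun rises and sets daily":     {"Geocentric": "C", "Heliocentric": "C"},  # not diagnostic
    "Venus shows a full set of phases": {"Geocentric": "I", "Heliocentric": "C"},  # diagnostic
    "Stellar parallax can be observed": {"Geocentric": "I", "Heliocentric": "C"},  # diagnostic
}

def rank(evidence):
    hypotheses = {h for scores in evidence.values() for h in scores}
    against = {
        h: sum(scores[h] == "I" for scores in evidence.values())
        for h in hypotheses
    }
    return sorted(against.items(), key=lambda kv: kv[1])  # least evidence against wins

print(rank(evidence))  # [('Heliocentric', 0), ('Geocentric', 2)]
```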

But that essential diagnostic information can be surprisingly difficult to find, especially because people rush to find facts that fit with their existing views.

What happens instead is the more information people gather, the more (over)confident they become of their point of view, regardless of the validity or reliability of the information.  They don’t think about the information. They just more or less weigh the total amount of it.

So what is needed instead is a kind of disconfirmation strategy.

Hold on, you might say. Doesn’t this mean we have to stop and think for a moment before jumping to our favorite recommendation? And isn’t that a pain which we’d rather avoid? Isn’t that uncomfortable?  Isn’t this a little austere and unglamorous compared with colorful and vivid stories and breathless reporting?

Yes. Repeat: Yes. All the information and opinion and sourcing and satellite photography in the world doesn’t help you if you ignore disconfirmation. And it’s a lot less painful than wasting billions of dollars and potentially thousands of lives, and failing.  There are some very practical ways to do it, too.

 

 

November 16, 2015 | Assumptions, Confirmation bias, Expertise, Security

One way to tell you’re getting into trouble

People have a tendency to think highly confident calls are a sign of expertise and credibility. More often, it’s a sign of ignorance and naiveté. That’s one of the lessons of scientific method.

Whenever a theory appears to you as the only possible one, take this as a sign you have neither understood the theory nor the problem which it was intended to solve.

Karl Popper, Objective Knowledge: An Evolutionary Approach

The trick is not to take confidence (or prominence) as a sign of credibility, or fall in love with a particular narrative. Instead, find a way to test assumptions and find the boundary conditions.

February 13, 2015 | Assumptions, Decisions, Expertise, Human Error

The Right Level of Detachment

One of the keys to why decisions fail is that we often have a tendency to look for universal, across-the-board rules for what to do. Yet success most often comes from maintaining the right kind of balance between different rules or requirements. Remember, it’s usually the assumptions people make that cause them to lose money or fail.

In fact, one of the best ways to look for those blind spots and hidden assumptions is to look for the balances that have been ignored.

One of the most important of these is the right balance between looking at a situation in general and specific terms.

Take the last post about medical errors. A physician or surgeon has to be alert to the immediate details of a particular patient. You can’t ignore the specifics. You can’t assume distinctive individual characteristics can be inferred from the general.

This happens a lot in applied or professional situations. Good reporters or detectives or  short-term traders also often tend to focus on “just the facts, ma’am”, and get impatient with anything that sounds abstract. Their eyes glaze over at anything which is not immediate and tangible. Foreign exchange traders have traditionally often known next to nothing about the actual underlying countries or economics in a currency pair, but are very attuned to short-term market conditions and sentiment.

But the essence of expertise is also being able to recognize more general patterns. The seminal investigation of expertise was Chase and Simon’s description of “chunking” in 1973. They studied chess grandmasters and found that they organized the information about pieces on the board into broader, larger, more abstract units. Years of experience gave grandmasters a larger “vocabulary” of such patterns, which they quickly recognized and used effectively. More recent work also finds that

Experts see and represent a problem in their domain at a deeper (more principled) level than novices; novices tend to represent the problem at a superficial level. – Glaser & Chi, quoted on p. 50 of The Cambridge Handbook of Expertise and Expert Performance

Indeed, one of the biggest reasons experts fail is because they fail to use that more general knowledge: they get too close-up and engrossed in the particular details. They show off their knowledge of irrelevant detail and “miss the forest for the trees.” They tend to believe This Time Is Different.

This is why simple linear models most often outperform experts: they weigh information in a consistent way without getting caught up in the specifics. It is also why taking what Kahneman calls the “outside view,” and using base rates, is essential in most situations, including project management.  You have to be able to step back and think about what generally happens, and that requires skill in perceiving similarities, analogies, and patterns.
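
This is the logic behind what Robyn Dawes called “improper” linear models: pick a few relevant cues, weight them the same way every time, and apply the rule to every case. A minimal sketch with invented cues and candidates:

```python
# A Dawes-style "improper" linear model: the same few cues, equal weights,
# applied consistently to every case. Cues and scores are invented.
import numpy as np

cues = ["past_performance", "structured_test", "relevant_experience"]
weights = np.array([1.0, 1.0, 1.0])       # equal ("unit") weights

candidates = {                            # standardized cue scores (z-scores)
    "A": np.array([ 0.8,  0.3,  1.1]),
    "B": np.array([ 1.5, -0.2, -0.4]),
    "C": np.array([-0.3,  0.9,  0.6]),
}

scores = {name: float(weights @ z) for name, z in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))          # A 2.2, C 1.2, B 0.9
```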

People’s mindsets often tend to push them to one extreme or the other. Too little abstraction and chunking? You get lost in specific facts and noise and don’t recognize patterns. You don’t represent the problem accurately, if you see it at all.  You get too close, and too emotive, and too involved. On the other hand, too much detachment, and you lose “feel” and color and tacit knowledge. You become the remote head office which knows nothing about front line conditions, or the academic who is committed to the theory rather than the evidence. You lose the ability to perceive contrary information, or see risks that do not fit within your theory.

You need the right balance of general and specific knowledge. But maintaining that kind of balance is incredibly hard, because people and organizations tend to get carried away in one direction or the other. Looking for signs of that can tell you what kind of mistaken assumptions or blind spots are most likely. Yet people hardly ever monitor that.

 

December 15, 2014 | Assumptions, Base Rates, Decisions, Expertise, Outside View