The right kind of excitement about AI

We are currently at peak excitement about Artificial Intelligence (AI), machine learning and data science. Can the new techniques fix the rotted state of economics, with its baroque models that failed during the financial crisis? Some people suggest that the economics profession needs to step aside in favor of those who actually have real math skills. Maybe Artificial Intelligence can finally fix economic policy, and we can replace the Federal Reserve (and most of financial markets) with a black box or three?


It’s not that easy, unfortunately. There is a hype cycle to these things. Hopes soar (as expectations for AI did as far back as the late 1960s). Inflated early expectations get dashed. There is a trough of disillusionment, and people dismiss the whole thing as a fad that entranced the naive and foolish. But eventually the long-term results are substantive and real – and not always what you expected.


The developments in data science are very important, but it’s just as important to recognize the limits and the lags involved. The field will be damaged if people overhype it too much, or expect too much too soon. That has often happened with artificial intelligence (see Nils Nilsson’s history of the field, The Quest for Artificial Intelligence).

People usually get overconfident about new techniques. As one Stanford expert put it in the MIT Technology Review a few weeks ago,

For the most part, the AI achievements touted in the media aren’t evidence of great improvements in the field. The AI program from Google that won a Go contest last year was not a refined version of the one from IBM that beat the world’s chess champion in 1997; the car feature that beeps when you stray out of your lane works quite differently than the one that plans your route. Instead, the accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that’s not happening.


I think that if better AI models come to economics, it won’t be through machine learning itself – like pulling a kernel trick on a support vector machine – so much as through a side-effect of new kinds of data becoming cheap and easy to collect. The gain will be more granular, near-instantaneous data availability, not the application of hidden Markov models or acyclic graphs. Nonlinear dynamics is still too hard: such models are inherently unpredictable, and the economy has a tendency to change when observed (Goodhart’s law with a miaow from Heisenberg’s cat).
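The unpredictability claim can be illustrated with a toy example (my sketch, not from the post): in the chaotic logistic map, a one-line nonlinear system, two trajectories that start a billionth apart diverge to a macroscopic gap within a few dozen steps, so even a perfect model fed near-perfect data loses all point-forecasting power.

```python
# Toy illustration (hypothetical, not an economic model): sensitive dependence
# on initial conditions in the chaotic logistic map x -> r*x*(1-x) with r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-9   # two states differing by one part in a billion
for _ in range(50):
    x, y = logistic(x), logistic(y)

# After 50 iterations the 1e-9 gap has grown by many orders of magnitude,
# so a forecast built from slightly imperfect measurements is worthless.
print(abs(x - y))
```

The errors roughly double each step, so no realistic improvement in measurement precision buys more than a few extra steps of forecast horizon.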


None of that new data will help, either, if people don’t notice it because they are overcommitted to their existing views. Data notoriously does not change people’s minds as much as statisticians think.


AI has made huge strides on regular, repetitive tasks with millions or billions of repeatable events. I think it’s fascinating stuff. But general AI is as far away as ever. The field has a cycle of getting damaged when hopes run ahead of near-term reality, which then produces an AI winter. Understanding a multi-trillion-dollar continental economy is much, much harder than machine translation or driving.


There’s also a much deeper reason why machine learning isn’t a panacea for understanding the economy. A modern economy is a complex dynamic system, and such systems are not predictable, even in principle. Used correctly, machine learning can help you adapt, or change your model. But it is perhaps much more likely to be misused, because of overenthusiastic belief that it is a failsafe answer. Silicon Valley may be spending billions on machine learning research, with great success in some fields like machine translation. But there’s far less effort to spot gaps, missing assumptions and the lack of error-control loops – and that’s what interests me.
May 31, 2017 | AI and Computing, Forecasting, Quants and Models, Systems and Complexity