Why do companies like SAC get themselves into such (alleged) trouble? It’s an important question for anyone who wants to make sure their own organization isn’t crippled by bad behavior. Just think how many incidents of foolish behavior we’ve seen in recent years, from LIBOR manipulation to Madoff’s theft and the exploitation of clients at some investment banks. HSBC narrowly avoided criminal indictment for money laundering, but got hit by a $2 billion fine.
It’s not just finance, either, despite finger-pointing by activists. The biggest-selling British newspaper was vaporized by phone-hacking journalists. Detroit has just gone bankrupt in large part because of a generation-long looting of the city by corrupt “progressive” politicians. And let’s not mention the church.
Sometimes there is clearly pure venality, and every barrel has a few rotten apples.
Just as often, I think, people simply can't see the serious risks they are running. Or certain kinds of behavior seem normal because "everyone is doing it," and it becomes convenient not to ask questions. Groupthink takes over. People lose perspective. They see the immediate gain and deny the existence of longer-term costs. They focus on one goal to the exclusion of all others, and of common sense as well. Indictments and billion-dollar fines follow.
As Dan Ariely says in his book Predictably Irrational: The Hidden Forces That Shape Our Decisions:
We can hope to surround ourselves with good, moral people, but we have to be realistic. Even good people are not immune to being partially blinded by their own minds. This blindness allows them to take actions that bypass their own moral standards on the road to financial rewards. In essence, motivation can play tricks on us whether or not we are good, moral people. As the author and journalist Upton Sinclair once noted, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
People find it convenient to be in denial right up to the point everything collapses. Decision-makers often have an astonishing capacity not to see things right in front of them. Blind spots destroy organizations.
There is a certain cast of mind that finds it very difficult to grasp or accept that something may be a very good idea up to a point – but not beyond it.
Some people yearn for universal rules. If the tools of economics or quantitative methods or Marxism or sociology or Kantian ethics or government regulation have any usefulness, they believe, they ought to apply to everything, always. If there is a problem, the answer is always even more of the same.
But common sense says that’s not actually true. Two Tylenol pills will cure a headache. That does not mean taking forty pills at once will be even better.
Call that the Tylenol Test. What is good in small doses can kill you in excess.
You’ll see I often talk about the limits of things here, whether it’s big data or academic economics or the impact of monetary policy. That does not mean those things – and others like them – are not useful, or valuable, or sometimes essential. I support all of them extremely strongly in the right circumstances. They can clear confusion and cure serious headaches.
It’s not a binary good/bad distinction.
Instead, it’s the right medicine at the right time in the right amount that keeps you healthy and makes you ready to seize opportunities. (That echoes the ancient Aristotelian idea of the golden mean, of course.)
If you overdose, you’re in trouble. Maybe that’s a universal rule….
People like to get excited about big data. Here's a discussion in the NYT today of where the term "Big Data" came from. It was probably the Chief Scientist of Silicon Graphics who started using it in lunch presentations in the 1990s.
Ever since I first looked at how he used the term, I've liked Mr. Mashey as the originator of Big Data. In the 1990s, Silicon Graphics was the giant of computer graphics, used for special effects in Hollywood and for video surveillance by spy agencies. It was a hot company in the Valley that dealt with new kinds of data, and lots of it.
Some people have vast faith in big data to solve problems.
Investors' hopes for Facebook and similar companies are tied to monetizing the data they are racking up on consumer behavior, as this earlier article in the MIT Technology Review explains. And sure, somebody is bound to figure out some useful ways to use that vast amount of information.
But big data is a tool, useful for some things in the way a hammer is good for driving nails but not much good for disassembling a Swiss watch.
During the housing boom I heard assurances from banking economists that mortgage credit scores had become astonishingly precise, an advanced means to "drill down" into a consumer's creditworthiness. It didn't work out that way.
So here's another op-ed piece from December: "Big Data Is Great. But So Is Intuition." In fact, these "scientific management" traditions and hopes are at least a century old, going back to Frederick Taylor and his stopwatch. And many people still find it hard to see limits.
At the M.I.T. conference, a panel was asked to cite examples of big failures in Big Data. No one could really think of any. Soon after, though, Roberto Rigobon could barely contain himself as he took to the stage. Mr. Rigobon, a professor at M.I.T.’s Sloan School of Management, said that the financial crisis certainly humbled the data hounds. “Hedge funds failed all over the world,” he said.
The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.
In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences.
And that is precisely the problem: human interests and preferences (and perceptions) are hard to model. Like so many things, big data is valuable up to a point. You have to know where that point is. Especially if you are a hedge fund.