
When to dump a leader (Pelosi edition)

Many “leaders” have a tendency to think that they ought to keep doing the same thing, but with more “passion,” or intensity, or resources. As I said in the post below, however, optimizing is not the same as adapting to a changed situation. There are many situations in which more persistence and determination just get you more trapped in doing the wrong thing. It’s essential to recognize them. Most people don’t.

The unfortunate consequence is that change then requires a change in leadership as well. Maybe the Democratic Party is realizing that after multiple defeats: “Pelosi’s Democratic Critics Plot to Replace Her.” If things are persistently not working, try someone with a fresh look.

That also means that if you’re a leader it’s better to look for ways to adapt or change your mind before people plot to remove you after a massive setback.  The oldest danger of leadership is woodenheadedness. Yet most leaders hire consultants to put a theoretical or quantitative veneer on what they already think.

June 23, 2017 | Adaptation, Decisions, Human Error, Organizational Culture and Learning

Rogues or Blind Spots?

I looked the other day at how Volkswagen could go so wrong. There is almost always a rush to blame human error or subordinates, I said. Some of those individuals may be genuinely criminal and deserve jail time. But more often the problem is also systemic: management doesn't see, or want to see, problems coming.

Now here's a piece in Harvard Business Review on the issue. Of course, it was rogue employees, says Volkswagen management.

Testifying unhappily before America’s Congress, Volkswagen of America CEO Michael Horn adamantly and defiantly identified the true authors of his company’s disastrous “defeat device” deception: “This was not a corporate decision. No board meeting or supervisory meeting has authorized this,” Horn declared. “This was a couple of rogue software engineers who put this in for whatever reason.”

Ach, du lieber! Put aside for the moment what this testimony implies about the auto giant’s purported culture of engineering excellence. Look instead at what’s revealed about Wolfsburg’s managerial oversight: utter and abysmal failure. No wonder Chairman and CEO Martin Winterkorn had to resign. His “tone at the top” let roguery take root.

The author is an MIT expert on the software processes at issue.

Always look to the leadership. Where were Volkswagen’s code reviews? Who took pride and ownership in the code that makes Volkswagen and Audi cars run? For digitally-driven innovators, code reviews are integral to healthy software cultures and quality software development.

Good code is integral to how cars work now, he says. And to write good code the Googles and Facebooks of the world have code review systems with some form of openness, even external advice or review, so that murky code is found out.

As we learned from financial fiascoes and what will be affirmed as Volkswagen’s software saga unwinds, rogues don’t exist in spite of top management oversight, they succeed because of top management oversight.

It can be comforting, in a way, to think that problems or bad decisions occur only because of individual stupidity or bias or error or ignorance. If people, and organizations as a whole, don't even consciously see many problems coming, or ignore trade-offs, it's more disturbing and harder to solve. Most information and analysis will tend to reinforce their point of view. Single-minded mania also often produces short-run financial success.

Until the darkness comes. Leaders have to be held accountable for finding their blind spots. They can't claim ignorance after the fact.

 

October 16, 2015 | Current Events, Decisions, Human Error, Organizational Culture and Learning

Volkswagen: What were they thinking?

It may turn into one of the most spectacular corporate disasters in history. What were Volkswagen thinking? Even after it became apparent that outsiders had noticed a discrepancy in emissions performance in on-the-road tests, the company still kept stonewalling and continued to sell cars with the shady software routines.

We won't know the murky, pathological details for a while. But understanding how this happens is urgent. If you ignore this kind of insidious problem, billion-dollar losses and criminal prosecutions can occur.

In fact, it's usually not just one or two “bad apples,” unethical criminals who actively choose stupid courses of action, although it often suits politicians and media to believe so. It's a system phenomenon, according to some of the classic studies (often Scandinavian) like Rasmussen and Svedung.

.. court reports from several accidents such as Bhopal, Flixborough, Zeebrügge, and Chernobyl demonstrate that they have not been caused by a coincidence of independent failures and human errors. They were the effects of a systematic migration of organizational behavior toward accident under the influence of pressure toward cost-effectiveness in an aggressive, competitive environment.

It's not likely anyone formally sat down and did an expected utility calculation, weighing the financial and other benefits of installing cheat software against the chance of being found out times the consequent losses. So the usual way of thinking formally about decisions doesn't easily apply.
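
For illustration only, here's a sketch of the kind of calculation that would have been involved. Every number in it is invented; the point is the structure of the comparison, not the figures.

```python
# Illustration only: the kind of expected-utility comparison the text says
# nobody seems to have actually made. Every number below is invented.

def expected_net_payoff(benefit, p_detection, loss_if_caught):
    """Benefit of cheating minus the chance of being found out times the loss."""
    return benefit - p_detection * loss_if_caught

# Hypothetical figures, in billions of dollars.
benefit_of_cheating = 2.0    # avoided engineering cost, extra diesel sales
probability_detected = 0.5   # chance that independent road tests expose it
loss_if_caught = 30.0        # fines, recalls, lawsuits, brand damage

ev = expected_net_payoff(benefit_of_cheating, probability_detected, loss_if_caught)
print(f"Expected net payoff of cheating: {ev:+.1f} bn")  # -13.0 bn: a terrible bet
```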

It's much more likely that it didn't occur to anyone in the company to step back and think it through. They didn't see the full dimensions of the problem. They denied there was a problem. They had blind spots.

It can often be hard to even find any point at which decisions were formally made. They just … happen. Rasmussen & co again:

In traditional decision research ‘decisions’ have been perceived as discrete processes that can be separated from the context and studied as an isolated phenomenon. However, in a familiar work environment actors are immersed in the work context for extended periods; they know by heart the normal flow of activities and the action alternatives available. During familiar situations, therefore, knowledge-based, analytical reasoning and planning are replaced by a simple skill- and rule-based choice among familiar action alternatives, that is, on practice and know-how.

Instead, the problem is likely to be a combination of the following:

  • Ignoring trade-offs at the top. Major accidents happen all the time in corporations because often the immediate benefits of cutting corners are tangible, quantifiable and immediate, while the costs are longer-term, diffuse and less directly accountable. They will be someone else's problem. The result is longer-term, more important goals get ignored in practice. Indeed, to define something as a technical problem or set strict metrics often embeds ignoring a set of trade-offs. So people never think about it and don't see problems coming.
  • Trade-offs can also happen because general orders come from the top – make it better, faster, cheaper and also cut costs – and reality has to be confronted lower down the line, without formally acknowledging choices have to be made. Subordinates have to break the formal rules to make it work. Violating policies in some way is a de facto requirement to keep your job, and then it is deemed “human error” when something goes wrong. The top decision-maker perhaps didn't formally order a deviation: but he made it inevitable. The system migrates to the boundaries of acceptable performance as lots of local, contextual decisions and non-decisions accumulate.
  • People make faulty assumptions, usually without realizing it. For example, did anyone think through how easy it is to conduct independent on-the-road tests? That was a critical assumption bearing on whether they would be found out.
  • If problems occur, it can become taboo to even mention them, particularly when bosses are implicated. Organizations are extremely good at not discussing things and avoiding clearly obvious contrary information. People lack the courage to speak up. There is no feedback loop.
  • Finally, if things do go wrong, leaders have a tendency to escalate, to go for double-or-quits. And lose.

There scarcely seems to be a profession or industry or country without problems like this. The Pope was just in New York apologizing for years of Church neglect of the child abuse problem, for example.

But that does not mean that people are not culpable and accountable and liable for things they should have seen and dealt with. Nor is it confined to ethics or regulation. It is also a matter of seeing opportunity. You should be able to see these things coming. But how? That's what I'm interested in.

It's essential for organizational survival to confront these problems of misperceptions and myopia. They're system problems. And they are everywhere. Who blows up next?

One way to tell you’re getting into trouble

People have a tendency to think highly confident calls are a sign of expertise and credibility. More often, it’s a sign of ignorance and naiveté. That’s one of the lessons of scientific method.

Whenever a theory appears to you as the only possible one, take this as a sign you have neither understood the theory nor the problem which it was intended to solve.

Karl Popper, Objective Knowledge: An Evolutionary Approach

The trick is not to take confidence (or prominence) as a sign of credibility, or fall in love with a particular narrative. Instead, find a way to test assumptions and find the boundary conditions.

February 13, 2015 | Assumptions, Decisions, Expertise, Human Error

Why Do Doctors Fail? People are more like Hurricanes than Ice cubes

Here’s something which is definitely worth listening to: Atul Gawande is giving the BBC’s prestigious Reith lectures on radio and podcast this year. A Professor at Harvard Medical School, Gawande is also a gifted writer of books such as The Checklist Manifesto.

He ponders why medical errors happen, despite the huge advances in medical science in the last fifty years. We know so much more than we did. So why do some of the brightest and most credentialed people in society still often get things wrong? Why does professional training and experience and skill often come up short? Gawande comes at it from a very practical angle – as a father whose baby son almost died from a missed heart defect, as a former Clinton advisor on health policy, as a practicing doctor and surgeon.

A large part of his answer is that we tend to assume mistakes are always culpable, and best ignored or concealed as quickly as possible. The first question in many organizations when something goes wrong is not how to fix it, but who to blame.

But the problem is that some error is inevitable, especially in fields which are open to experiment and revision. It is not necessarily a sign of ignorance or incompetence (although we can all think of times when it clearly is one or both of those problems). That means that if you assume all error is just a sign of ignorance or ineptitude, you can get into very serious trouble.

It goes to the nature of science itself, Gawande says. He is fascinated by a classic 1976 article, Toward a Theory of Medical Fallibility, by Samuel Gorovitz and the (very famous) philosopher Alasdair MacIntyre.

Some error is due to ignorance, the article says. Some is due to ineptitude – and this is where transparency and checklists can most help. But there is also a third category, which are things which are beyond the ability of science to predict individually (or even statistically.)

General laws can predict with amazing reliability what will happen to an ice cube put in an oven to roast. There is little significant variation among ice cubes in this respect. Yet, the article says, general laws cannot easily predict the exact behavior of a particular hurricane – and people are more often like hurricanes than ice cubes. For many practical tasks, what matters most is not what is generalized, but what is distinctive and unique about the individual case. (We still need vast amounts of particular, individual data to forecast hurricane tracks.) According to the Gorovitz and MacIntyre article,

Thus, principles of crystal formation or solubility are inferred from the observable characteristics of diverse particular crystals, but the differences among such crystals are not to the point; it is their similarities that support generalization. In contrast, what is important to the meteorologist, navigator, or veterinary surgeon is an understanding of particular, individual hurricanes, cloud formations, or cows, and thus what is distinctive about them as particulars is what is of crucial importance. How such particulars differ from one another in their diversity thus becomes as important as the characteristics they commonly share. Experience of a single entity over time is necessary for an understanding of that entity as a particular in all its distinctiveness, for its individual characteristics will not typically be inferable simply from what is known about the general—that is, commonly shared—characteristics of the type of entity of which it is an instance.

So some fallibility about particulars is unavoidable. Consider what this means for the economic and policy sphere as well, with its record of pervasive failed attempts to forecast and intervene in the economy. General theory is often of little use.

What can you do about it?  Gawande emphasizes the importance of attention to detail, something he says was often lacking in much of the response to the Ebola virus in US hospitals. We understand how to contain a virus, he says, but people were careless or lacked knowledge about how to take contaminated clothes off afterwards.

But much has to do with a sense of boundary conditions, and  being open about and learning about errors – which is precisely what organizations often find most difficult to do. They ignore them, or “fight the last war” by overreacting to the most recent problem and ignoring the next difficulty. Or they believe all problems are like ice cubes, and develop linear models to handle them.  And academic disciplines are even more likely to dismiss challenges to cherished general theories.

It is in essence about being sensitive to contrary information, and that is what we focus on.  Think hurricanes instead of ice cubes. But it’s also a matter of the right level of detachment, which I’ll talk about next.

December 8, 2014 | Assumptions, Decisions, Human Error

“People court failure in predictable ways.”

The basic driving necessity for me is to ask why so many decisions go wrong – in government, in business, in markets – and what can be done about it.

One answer is definitely wrong. Very few senior decision-makers need more information. They are drowning in it already. They don’t have the time to absorb it. They have to fight multiple urgent fires. They seldom have the chance to step back and look at the bigger picture, and theories about the economy and markets have proved highly unreliable in any case.

Another answer is not to think about how decisions go wrong at all.  People can often take a fatalistic attitude to problems, especially when they look at current events filled with economic policy failure, market misapprehension and instability, and geopolitical tension and war.  But that guarantees failure. And there are positive ways to lift the game.

So what is going to move the needle on performance? Recognizing patterns that lead to flawed decisions and mistakes. Putting markers down which alert you to turning points and thresholds. We need to look at process and how people react to evidence and arguments.

And there are fifty years of research on that, very little of which has become widely known. One example is Dietrich Dörner’s classic The Logic of Failure.

Failure does not strike like a bolt from the blue; it develops gradually according to its own logic. As we watch individuals attempt to solve problems, we will see that complicated situations seem to elicit habits of thought that set failure in motion from the beginning.  From that point, the complexity of the task and the growing apprehension of failure encourage methods of decision-making that make failure even more likely, and then inevitable. We can learn, however. People court failure in predictable ways.

The starting point for Dörner is not bias. It is complexity, and the predictable ways in which people fail to deal adequately with it. People think in straight lines, but the world works in loops. It is those same dynamic loops that cause problems for many linear algorithms and models, and leave an essential opening for humans to make a difference.
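
To make the straight-lines-versus-loops point concrete, here’s a small sketch (not from Dörner, and with arbitrary, invented parameters): a straight-line extrapolation from a few early observations badly misjudges a process that feeds back on itself.

```python
# A minimal sketch of "straight lines vs. loops". The growth process below
# feeds back on itself (logistic growth with arbitrary, invented parameters);
# a straight-line extrapolation from its early behavior misses badly.

def loop_step(x, rate=0.5, capacity=100.0):
    """One step of growth that slows as the system approaches its capacity."""
    return x + rate * x * (1 - x / capacity)

values = [1.0]
for _ in range(24):
    values.append(loop_step(values[-1]))

# Straight-line thinking: take the early growth per step and extend it.
early_slope = values[3] - values[2]
linear_guess = values[3] + early_slope * (len(values) - 4)

print(f"Straight-line extrapolation at step 25: {linear_guess:6.1f}")
print(f"Actual value of the feedback system:    {values[-1]:6.1f}")
```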

 

September 1, 2014 | Decisions, Human Error, Systems and Complexity

Markets are Complex Systems – but most people don’t get that

One of the most successful investors of recent times has been Howard Marks. I took a look at his book about markets here. You can’t outperform if you don’t think better.

Thus, your thinking has to be better than that of others—both more powerful and at a higher level. Since other investors may be smart, well-informed and highly computerized, you must find an edge they don’t have. You must think of something they haven’t thought of, see things they miss or bring insight they don’t possess. You have to react differently and behave differently. In short, being right may be a necessary condition for investment success, but it won’t be sufficient. You must be more right than others . . . which by definition means your thinking has to be different.

First-level thinking, he says, is just having an opinion or forecast about the future. Second-level thinking, on the other hand, takes into account expectations, and the range of outcomes, and how people will react when expectations turn out to be wrong. Second-level thinkers are “on the alert for instances of misperception.”

Here’s a parallel. Marks doesn’t put it this way, but in essence it’s a matter of seeing markets as a nonlinear adaptive system, in the sense I was talking about in the last post. Second-level thinking is systems thinking. Instead of linear straight lines, markets react in complex feedback loops which depend on the existing stock of perception (i.e., expectations). Some of the greatest market players have an instinctive feel for this. But because of the limits of the human mind when it comes to complex systems, most people have a great deal of trouble understanding markets.

That includes many mainstream economists. One obvious reason is that price and price changes are one of the most important feedback loops in markets, but not the only one. A deeper reason is that most academics tend to be hedgehogs, interested in universal explanatory theories and linear prediction and “one big thing.” But complex systems frustrate and falsify universal theories, because they change. The dominant loop changes, or new loops are added, or new players or goals change the nature of the system.

There’s another implication if you have a more systems-thinking view of markets. Complex adaptive systems are not predictable in their behavior. This, to me, is a deeper reason for the difficulty of beating the market than efficient market theory. It isn’t so much that markets are hyper-efficient information processors that instantaneously adjust, as the fact that they are complex. So consistent, accurate prediction of their future state is impossible. It isn’t so much that markets are mysteriously prone to statistically improbable 100- or 1000-year risks happening every 10 years. It’s that markets evolve and change, and positive feedback loops can take them into extreme territory with breathtaking speed, making their behavior stray far from norms and equilibria.

“Tail Risks” are not the far end of a probability distribution,  as standard finance theory and policy thinking believes. They are positive feedback loops: cascades of events feed back on each other and change the behavior of the underlying system.  It’s not variance and volatility and fat-tailed distributions, but a matter of stocks and flows and feedback,  and tipping points which shift the dominant loop, and the underlying structure and changing relationship between components.
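
To illustrate (this is a toy with invented parameters, not a model of any real market): compare paths driven by independent shocks with paths in which each move feeds back into the next. The feedback paths stray into extreme territory far more often.

```python
# A toy contrast, with invented parameters (not a model of any real market):
# price paths driven by independent shocks vs. paths where each move feeds
# back into the next one. The feedback paths stray into extremes far more often.
import random

def worst_excursion(feedback, steps=250, trials=2000):
    """Largest absolute move from the start seen across all simulated paths."""
    worst = 0.0
    for _ in range(trials):
        level, prev_shock, extreme = 0.0, 0.0, 0.0
        for _ in range(steps):
            shock = random.gauss(0, 1) + feedback * prev_shock  # loop: last move amplifies the next
            level += shock
            prev_shock = shock
            extreme = max(extreme, abs(level))
        worst = max(worst, extreme)
    return worst

random.seed(1)
print("Worst excursion, independent shocks:", round(worst_excursion(0.0), 1))
print("Worst excursion, positive feedback :", round(worst_excursion(0.6), 1))
```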

This view also helps understand why markets and policy resist change and stay in narrow stable ranges for long periods. Balancing feedback loops tend to kick in before long, producing resistance and inertia and cycles and pendulums, and making “this time it’s different” claims frequently a ticket to poverty.  Delays and time effects and variable  lags and cumulative effects matter profoundly in a way that simply doesn’t show up in linear models. Differential survival means evolutionary selection kicks in, changing behavior.

How can you make money if you can’t predict the future in complex systems, then? It’s clearly possible. Marks is a dazzlingly successful investor whose core belief is to be deeply skeptical of people who think they can make accurate predictions.

Awareness of the limited extent of our foreknowledge is an essential component of my approach to investing. I’m firmly convinced that (a) it’s hard to know what the macro future holds and (b) few people possess superior knowledge of these matters that can regularly be turned into an investing advantage.

You might be able to know more than others about a single company or security, he says. And you can figure out where we might be in a particular cycle or pendulum. But broad economic forecasts and predictions are essentially worthless. Most forecasting is just extrapolation of recent data or events, and so tends to miss the big changes that would actually help people make money.

One key question investors have to answer is whether they view the future as knowable or unknowable. Investors who feel they know what the future holds will act assertively: making directional bets, concentrating positions, levering holdings and counting on future growth—in other words, doing things that in the absence of foreknowledge would increase risk. On the other hand, those who feel they don’t know what the future holds will act quite differently: diversifying, hedging, levering less (or not at all), emphasizing value today over growth tomorrow, staying high in the capital structure, and generally girding for a variety of possible outcomes.

In other words, a belief in prediction tends to go with a belief in making overconfident, aggressive big bets, sometimes being lucky – and then flaming out. The answer? Above all, control your risks, Marks says. Markets are a “loser’s game”, like amateur tennis. It’s extremely hard to hit winners. Instead, avoid hitting losers. Make sure you have defense as well as offense.

Offense is easy to define. It’s the adoption of aggressive tactics and elevated risk in the pursuit of above-average gains. But what’s defense? Rather than doing the right thing, the defensive investor’s main emphasis is on not doing the wrong thing.

Thinking about what can go wrong is not purely negative, however. It’s not a matter of being obsessed with biases. Instead, it’s a way to be more creative and agile in adapting to change. If markets are complex systems, the key, as Herbert Simon puts it, is not prediction but “robust adaptive procedures.”

To stress the point again – people don’t intuitively understand systems. And many of our analytic tools and standard theories grasp them even less. But that’s the way markets and policy work.

 

August 9, 2014 | Decisions, Human Error, Investment, Market Behavior, Perception, Risk Management

Two Kinds of Error (part 3)

I’ve been talking about the difference between variable or random error and systemic or constant error. Another way to put this is the difference between precision and accuracy. As business measurement expert Douglas Hubbard explains in How to Measure Anything: Finding the Value of Intangibles in Business,

“Precision” refers to the reproducibility and conformity of measurements, while “accuracy” refers to how close a measurement is to its “true” value. .. To put it another way, precision is low random error, regardless of the amount of systemic error. Accuracy is low systemic error, regardless of the amount of random error. … I find that, in business, people often choose precision with unknown systematic error over a highly imprecise measurement with random error.

Systemic error is also, he says, another way of saying “bias”, especially expectancy bias – another term for confirmation bias, seeing what we want to see –  and selection bias – inadvertent non-randomness in samples.

Observers and subjects sometimes, consciously or not, see what they want. We are gullible and tend to be self-deluding.

That brings us back to the problems on which Alucidate sets its sights. Algorithms can eliminate most random or variable error, and bring much more consistency. But systemic error is then the main source of problems or differential performance. And businesses are usually knee-deep in it, partly because the approaches which reduce variable error often increase systemic error in practice. There’s often a trade-off between dealing with the two kinds of error, and that trade-off may need to be set differently in different environments.
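
To make the two kinds of error concrete, here’s a minimal sketch with made-up instrument readings: precision is low scatter around your own average, accuracy is closeness of that average to the true value.

```python
# A minimal sketch of the two error types, using made-up instrument readings:
# precision is low scatter around your own average; accuracy is closeness of
# that average to the true value.
from statistics import mean, pstdev

true_value = 10.0

readings_a = [9.1, 11.2, 8.7, 10.9, 10.3, 9.8]     # scattered but centred on the truth
readings_b = [11.4, 11.5, 11.3, 11.6, 11.4, 11.5]  # tightly grouped but consistently high

for name, readings in (("A (imprecise, accurate)", readings_a),
                       ("B (precise, inaccurate)", readings_b)):
    systematic_error = mean(readings) - true_value   # bias
    random_error = pstdev(readings)                  # scatter
    print(f"{name}: systematic error {systematic_error:+.2f}, random error {random_error:.2f}")
```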

I  like most of Hubbard’s book, which I’ll come back to another time. It falls into the practical, observational school of quantification rather than the math department approach, as Herbert Simon would put it.

But one thing he doesn’t focus on enough is learning ability and iteration – the ability to change your model over time. If you shoot at the target and observe you hit slightly off center, you can adjust the next time you fire. Sensitivity to evidence and the ability to learn is the most important thing to watch in macro and market decision-making. In fact, the most interesting thing in the recent enthusiasm about big data is not the size of datasets or finding correlations. It’s the improved ability of computer algorithms to test and adjust models – Bayesian inversion. But that has limits and pitfalls as well.
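
As a bare-bones illustration of that “test and adjust the model” idea (not from Hubbard, and with invented numbers), here’s a Bayesian update over a few rival hypotheses about a marksman’s hit rate.

```python
# A bare-bones sketch of "test and adjust the model" via Bayes' rule. The rival
# hypotheses about a shooter's hit rate, and the observations, are invented.

hit_rates = [0.3, 0.5, 0.7]                    # three rival models of skill
belief = {p: 1 / 3 for p in hit_rates}         # start agnostic

observations = [1, 1, 0, 1, 1, 1, 0, 1]        # 1 = hit, 0 = miss

for hit in observations:
    # Weight each hypothesis by how well it explains the latest observation...
    belief = {p: w * (p if hit else 1 - p) for p, w in belief.items()}
    # ...then renormalise so the beliefs sum to one.
    total = sum(belief.values())
    belief = {p: w / total for p, w in belief.items()}

for p, w in belief.items():
    print(f"P(hit rate = {p}) = {w:.2f}")
```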

Two Kinds of Error (part 2)

Markets increasingly rely on quantitative techniques. When does formal analysis help make better decisions? In the last post I was talking about the difference between the errors produced by “analytic” and “intuitive” thinking in the context of the collapse of formal planning in corporate America. Those terms can be misleading, however, because they imply it is somehow all a matter of rational technique versus “gut feel.”

Here’s another diagram, from near the beginning of James Reason’s classic analysis, Human Error (an extremely important book which I’ll come back to another time). Two marksmen aim at a target, but their patterns of error are very different.

 

[Figure from James Reason, Human Error: two marksmen’s target patterns]

 

A is the winner, based on raw scores. He is roughly centered, but dispersed and sloppy. B is much more consistent but off target.

This shows the difference between variable error (A), on the one hand, and constant or systematic error (B) on the other. B is probably the better marksman even though he lost, says Reason, because his sights could be misaligned or there could be an additional factor throwing him off. B’s error is more predictable, and potentially more fixable. But fixing it depends on the extent to which the reasons for the error are understood.

What else could cause B to be off? (Reason doesn’t discuss this in that context.) In real-life decisions (and real-life war) the target is often moving, not static. That means errors like B’s are pervasive.

Let’s relate this back to one of the central problems for making decisions or relying on advice or expertise. Simple linear models make far better predictions than people in a vast number of situations. This is the Meehl problem, which we’ve known about for fifty years. In most cases, reducing someone’s own expertise to a few numbers in a linear equation will predict outcomes much better than the person him- or herself. Yes, reducing all your years of experience to three or four numerical variables and sticking them in a spreadsheet will mostly outperform your own best judgement. (It’s called ‘bootstrapping.’)
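
Here’s a sketch of what that looks like, with entirely hypothetical cues, cases, and ratings: fit a simple linear model to the judge’s own past judgments, then use the model in place of the judge.

```python
# A sketch of bootstrapping in that sense, with entirely hypothetical cues,
# cases, and ratings: fit a simple linear model to a judge's own past
# judgments, then use the model instead of the judge.
import numpy as np

# Past cases: three numerical cues per case, plus the judge's overall rating.
cues = np.array([
    [7, 3, 5],
    [4, 6, 2],
    [8, 5, 7],
    [3, 2, 4],
    [6, 7, 6],
    [5, 4, 3],
], dtype=float)
judge_ratings = np.array([6.5, 4.0, 7.5, 3.0, 6.8, 4.5])

# Least-squares fit of the judge's own policy: rating ≈ weights · cues + intercept.
X = np.hstack([cues, np.ones((len(cues), 1))])
weights, *_ = np.linalg.lstsq(X, judge_ratings, rcond=None)

new_case = np.array([6.0, 5.0, 4.0, 1.0])  # trailing 1.0 is the intercept term
print("Model-of-the-judge rating for a new case:", round(float(new_case @ weights), 2))
```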

In fact, the record of expert prediction in economics and politics – the areas markets care about – is little better than chimps throwing darts. This is the Tetlock problem, which is inescapable for any research firm or hedge fund since he published his book in 2005.  Why pay big bucks to hire chimps?

But the use of algorithms in markets and formal planning in corporations has also produced catastrophe. It isn’t just the massive failure of many models during the global financial crisis. The most rigorously quant-based hedge funds are still trailing the indices, and it seems the advantage quant techniques afforded is actually becoming a vulnerability as so many firms use the same kind of VaR models. So what’s the right answer?

Linear models perform better than people in many situations because they reduce or eliminate the variable error. Here’s how psychologist Jonathan Baron explains why simplistic models usually outperform the very person or judge they are based on, in a chapter on quantitative judgment in his classic text on decision-making, Thinking and Deciding:

Why does it happen? Basically, it happens because the judge cannot be consistent with his own policy .. He is unreliable, in that he is likely to judge the same case differently on different occasions (unless he recognizes the case the second time. As Goldberg (1970) puts it…. ‘If we could remove some of this human unreliability by eliminating the random error in his judgements, we should thereby increase the validity of the resulting predictions.’  (p406)

But algorithms and planning and formal methods find it much harder to deal with systematic error. This is why, traditionally, quantitative methods have been largely confined to lower and middle-management tasks like inventory or production scheduling or routine valuation.  Dealing with systematic error requires the ability to recognize and learn.

Formal optimization does extremely well in static, well-understood, repetitive situations. But once the time horizon lengthens, so that change becomes more likely, and the complexity of the situation increases, formal techniques can produce massive systematic errors. The kind that kill companies and careers.

What’s the upshot? It isn’t an argument against formal models. I’m not remotely opposed to quantitative techniques. But it is a very strong argument for looking at boundary conditions for the applicability of techniques to different problems and situations. It’s like the Tylenol test I proposed here: two pills cure a headache. Taking fifty pills at once will probably kill you.

It is also a very strong argument for looking carefully at perception, recognition and reaction to evidence as the core of any attempt to find errors and blind spots. It is essential to have a way to identify and control systematic errors as well as variable errors. Many companies try to have a human layer of judgment as a kind of check on the models for this reason. But that’s where all the deeper problems of decision-making like confirmation bias and culture rear their heads. The only real way to deal with the problem is to have an outside firm which looks for those specific problems.

You can’t seek alpha or outperformance by eliminating variable error any more. That’s been done, as we speak, by a million computers running algorithms. Markets are very good at that. The only way to get extra value is to look for systematic errors.

 

May 12, 2014 | Adaptation, Decisions, Human Error, Lens Model, Quants and Models, Risk Management

Two kinds of error (part 1)

How do we explain why rigorous, formal processes can be very successful in some cases, and disastrous in others? I was asking this in reference to Henry Mintzberg’s research on the disastrous performance of formal planning. Mintzberg cites earlier research on different kinds of errors in this chart (from Mintzberg, 1994, p327).

 
[Figure: distribution of errors from analytic versus intuitive problem solving, from Mintzberg, 1994, p. 327]

 

.. the analytic approach to problem solving produced the precise answer more often, but its distribution of errors was quite large. Intuition, in contrast, was less frequently precise but more consistently close. In other words, informally, people get certain kinds of problems more or less right, while formally , their errors, however infrequent, can be bizarre.

This is important, because it lies underneath a similar distinction that can be found in many other places. And because the field of decision-making research is so fragmented, the similar answers usually stand alone and isolated.

Consider, for example, how this relates to Nassim Nicholas Taleb’s distinction between Fragile and Antifragile approaches and trading strategies. Think of exposure, he says, and the size and risk of the errors you may make.

A lot depends on whether you want to rigorously eliminate small errors, or watch out for really big errors.
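
A rough sketch of that choice, with invented frequencies and sizes: two error profiles can have similar average costs, while only one of them occasionally produces the error that kills companies and careers.

```python
# A rough sketch of that choice, with invented numbers: two error profiles can
# have similar average costs, yet one occasionally produces the kind of error
# that kills companies and careers.
import random

def small_frequent_error():
    return -abs(random.gauss(0, 1))                  # errs constantly, but each error is small

def rare_bizarre_error():
    return -50.0 if random.random() < 0.02 else 0.0  # usually fine, occasionally catastrophic

random.seed(7)
trials = 10_000
for name, err in (("small & frequent", small_frequent_error),
                  ("rare & bizarre  ", rare_bizarre_error)):
    losses = [err() for _ in range(trials)]
    print(f"{name}: average loss {sum(losses) / trials:6.2f}, worst loss {min(losses):7.2f}")
```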