The regulatory agency example is a case where the institutional incentive structure has to compete with an outside incentive structure that is more attractive financially. Incentive structures can have problems in themselves, aside from outside competition. The mere process of formalizing what is to be rewarded presents many complexities and pitfalls. Most problems, decisions, and performances are multidimensional, but somehow the results have to be reduced to a few key indicators which are to be institutionally rewarded or penalized: attendance records, test scores, output per unit of time, seniority, etc. The need to reduce the indicators to a manageable few is based not only on the need to conserve the time (and sanity) of those who assign rewards and penalties, but also on the need to provide those subject to these incentives with some objective indication of what their performance is expected to be and how it will be judged. But, almost by definition, key indicators can never tell the whole story. This affects not merely the justice or injustice of the reward, but also the very nature of the behavior that occurs within the given structure of incentives. For example, one index of military success is the number of enemy killed. Clearly, it is not the
Key indicators require some specified time span during which they are to be tabulated for purposes of reward or penalty. The time span can vary enormously according to the process and the indicator. It can be output per hour, the annual rate of inflation, weekly television program ratings, or a bicentennial assessment of a nation. But whatever the span chosen, it must involve some simplification, or even oversimplification, of reality. Time is continuous, and breaking it up into discrete units for purposes of assessment and reward opens the possibility that behavior will be tailored to the time period in question, without regard to its longer range implications. Desperate efforts just before a deadline may be an inefficient expedient which reduces the longer run effectiveness of men, machines, and organizations. The Soviets coined the term “storming” to describe such behavior, which has long been common in Soviet factories trying to achieve their monthly quotas. Similar behavior occurred on an annual basis in Soviet farms trying to maximize the current year’s harvest, even at the cost of neglecting the maintenance of equipment and structures, and at the cost of depleting the soil by not allowing it to lie fallow to recover its long run fertility. Slave overseers in the antebellum South similarly overworked both men and the soil in the interest of current crops, at the cost of reduced production years later — when the overseer would probably be working somewhere else. In short, similar structures of incentives produced similar results, even in socioeconomic systems with widely differing histories, ideologies, and rhetoric.
The broad sweep of knowledge needed for decision making is brought to bear through various systems of coordination of the scattered fragmentary information possessed by individuals and organizations. This very general sketch of the principles, mechanisms, and pitfalls involved is a prelude to a fuller consideration of the use of knowledge in decision-making processes in the economic, legal, and political spheres, each having its own authentication processes and its own feedback mechanisms to modify decisions already made. Much discussion of the pros and cons of various “issues” overlooks the crucial fact that the most basic decision is