Kluge: The Haphazard Construction of the Human Mind (Houghton Mifflin, 2008)

What would be the rational thing to do? According to the theory of rational choice, you should calculate your "expected utility," or expected gain, essentially averaging the amount you would win across all the possible outcomes, weighted by their probability. An 11 percent chance at $1 million works out to an expected gain of $110,000; 10 percent at $5 million works out to an expected gain of $500,000, clearly the better choice. So far, so good. But when you apply the same logic to the first set of choices, you discover that people behave far less rationally. The expected gain in the lottery that is split 89 percent/10 percent/1 percent is $1,390,000 (89 percent of $1 million plus 10 percent of $5 million plus 1 percent of $0), compared to a mere million for the sure thing. Yet nearly everyone goes for the million bucks, leaving close to half a million dollars on the table. Pure insanity from the perspective of "rational choice."
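
The arithmetic behind these expected values is easy to check directly. The short Python sketch below simply restates the figures from the text; the helper function and the probability lists are illustrative, not anything from the book:

```python
def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# Second pair of choices: 11% at $1 million vs. 10% at $5 million.
print(expected_value([(0.11, 1_000_000)]))   # expected gain: $110,000
print(expected_value([(0.10, 5_000_000)]))   # expected gain: $500,000

# First pair: a sure $1 million vs. the 89%/10%/1% split.
print(expected_value([(1.00, 1_000_000)]))   # expected gain: $1,000,000
print(expected_value([(0.89, 1_000_000),
                      (0.10, 5_000_000),
                      (0.01, 0)]))           # expected gain: $1,390,000
```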

Another experiment offered undergraduates a choice between two raffle tickets: one with a 1-in-100 chance to win a $500 voucher toward a trip to Paris, the other with a 1-in-100 chance to win a $500 voucher toward college tuition. Most people, in this case, prefer Paris. No big problem there; if Paris is more appealing than the bursar's office, so be it. But when the odds increase from 1 in 100 to 99 in 100, most people's preferences reverse; given the near certainty of winning, most students suddenly go for the tuition voucher rather than the trip — sheer lunacy, if they'd really rather go to Paris.

To take an entirely different sort of illustration, consider the simple question I posed in the opening chapter: would you drive across town to save $25 on a $100 microwave? Most people would say yes, but hardly anybody would drive across town to save the same $25 on a $1,000 television. From the perspective of an economist, this sort of thinking too is irrational. Whether the drive is worth it should depend on just two things: the value of your time and the cost of the gas, nothing else. Either the value of your time and gas is less than $25, in which case you should make the drive, or your time and gas are worth more than $25, in which case you shouldn't make the drive — end of story. Since the labor to drive across town is the same in both cases and the monetary amount is the same, there's no rational reason why the drive would make sense in one instance and not the other.
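
The economist's rule boils down to a single comparison in which the item's price never appears. Here is a minimal sketch of that comparison, with a hypothetical $15 standing in for the combined cost of time and gas:

```python
def worth_the_drive(savings, trip_cost):
    """Economist's rule: make the trip only if the saving exceeds the cost of time and gas."""
    return savings > trip_cost

# Same $25 saving and the same hypothetical $15 trip cost in both cases,
# so the rational answer is identical for the microwave and the television.
print(worth_the_drive(savings=25, trip_cost=15))  # True (the $100 microwave)
print(worth_the_drive(savings=25, trip_cost=15))  # True (the $1,000 television, same inputs)
```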

On the other hand, to anyone who hasn't taken a class in economics, saving $25 on $100 seems like a good deal ("I saved 25 percent!"), whereas saving $25 on $1,000 appears to be a stupid waste of time ("You drove all the way across town to get 2.5 percent off? You must have nothing better to do"). In the clear-eyed arithmetic of the economist, a dollar is a dollar is a dollar, but most ordinary people can't help but think about money in a somewhat less rational way: not in absolute terms, but in relative terms.

What leads us to think about money in (less rational) relative terms rather than (more rational) absolute terms?

To start with, humans didn't evolve to think about numbers, much less money, at all. Neither money nor numerical systems are omnipresent. Some cultures trade only by means of barter, and some have simple counting systems with only a few numerical terms, such as one, two, many. Clearly, both counting systems and money are cultural inventions. On the other hand, all vertebrate animals are built with what some psychologists call an "approximate system" for numbers, such that they can distinguish more from less. And that system in turn has the peculiar property of being "nonlinear": the difference between 1 and 2 subjectively seems greater than the difference between 101 and 102. Much of the brain is built on this principle, known as Weber's law. Thus, a 150-watt light bulb seems only a bit brighter than a 100-watt bulb, whereas a 100-watt bulb seems much brighter than a 50-watt bulb.
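
Weber's law is usually stated as the rule that a just-noticeable difference is a constant fraction of the baseline, which amounts to saying that subjective magnitude grows roughly with the logarithm of physical magnitude (the Weber–Fechner formulation). The sketch below uses that standard logarithmic model, not anything specific to the book, to compare the perceived size of the jumps mentioned above:

```python
import math

def perceived_change(baseline, increment):
    """Weber-Fechner style estimate: a change feels as big as its ratio to the baseline."""
    return math.log((baseline + increment) / baseline)

print(perceived_change(1, 1))      # 1 -> 2: about 0.69, a large subjective jump
print(perceived_change(101, 1))    # 101 -> 102: about 0.01, barely noticeable
print(perceived_change(50, 50))    # 50 W -> 100 W bulb: about 0.69
print(perceived_change(100, 50))   # 100 W -> 150 W bulb: about 0.41, a smaller-feeling jump
```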

In some domains, following Weber's law makes a certain amount of sense: an extra 2 kilos of wheat in the storehouse, relative to a baseline of 100 kilos, isn't going to matter if everything beyond the first few kilos ultimately spoils; what really matters is the difference between starving and not starving. Of course, money doesn't rot (except in times of hyperinflation), but our brain didn't evolve to cope with money; it evolved to cope with food.
