Instead, we routinely take whatever memories are most recent or most easily remembered to be much more important than any other data. Consider, for example, an experience I had recently, driving across the country and wondering at what time I'd arrive at the next motel. When traffic was moving well, I'd think to myself, "Wow, I'm driving at 80 miles per hour on the interstate; I'll be there in an hour." When traffic slowed due to construction, I'd say, "Oh no, it'll take me two hours." What I was almost comically unable to do was to take an average across the two data points at the same time, and say, "Sometimes the traffic moves well, sometimes it moves poorly. I anticipate a mixture of good and bad, so I bet it will take an hour and a half."
Some of the world's most mundane but common interpersonal friction flows directly from the same failure to reflect on how well our samples represent reality. When we squabble with our spouse or our roommate about whose turn it is to wash the dishes, we probably (without realizing it) are better able to remember the previous times when we ourselves took care of them (as compared to the times when our roommate or spouse did); after all, our memory is organized to focus primarily on our own experience. And we rarely compensate for that imbalance — so we come to believe we've done more work overall and perhaps end up in a self-righteous huff. Studies show that in virtually any collaborative enterprise, from taking care of a household to writing academic papers with colleagues, the sum of each individual's perceived contribution exceeds the total amount of work done. We cannot remember what other people did as well as we recall what we did ourselves — which leaves everybody (even shirkers!) feeling that others have taken advantage of them. Realizing the limits of our own data sampling might make us all a lot more generous.
Mental contamination is so potent that even entirely irrelevant information can lead us by the nose. In one pioneering experiment, the psychologists Amos Tversky and Daniel Kahneman spun a wheel of fortune, marked with the numbers 1-100, and then asked their subjects a question that had nothing to do with the outcome of spinning the wheel: what percentage of African countries are in the United Nations? Most participants didn't know for sure, so they had to estimate — fair enough. But their estimates were considerably affected by the arbitrary number the wheel had landed on: people who saw a high number gave higher estimates than people who saw a low one.
This phenomenon, which has come to be known as "anchoring and adjustment," occurs again and again. Try this one: add 400 to the last three digits of your cell phone number. When you're done, answer the following question: in what year did Attila the Hun's rampage through Europe finally come to an end? The average guess of people whose phone number, plus 400, yielded a sum less than 600 was A.D. 629, whereas the average guess of people whose sum came in between 1,200 and 1,399 was A.D. 979, 350 years later.*

*Nobody's ever been able to tell me whether the original question was meant to ask how many of the countries in Africa were in the UN, or how many of the countries in the UN were in Africa. But in a way, it doesn't matter: anchoring is strong enough to apply even when we don't know precisely what the question is.