Reading Kluge: The Haphazard Construction of the Human Mind (Houghton Mifflin, 2008) in full

No matter what we humans think about, we tend to pay more attention to stuff that fits in with our beliefs than stuff that might challenge them. Psychologists call this "confirmation bias." When we have embraced a theory, large or small, we tend to be better at noticing evidence that supports it than evidence that might run counter to it.

Consider the quasi-astrological description that opened this chapter. A person who wants to believe in astrology might notice the parts that seem true ("you have a need for other people to like and admire you") and ignore the parts that don't (maybe from the outside you don't really look so disciplined after all). A person who wishes to believe in horoscopes may notice the one time that their reading seems dead-on and ignore (or rationalize) the thousands of times when their horoscopes are worded so ambiguously that they could mean anything. That's confirmation bias.

Take, for example, an early experiment conducted by the British psychologist Peter Wason. Wason presented his subjects with a triplet of distinct numbers (for example, 2-4-6) and asked them to guess what rule might have generated it. Subjects were then asked to create new sequences and received feedback as to whether their new sequences conformed to the rule. A typical subject might guess "4-6-8," be told yes, and proceed to try "8-10-12" and again be told yes; the subject might then conclude that the rule was something like "sequences of three even numbers with two added each time." What most people failed to do, however, was consider potentially disconfirming evidence. For example, was 1-3-5 or 1-3-4 a valid sequence? Few subjects bothered to ask; as a consequence, hardly anybody guessed that the actual rule was simply "any sequence of three ascending numbers." Put more generally, people all too often look for cases that confirm their theories rather than consider whether some alternative principle might work better.
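The logic of Wason's task can be made concrete in a few lines of code. The sketch below is a hypothetical illustration (the function names and test triples are mine, not Wason's protocol): on every triple a confirmation-seeking subject thinks to try, the hidden rule and the subject's narrower hypothesis give identical "yes" feedback, so those tests can never tell the two rules apart; only triples the hypothesis predicts are *invalid* can expose the difference.

```python
def actual_rule(triple):
    """The experimenter's hidden rule: any three ascending numbers."""
    a, b, c = triple
    return a < b < c

def hypothesized_rule(triple):
    """A typical subject's guess: even numbers increasing by two."""
    a, b, c = triple
    return all(n % 2 == 0 for n in triple) and b - a == 2 and c - b == 2

# Confirmation-seeking strategy: test only triples the hypothesis already
# predicts are valid. Feedback is "yes" under BOTH rules, so it is useless
# for distinguishing them.
confirming_tests = [(4, 6, 8), (8, 10, 12), (20, 22, 24)]
for t in confirming_tests:
    assert actual_rule(t) and hypothesized_rule(t)

# Disconfirmation-seeking strategy: also test triples the hypothesis
# predicts are invalid. Here the two rules come apart, revealing that
# the subject's hypothesis is too narrow.
for t in [(1, 3, 5), (1, 3, 4), (6, 4, 2)]:
    print(t, "actual:", actual_rule(t), "hypothesis:", hypothesized_rule(t))
```

Running the disconfirming tests shows 1-3-5 and 1-3-4 are accepted by the actual rule but rejected by the hypothesis, which is exactly the information most of Wason's subjects never sought.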

In another, later study, less benign, two different groups of people saw a videotape of a child taking an academic test. One group of viewers was led to believe that the child came from a socioeconomically privileged background, the other to believe that the child came from a socioeconomically impoverished background. Those who thought the child was wealthier reported that the child was doing well and performing above grade level; the other group guessed that the child was performing below grade level.

Confirmation bias might be an inevitable consequence of contextually driven memory. Because we retrieve memory not by systematically searching for all relevant data (as computers do) but by finding things that match, we can't help but be better at noticing things that confirm the notions we begin with. When you think about the O. J. Simpson murder trial, if you were predisposed to think he was guilty, you're likely to find it easier to remember evidence that pointed toward his guilt (his motive, the DNA evidence, the lack of other plausible suspects) rather than evidence that cast doubt on it (the shoddy police work and that infamous glove that didn't fit).

To consider something well, of course, is to evaluate both sides of an argument, but unless we go the extra mile of deliberately forcing ourselves to consider alternatives — not something that comes naturally — we are more prone to recall evidence consistent with an accepted proposition than evidence inconsistent with it. And since we most clearly remember information that seems consistent with our beliefs, it becomes very hard to let those beliefs go, even when they are erroneous.

The same, of course, goes for scientists. The aim of science is to take a balanced approach to evidence, but scientists are human beings, and human beings can't help but notice evidence that confirms their own theories. Read any science texts from the past and you will stumble on not only geniuses, but also people who in hindsight seem like crackpots — flat-earthers, alchemists, and so forth. History is not kind to scientists who believed in such fictions, but a realist might recognize that in a species so dependent on memory driven by context, such slip-ups are always a risk.
