Suppose we concoct a thousand different urban legends—new ones, not yet circulating on the World Wide Web—and carefully plant them in ten thousand different hearers, one to a customer, each story going to ten hearers. We try to give these meme candidates "radioactive tags" by including telltale details in each planted version, along the lines of "Did you hear about the Brazilian taxi-driver who…" And suppose we also spend lots of money tracking these trajectories, by hiring armies of private detectives to eavesdrop on our initial subjects, tapping their phones, and so forth (another virtue of thought experiments—you don't have to clear them with your university's institutional review board or the police!), so that we get quite a lot of good data about which stories evaporate after a single telling, which actually get transmitted, and in what words. The Sperberians' dream result would be that we came up with…zilch! Almost all our radioactive tags would disappear, and all that would remain of the thousand different stories would be seven (say) stories that kept getting reinvented, time and again, because these seven stories were the only ones that tickled all the innate psychological constraints. When we looked at the lineages, we would see that, say, a hundred initially very different stories had all converged eventually on a single tale, the closest "attractor" in urban-legend space. Sometimes a story would be gradually modified in the direction of the favored attractor, but if the hearer already knew that tale, a new story might end abruptly in a cul-de-sac: "Hey, interesting. That reminds me—have you heard about the guy who…?"
If this were the result, we would see that all the content in the urban legends that prevailed over time was already implicit in the psychology of the hearers and tellers, and virtually none was replicated faithfully from the initial stories. Here is Atran’s way of expressing the point:
In genetic evolution there is only "weak selection" in the sense that there are no strong determinants of directional change. As a result, the cumulative effects of small mutations (on the order of one in a million) can lead to stable directional change. By contrast, in cultural evolution, there is very "strong selection" in the sense that modularized expectations can powerfully constrain transmitted information into certain channels and not others. As a result, despite frequent "error," "noise," and "mutation" in socially transmitted information, the messages tend to be steered (snapped back or launched forward) into cognitively stable paths. Cognitive modules, not memes themselves, enable the cultural canalization of beliefs and practices. [2002, p. 248]
It would be almost as if we each have a CD in our brains with a few (dozen? hundred?) urban legends recorded on it; whenever we hear a close approximation to one of these urban legends, this triggers the CD to go to that track and play it—"triggered production," not imitation of what we've heard. (This is suggested by Sperber's "theoretical example" of the sound recorders [2000, p. 169].) That extreme null result is unlikely, of course, and if some content did get replicated from host to host, those who were infected by it would set up a new constraint on the fate of whatever urban legends they heard next. Cultural canalization can be due as much to prior cultural exposure as to one's underlying cognitive modules. Perhaps, if you haven't heard the one about the Chinese midget, you replicate the one about the boy with the pet gerbil and pass it along more or less intact, and if you have, you tend to merge them into something that eventually emerges as the one about the policewoman and the gerbil, and so forth. To investigate the interaction between contents culturally transmitted and constraints that are shared independently of culture, you really have to track the replication of memes—as best you can. Nobody said it was a practical research program in most instances.
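The attractor dynamics described above can be made vivid with a toy simulation. The sketch below is my illustration, not anything from Sperber or Atran: stories are points on a line, the few "cognitively stable" tales are fixed attractor positions, and each retelling adds transmission noise ("error," "mutation") and then pulls the result partway toward the nearest attractor ("snapped back"). The specific numbers (three attractors, pull strength, noise level) are arbitrary assumptions chosen only to show the qualitative effect: the planted content disappears and every lineage ends up at one of a handful of attractors.

```python
import random

random.seed(42)

ATTRACTORS = [2.0, 5.0, 8.0]  # the few innately favored tales (arbitrary positions)
PULL = 0.5                    # strength of the hearer's modularized expectations
NOISE = 0.3                   # transmission "error" and "mutation" per retelling

def retell(story):
    """One transmission step: mutate the story, then snap it toward
    the nearest cognitive attractor."""
    mutated = story + random.gauss(0, NOISE)
    nearest = min(ATTRACTORS, key=lambda a: abs(a - mutated))
    return mutated + PULL * (nearest - mutated)

def transmit(story, generations=30):
    """Pass a story down a chain of hearers."""
    for _ in range(generations):
        story = retell(story)
    return story

# Plant 1000 initially diverse legends and see where their lineages end up.
planted = [random.uniform(0.0, 10.0) for _ in range(1000)]
final = [transmit(s) for s in planted]

# Which attractors survive, and did every lineage converge to one of them?
survivors = {min(ATTRACTORS, key=lambda a: abs(a - f)) for f in final}
converged = all(min(abs(a - f) for a in ATTRACTORS) < 1.0 for f in final)
print(sorted(survivors), converged)
```

Because the pull halves the distance to an attractor at every retelling while the noise stays bounded, each lineage settles into a narrow band around one attractor: the "radioactive tags" of the planted versions are wiped out, and only the attractor-shaped tales remain, which is exactly the Sperberian dream result the thought experiment imagines.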