With nothing to rely on except unconvincing anecdotes and inconclusive trials, the arguments for and against homeopathy were deadlocked. Then, in 1997, an international research team took a dramatic step towards settling the homeopathy debate. They were led by Klaus Linde, a senior figure at the Munich-based Centre for Complementary Medicine Research. He and his colleagues decided to examine the considerable body of research into homeopathy in order to develop an over-arching conclusion that took into consideration each and every trial. This is known as a meta-analysis.
Although the term meta-analysis might be unfamiliar to many readers, it is a concept that crops up in a range of familiar situations where it is important to make sense of lots of data. In the run-up to a general election, for instance, several newspapers might publish opinion polls with conflicting results. In this situation it would be sensible to combine all the data from all the polls, which ought to lead to a more reliable conclusion than any single poll, because the meta-poll (i.e. poll of polls) reflects the complete data from a much larger group of voters.
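To make the pooling idea concrete, here is a minimal sketch in Python with invented poll figures (the sample sizes and percentages are illustrative, not real polling data): a poll of polls simply combines the raw counts, so the pooled estimate rests on far more voters than any single survey.

```python
# A hypothetical "poll of polls": combine several surveys by pooling raw counts.
polls = [
    # (sample size, share supporting Party X) - invented, illustrative values
    (1000, 0.52),
    (1500, 0.48),
    (800,  0.55),
]

total_voters = sum(n for n, _ in polls)
total_supporters = sum(n * share for n, share in polls)

# The pooled estimate is based on 3,300 voters rather than any single
# survey's 800-1,500, so random fluctuations matter less.
print(f"Poll-of-polls estimate: {total_supporters / total_voters:.1%} "
      f"from {total_voters} voters")
```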
The power of meta-analysis becomes obvious if we examine some hypothetical sets of data concerning astrology. If your astrological sign determined your character, then an astrologer should be able to identify a person’s star sign after an interview. Imagine that a series of five experiments is conducted around the world by rival research groups. In each case, the same astrologer is simply asked to identify correctly a person’s star sign based on a five-minute conversation. The experiments range in size from 20 to 290 participants, but the protocol is the same in each case. Chance alone would give rise to a success rate of one correct identification (or hit) in twelve, so the astrologer would have to do significantly better than this to give credence to the notion of astrology. The five experiments lead to the following success rates:
On its own, the third experiment seems to suggest that astrology works, because a hit rate equivalent to 5 out of 20 is much higher than chance would predict. Indeed, the majority of experiments (three out of five) imply a higher than expected hit rate, so one way to interpret these sets of data would be to conclude that, in general, the experiments support astrology. However, a meta-analysis would come to a different conclusion.
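A rough calculation, sketched below in Python and not taken from the text, hints at why simply counting "positive" experiments is misleading. Using the smallest experiment size mentioned above (20 participants) and assuming the astrologer has no ability at all, luck alone pushes the observed hit rate above the 1-in-12 chance baseline in roughly half of all such small experiments.

```python
# A minimal sketch: how often does a 20-person experiment exceed the
# 1-in-12 chance rate purely by luck, if the astrologer has no real ability?
from math import comb

def binomial_tail(n: int, p: float, k_min: int) -> float:
    """Probability of at least k_min hits in n independent trials."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

n = 20              # smallest experiment size mentioned in the text
p_chance = 1 / 12   # baseline hit rate expected by chance

expected_hits = n * p_chance                  # about 1.7 hits expected by chance
# Two or more hits in 20 attempts (a rate of 10%) already exceeds 1 in 12.
above_chance = binomial_tail(n, p_chance, 2)  # about 0.51

print(f"Expected hits by chance: {expected_hits:.2f}")
print(f"P(observed rate exceeds chance): {above_chance:.2f}")
```

In other words, a handful of small experiments "favouring" astrology is exactly what chance would produce anyway, which is why the pooled analysis described next matters.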
The meta-analysis would start by pointing out that the number of attempts made by the astrologer in any one of the experiments was relatively small, and therefore the result of any single experiment could be explained by mere chance. In other words, the result of any one of these experiments is effectively meaningless. Next, the researcher doing the meta-analysis would combine all the data from the individual experiments as though they were part of one giant experiment. This tells us that the astrologer had 49 hits out of 600 in total, which is equivalent to a hit rate of 0.98 out of 12, which is very close to 1 out of 12, the hit rate expected by chance alone. The conclusion of this hypothetical meta-analysis would be that the astrologer has demonstrated no special ability to determine a person’s star sign based on their personality. This conclusion is far more reliable than anything that could have been deduced solely from any one of the small-scale experiments. In scientific terms: a meta-analysis is said to minimize random and selection biases.
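The pooling step itself amounts to little more than the following arithmetic. This sketch uses only the totals quoted above (49 hits in 600 attempts across the five experiments) and expresses the pooled rate per twelve attempts for comparison with the chance baseline.

```python
# Pooling the five hypothetical astrology experiments, using the totals
# given in the text: 49 hits out of 600 attempts in all.
total_hits = 49
total_attempts = 600
p_chance = 1 / 12

pooled_rate = total_hits / total_attempts   # ~0.0817
hits_per_twelve = pooled_rate * 12          # ~0.98 hits per 12 attempts
expected_by_chance = total_attempts * p_chance  # 50 hits expected by chance

print(f"Pooled hit rate: {pooled_rate:.4f}")
print(f"Equivalent to {hits_per_twelve:.2f} hits per 12 attempts "
      f"(chance predicts 1 in 12)")
print(f"Observed {total_hits} hits vs {expected_by_chance:.0f} expected by chance")
```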