The moment I discovered that the offending participant was drunk, I realized that I should have excluded his data in the first place, given that his decision-making ability was clearly compromised. So I threw out his data, and instantly the results looked beautiful—showing exactly what I expected them to show. But a few days later, I began thinking about the process by which I had decided to eliminate the drunk guy. I asked myself: what would have happened if this fellow had been in the other condition—the one I expected to do worse? If that had been the case, I probably would not have noticed his individual responses to start with. And even if I had, I probably would not have considered excluding his data.
In the aftermath of the experiment, I could easily have told myself a story that would excuse me from using the drunk guy’s data. But what if he hadn’t been drunk? What if he had some other kind of impairment that had nothing to do with drinking? Would I have invented another excuse or logical argument to justify excluding his data? As we will see in chapter 7, “Creativity and Dishonesty,” creativity can help us justify following our selfish motives while still thinking of ourselves as honest people.
I decided to do two things. First, I reran the experiment to double-check the results, and it again worked out beautifully. Second, I decided it was okay to create standards for excluding participants from an experiment (that is, we wouldn’t test drunks or people who couldn’t understand the instructions). But the rules for exclusion have to be made up front, before the experiment takes place, and definitely not after looking at the data.
What did I learn? When I was deciding to exclude the drunk man’s data, I honestly believed I was doing so in the name of science—as if I were heroically fighting to clear the data so that the truth could emerge. It didn’t occur to me that I might be doing it for my own self-interest, but I clearly had another motivation: to find the results I was expecting. More generally, I learned—again—about the importance of establishing rules that can safeguard ourselves from ourselves.
Disclosure: A Panacea?
So what is the best way to deal with conflicts of interest? For most people, “full disclosure” springs to mind. Following the same logic as “sunshine policies,” the basic assumption underlying disclosure is that as long as people publicly declare exactly what they are doing, all will be well. If professionals were simply to make their incentives clear and known to their clients, so the thinking goes, the clients could then decide for themselves how much to rely on that (biased) advice and make more informed decisions.
If full disclosure were the rule of the land, doctors would inform their patients when they own the equipment required for the treatments they recommend, or when they are paid to consult for the manufacturer of the drugs they are about to prescribe. Financial advisers would inform their clients about all the different fees, payments, and commissions they get from various vendors and investment houses. With that information in hand, consumers should be able to discount the opinions of those professionals appropriately and make better decisions. In theory, disclosure seems to be a fantastic solution; it both exonerates the professionals who acknowledge their conflicts of interest and provides their clients with a better sense of where their information is coming from.
HOWEVER, IT TURNS out that disclosure is not always an effective cure for conflicts of interest. In fact, disclosure can sometimes make things worse. To explain how, allow me to run you through a study conducted by Daylian Cain (a professor at Yale University), George Loewenstein (a professor at Carnegie Mellon University), and Don Moore (a professor at the University of California, Berkeley). In this experiment, participants played a game in one of two roles. (By the way, what researchers call a “game” is not what any reasonable kid would consider a game.) Some of the participants played the role of estimators: their task was to guess the total amount of money in a large jar full of loose change as accurately as possible. These players were paid according to how close their guesses were to the real value of the money in the jar: the closer their estimates, the more money they received, regardless of whether they missed by overestimating or underestimating the true value.