Pretty much all of us are prone to “bias blindness.” We can easily spot prejudice in others, but we’re oblivious to our own, insisting on our impartiality in spite of any and all evidence to the contrary.
Newly published research suggests this problem is actually worse than we thought. It finds that even when people use an evaluation strategy they concede is biased, they continue to insist their judgments are objective.
“Recognizing one’s bias is a critical first step in trying to correct for it,” writes a research team led by Emily Pronin and Katherine Hansen of Princeton University. “These experiments make clear how difficult that first step can be to reach.”
“Even when people acknowledge that what they are about to do is biased, they still are inclined to see their resulting decisions as objective.”
Although their findings have clear implications regarding political opinions, the researchers avoided such fraught topics and focused on art. In two experiments, participants (74 Princeton undergraduates in the first, 85 adults recruited online in the second) looked at a series of 80 paintings and rated the artistic merit of each on a one-to-nine scale.
Half of the participants were instructed to note the name of the artist (which was flashed onto the screen when they pushed a specific button) before making their evaluation. The others evaluated the works without knowing who painted them.
When that button was pushed, half of the paintings were identified as the product of a famous artist (usually the one who actually created the work). The others were identified as a work by an unknown artist; researchers “consulted a phone book to obtain names to assign to those paintings.”
The students who saw these names conceded that the format lent itself to bias. But they “did not view their own evaluations as any less objective than did participants in the explicitly objective condition,” the researchers note.
Not surprisingly, their evaluations were, in fact, biased by the information: They rated the merit of paintings attributed to great artists higher than those works purportedly created by unknowns. In contrast, those who did not see the alleged names of the artists “rated the artistic merit of the two groups of paintings the same,” the researchers write.
For the online experiment, the researchers added some extra precautions, explicitly pointing out that looking at the artists’ names could create bias, “in that paintings by famous painters could be rated more highly, regardless of their actual quality.”
Nevertheless, the results were identical to the first experiment. Indeed, in spite of the warning, those who saw the painters’ names “became yet more convinced of their objectivity.”
The results add to the evidence that “people have difficulty recognizing their own biases,” the researchers conclude. “It shows that even when people acknowledge that what they are about to do is biased, they still are inclined to see their resulting decisions as objective.”
This false sense of fairness and impartiality can lead to all sorts of problems. To cite just one, the researchers note that a juror who is certain he or she won’t take into account testimony ruled inadmissible may, in fact, be swayed by it.
So what’s the answer here? Pronin and her colleagues argue that the best strategy is presenting information in a way that prevents bias from arising in the first place. They point to the tradition of orchestras having potential members audition from behind a screen, so that they are judged solely on their talent, as opposed to their race, gender, age, or any other extraneous factors.
The researchers note that such strategies are effective, but we’re reluctant to use them because we have such strong confidence in our own objectivity. Their work provides new evidence that such faith is, sadly, unwarranted.