How partisans see facts through different eyes
Those who want to restrict travel from Muslim countries or sales of assault weapons use one rationale to buttress their arguments and a different one to dismiss their opponents’, according to new research from CU Boulder
In our politically polarized country, we often hear the same refrain: If people on both sides of the aisle could simply look at the same facts, they’d be able to see eye to eye, have measured discussions and enact reasonable laws and policies.
But new research from the University of Colorado Boulder, published in March in the journal Cognition, suggests that when people with differing political views are provided with the same statistics—and they believe that those facts are accurate—they prioritize the information differently, based on their existing opinions.
The findings offer one explanation for the sharply diverging opinions people hold on polarizing policies: even when we agree on the facts, we weigh them differently.
“We often assume when dealing with the partisan divide that if we could give everyone the same information and get them to believe in the accuracy of that information, we would reduce partisan conflict,” said Leaf Van Boven, a CU Boulder professor of psychology and neuroscience. “But what this research shows is that even when you give people the same information, they can have very different partisan reactions to that underlying information.”
Van Boven and a team of co-authors set out to study our reasoning using “conditional probabilities,” or the likelihood that something will occur based on one or more conditions having occurred.
More specifically, they looked at what statistics people considered to be the most important when contemplating policies that restrict broad categories of people or actions to lower the risk of rare events, such as a travel ban for immigrants from majority-Muslim countries to reduce terrorist attacks and a ban on the sale of assault weapons to curb mass shootings.
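For readers who want the mechanics: a conditional probability is simply the share of a subgroup. A minimal sketch in Python, with counts invented purely to illustrate the definition (none of these numbers come from the study):

```python
# Conditional probability from raw counts: P(A | B) = count(A and B) / count(B).
# All counts here are invented solely to illustrate the definition.
count_b = 1_000        # cases where condition B holds
count_a_and_b = 30     # cases where both A and B hold

p_a_given_b = count_a_and_b / count_b
print(f"P(A | B) = {p_a_given_b:.3f}")  # 0.030
```

The key point for what follows is that P(A given B) and P(B given A) are computed over different groups, so they can differ enormously.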
The researchers were inspired, in part, by how politicians and pundits used conditional probabilities to defend restrictive policies. After the Sept. 11 terrorist attacks, for example, conservative commentator Ann Coulter advocated for the expulsion of Muslim immigrants from the country, writing that “all terrorists are Muslims.”
“We’re particularly interested in these policies involving rare events because policy makers often use conditional probabilities to explain why a policy is useful, and there’s a lot of research suggesting that people actually have a tough time thinking about conditional probabilities,” said Jairo Ramos, a CU Boulder graduate student in social psychology and one of the study’s co-authors.
When evaluating the effectiveness of a policy intended to reduce terrorism, for instance, it’s more relevant to consider the vanishingly small fraction of Muslim immigrants who commit terrorist attacks, rather than the fraction of immigrant terrorists who come from Muslim countries, the researchers write.
In essence, because the percentage of Muslim immigrants who are terrorists is extremely small, banning all Muslims from entering the country would reduce an extremely small threat to a somewhat smaller threat, the researchers explain. And yet, many people are motivated by statements like Coulter’s: that the proportion of immigrant terrorist attacks committed by Muslims is relatively high.
The researchers used a less-polarizing example to clarify this point: professional basketball players. While a majority of NBA players are African American, only a small fraction of African American males play in the NBA.
“You would never try to look for potential NBA recruits by starting with all African American males—that would be an incredibly wasteful strategy,” Van Boven said. “But it’s really the same thing we do when we first look at Muslim immigrants.”
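As a rough numerical sketch of that asymmetry (the figures below are illustrative orders of magnitude, not data from the study):

```python
# Illustrative, rounded figures (hypothetical, not from the study):
nba_players = 450                 # roughly 30 teams x 15 roster spots
black_nba_players = 330           # "a majority of NBA players are African American"
black_males_us = 20_000_000       # rough order of magnitude

# P(African American | NBA player): large
p_black_given_nba = black_nba_players / nba_players
# P(NBA player | African American male): vanishingly small
p_nba_given_black = black_nba_players / black_males_us

print(f"P(African American | NBA) ~ {p_black_given_nba:.0%}")      # ~73%
print(f"P(NBA | African American male) ~ {p_nba_given_black:.6%}") # ~0.001650%
```

Flipping which group you condition on changes the answer by several orders of magnitude, which is exactly the trap the researchers describe.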
Considering probabilities
To explore this phenomenon in the context of politically polarized policies, the researchers asked more than 500 American adults to review a list of statistics related to terrorism or mass shootings, then select which statistic they considered the most important for evaluating policies meant to reduce the risk of those events. Participants also considered this question from the perspective of an unbiased expert and someone with an opposing viewpoint to their own. (Co-authors in Israel ran a similar study using a policy to expel asylum seekers from Tel Aviv to reduce crime.)
As the researchers suspected, participants tended to select the probability that supported their existing stance on the policy.
For example, when considering a Muslim travel ban, a supporter of the policy was more likely to point to the fact that 72 percent of immigrants who commit terrorist attacks come from Muslim countries. But an opponent of that same policy was more likely to prioritize the probability that an immigrant from a Muslim country is a terrorist: just 0.00004 percent.
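Both numbers can be true at the same time because they condition on different groups. Here is a hedged sketch in which the totals are hypothetical, chosen only so the arithmetic reproduces the article’s two figures:

```python
# Hypothetical totals, chosen only so that the two percentages quoted in the
# article fall out of the arithmetic; these are not data from the study.
immigrant_attackers_total = 100
attackers_from_muslim_countries = 72           # 72% of immigrant attackers
immigrants_from_muslim_countries = 180_000_000

# P(from a Muslim-majority country | immigrant attacker)
p_muslim_given_attacker = attackers_from_muslim_countries / immigrant_attackers_total
# P(attacker | immigrant from a Muslim-majority country)
p_attacker_given_muslim = attackers_from_muslim_countries / immigrants_from_muslim_countries

print(f"P(Muslim country | attacker) = {p_muslim_given_attacker:.0%}")    # 72%
print(f"P(attacker | Muslim country) = {p_attacker_given_muslim:.5%}")    # 0.00004%
```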
Similarly, someone who supported an assault weapons ban placed more value on the fact that two-thirds of mass shootings were committed by people who owned assault weapons. An opponent of that same policy pointed out that of the 12 million American adults who own assault weapons, just four had committed a mass shooting.
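Written as a conditional probability, the opponent’s statistic is strikingly small. A quick arithmetic check using the figures quoted above:

```python
# Figures as quoted in the article.
assault_weapon_owners = 12_000_000
owners_who_committed_mass_shooting = 4

p_shooter_given_owner = owners_who_committed_mass_shooting / assault_weapon_owners
print(f"P(mass shooter | assault weapon owner) = {p_shooter_given_owner:.7%}")
# ~0.0000333%
```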
Adopting the perspective of an unbiased expert mitigated some of this polarization, though not completely.
Importantly, the researchers found that both Democrats and Republicans placed more emphasis on probabilities that aligned with their existing views. That’s because people tend to approach these questions like “intuitive politicians” rather than statisticians, Van Boven said.
“People are basically starting with the outcome they would like and then looking for evidence to support that outcome,” he said. “When they stop and think like an expert, they can interrupt that process a little bit. When you think like an expert, we speculate that you first ask what the evidence shows—you start with the data and then reason from that perspective.”
Another important takeaway from the experiment is that people can agree about the relevance of probabilities and still hold different policy stances, based on their own underlying values. For instance, even if someone recognizes that the majority of assault weapon owners do not commit mass shootings, they might still want to ban assault weapons.
“Someone might say that the value of reducing mass shootings even by the smallest amount is worth requiring all assault weapons owners to give up their weapons, or that any reduction in the number of terrorist attacks is worth banning all Muslim immigrants from entering the country—those might still seem like worthwhile tradeoffs to someone,” Van Boven said.
Acknowledging biases
Van Boven suggests how we can apply these new findings to our own lives: For starters, if you’re trying to be more open-minded and less biased during political discussions, try mentally stepping into the shoes of an unbiased expert or statistician. Look at the numbers and see where they lead you.
“Even in these very emotional, high-conflict partisan topics, we can and should approach it through the lens of statistical reasoning,” Van Boven said. “We really should start by asking what is the likelihood of these kinds of risks?”
Another real-world takeaway: Acknowledge that we all process statistics in this biased way, not just people on the other side of the table.
“On these worrisome, pressing issues of the day, we end up stuck in inaction because of the tendencies we all have,” Van Boven said. “It’s very easy to blame the other side and say, ‘They’re not thinking carefully and they’re being irrational and unreasonable and that’s why we can’t have sensible policies.’ But really, it’s all of us.”