From the Workshop on “News and Information Disorder in the 2020 US Presidential Election.”
Lisa K. Fazio
We live in a culture where the truth has been devalued: where politicians who repeat falsehoods win elections, where lies spread further and faster than the truth, and where misinformation, conspiracy theories, and junk science run rampant on YouTube, Facebook, Twitter, and many other online spaces.
This is a huge problem. And an overwhelming one. It's often easier to simply document the problem, wring our hands about how difficult it is to solve, and do nothing. In fact, over the past few years, as researchers and the media have done a lot to document the problem, social media companies and politicians have done very little to solve it.
Twitter's and Facebook's actions during the recent U.S. election, such as more aggressively labeling false claims, promoting quality news sources, and preemptively debunking voter misinformation, are great first steps. But many of these interventions are being removed now that the election is over, and there is a complete lack of transparency about how they work. Overall, social media companies are still allowing, and in some cases encouraging, the spread of false information on their platforms.
The good news is that simple, easy-to-implement solutions can help solve this problem. The issue is that none of them, on their own, will fix it. Instead of holding out for the perfect solution, we need to start implementing numerous small changes to our political system, social media platforms, and our own behavior. We need to start implementing “good-enough interventions.”
Rather than viewing misinformation solutions as walls and barricades, we should start viewing them as Swiss cheese. Any individual solution will have holes and weak spots, but by stacking multiple interventions on top of each other we can prevent the spread of misinformation. During the current COVID-19 pandemic, public health experts have popularized this metaphor for disease prevention. Precautions such as hand washing, wearing masks, and keeping physical distance are all flawed, and no single one is 100% effective; all the slices of cheese have holes. However, the more slices of cheese in the stack, the less likely it is that a hole runs through the entire stack. By stacking multiple interventions we can achieve excellent protection.
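The arithmetic behind the metaphor is simple. Under the simplifying assumption that the layers fail independently (and with catch rates chosen purely for illustration, not measured values), even mediocre interventions combine into strong protection. A minimal sketch:

```python
# Sketch: effectiveness of stacked "Swiss cheese" interventions,
# assuming each layer acts independently (a simplifying assumption,
# not a measured result).

def stacked_effectiveness(catch_rates):
    """Probability that at least one layer stops a false claim,
    given each layer's individual catch rate."""
    slip_through = 1.0
    for rate in catch_rates:
        slip_through *= (1.0 - rate)  # claim slips past this layer
    return 1.0 - slip_through

# Three imperfect, illustrative layers: fact-check labels (70%),
# accuracy prompts (50%), and downranking (40%). None is close to
# perfect, yet the stack blocks roughly 91% of false claims.
print(f"{stacked_effectiveness([0.7, 0.5, 0.4]):.0%}")  # -> 91%
```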
Take the problem of lying politicians. Spouting untruths is an ingrained feature of the trade. But it turns out that fact-checking organizations can be effective at curtailing it. Fact-checking organizations such as PolitiFact have many roles, including informing the public about what's true or false. But their most important role is to provide a check on politicians and to encourage them to be truthful.
The threat of fact-checking can change politicians' behavior. In one study, randomly selected U.S. state legislators across nine states were sent a series of letters reminding them that their campaigns were being fact-checked and warning of the possible reputational and electoral consequences if their questionable claims were exposed. Legislators who received the letters were less likely to make false claims during their campaigns.
However, this process only works when there are consequences to being proven wrong. Recent studies in the U.S. have shown that those consequences are often missing. When people read fact-checks that contradicted politicians' statements, the fact-checks helped to correct false beliefs, but they didn't alter readers' attitudes toward the politicians or their voting intentions.
This isn't a universal quirk, however: when Australian voters are presented with similar evidence, they decrease their belief in the false statement and reduce their support for the politician. There are many differences between the U.S. and Australian political systems, but these results suggest deep flaws in the U.S. system. Due to our winner-takes-all elections and strong political polarization, it currently seems preferable to vote for a liar than to vote for the other party. Electoral changes such as ranked-choice voting may help to increase voters' choices and improve politicians' accountability to the truth.
Results such as these make it easy to think that the public does not value accurate information. But that is not the case.
The public does value accuracy and truth in the abstract. In a 2017 poll, only 18% of U.S. adults agreed with the statement “Truth is overrated, lying is the American way.” And people consciously shape their media intake to avoid lies and misinformation. Half of the social media news consumers in a 2019 Pew survey reported that they had stopped following a person because they were posting “made-up news and information.”
The problem is that when we first encounter a claim, we often rely on our emotions and how the claim makes us feel, rather than on our prior knowledge of whether it is true. Thus, the posts most likely to spread on social media are those that are emotionally arousing.
In particular, messages that contain emotional words with a moral implication (fight, war, greed, evil, punish, shame, etc.) spread further than messages on similar topics without those words. Across three studies, adding a single moral-emotional word to a tweet increased its expected retweet rate by 20%.
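If that 20% boost behaves as a constant per-word rate ratio (a compounding assumption made here only for illustration, not a finding reported above), the effect adds up quickly:

```python
# Back-of-the-envelope: expected retweet multiplier if each
# moral-emotional word multiplies the retweet rate by 1.2.
# Treating 1.2 as a constant per-word rate ratio is an
# illustrative assumption.

RATE_RATIO = 1.2

for words in range(5):
    print(f"{words} moral-emotional words -> {RATE_RATIO ** words:.2f}x baseline")
# 0 -> 1.00x, 1 -> 1.20x, 2 -> 1.44x, 3 -> 1.73x, 4 -> 2.07x
```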
But it doesn't have to be this way. Companies can help promote information that is useful, informative, and accurate, rather than just information that makes us feel good or bad. Think of how Amazon reviews include a “helpful” button to indicate which reviews are most useful, and imagine a similar feature for YouTube videos. Rather than simply marking whether they “liked” or “disliked” videos, viewers could rate the accuracy of the information, and those ratings could guide the recommendation algorithm.
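To make the idea concrete, here is a minimal sketch of how crowd accuracy ratings could be blended into a ranking score. The scoring rule and the 0.5 blend weight are assumptions for illustration, not any platform's actual algorithm:

```python
# Sketch: folding crowd accuracy ratings into a recommendation score.
# The blend weight and scoring rule are illustrative assumptions.

def rank_score(engagement, accuracy_votes, accuracy_weight=0.5):
    """Blend an engagement score (0-1) with the share of viewers
    who rated the video's information as accurate (votes of 1 or 0)."""
    if not accuracy_votes:
        return engagement  # no accuracy signal yet; fall back to engagement
    accurate_share = sum(accuracy_votes) / len(accuracy_votes)
    return (1 - accuracy_weight) * engagement + accuracy_weight * accurate_share

# A viral but mostly-inaccurate video vs. a modest but accurate one.
viral = rank_score(engagement=0.9, accuracy_votes=[1, 0, 0, 0])   # -> 0.575
modest = rank_score(engagement=0.5, accuracy_votes=[1, 1, 1, 0])  # -> 0.625
print(viral, modest)  # the accurate video now outranks the viral one
```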
Similarly, platforms can encourage users to think about the accuracy of what they are posting. When we rely on our gut feelings, we are likely to judge truth based on unreliable signals, such as how many times we've heard the claim. However, numerous research studies have shown that when people pause and think about the accuracy of what they're reading, they are better able to notice false headlines, less likely to share false information, and less affected by the effects of repetition.
Simple prompts such as “Are you sure this is true?” that pop up when users try to share a post may be effective in reducing the spread of false information. Instagram has implemented a similar intervention aimed at reducing cyberbullying: when users post something that the platform thinks might be harmful or offensive, they see a pop-up asking, “Are you sure you want to post this?” A similar friction prompt from Twitter increased the proportion of users who opened articles before retweeting them by 33%.
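In code, such a prompt is just a small piece of friction interposed before publishing. A minimal sketch, with hypothetical function names and prompt text (this is not any platform's real share flow):

```python
# Sketch: an accuracy "friction" prompt inserted into a share flow.
# The helper names and prompt wording are hypothetical illustrations.

def confirm(prompt):
    """Ask the user to pause and reflect; returns True to proceed."""
    return input(f"{prompt} (y/n): ").strip().lower() == "y"

def share_post(post, publish):
    # Interpose a moment of reflection before the post goes out.
    if confirm("Are you sure this is true?"):
        publish(post)
    else:
        print("Share canceled.")

share_post("Dubious headline", publish=print)
```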
Valuing accuracy can also help spread truthful information on social media. Many people avoid correcting false information posted on social media because they don't believe that their correction will change the opinion of the original poster. However, those corrections serve a powerful secondary purpose. While they may not be helpful to the original poster, research has demonstrated that they are helpful for other observers who view the interaction. These user corrections are particularly effective when they link to an expert source such as the CDC or the American Medical Association. So, while you might not change the mind of your cranky uncle, by responding with accurate information you will help inform other family members who view the interaction.
Lisa K. Fazio is an assistant professor of psychology and human development at Vanderbilt University.
Cross-posted at the Knight Foundation.