Facebook, YouTube and Twitter have long maintained lists of prohibited claims to limit information on their sites that they deem misleading about the coronavirus. YouTube has gone further, with a fairly broad ban on videos that question the effectiveness or safety of approved vaccines, including those against measles.
Maybe these rules make sense to you. But they can also sound like an attack on expression – and an insult to our intelligence.
Most people who see YouTube videos (falsely) claiming that an animal dewormer cures the coronavirus won’t swallow their dog’s pills, and most people who post concerns about vaccine side effects aren’t anti-vaccine fanatics. Can’t we speak freely on the internet and form our own opinions? Isn’t it counterproductive and un-American to declare certain discussions off-limits?
There are no easy answers to these questions. But I want to share how my perceptions have changed a bit after speaking with Brendan Nyhan, a professor at Dartmouth College who studies misperceptions about politics and healthcare. Nyhan gave me a different way of thinking about misinformation online: it’s not about you.
Nyhan suggested that we view internet companies’ rules as being designed for the small number of people who strongly believe, or are inclined to believe, things that are patently wrong and potentially dangerous. Stay with me.
The conversation resonated because it got at something that bothers me about the catch-all term “misinformation.” It conjures up a world in which everyone is a neo-Nazi, an anarchist, or a con man selling fake health potions – or vulnerable to being duped by them.
We know this is hogwash. But Nyhan said it was crucial that we have internet rules for the extremes, on both the speaking and the listening end.
“A lot of people will be exposed to misinformation, and it will have no effect,” Nyhan told me. “But if even a few people believe powerful false claims, such as that an election was illegitimate or that a vaccine causes autism, then that might call for a more aggressive approach.”
Nyhan isn’t saying popular websites should restrict all discussion that includes extreme or unpopular views. (He has written that the types of limits placed on discussions of Covid-19 shouldn’t apply to most political expression.)
But for a selection of high-stakes issues that could lead to real-world harm, internet companies may need restrictive rules. They have also tried nudging people to think carefully about what they read and share, without prohibiting certain types of conversation outright.
Nyhan acknowledges that it is difficult to decide which topics are high stakes, and he fears that a handful of internet companies have become so influential that they dictate public discourse and often misapply their policies.
Above all, Nyhan rejects two overly simplistic ideas: that the average person is likely to fall for any wacky thing they read online, and that those wacky things pose little risk.
“We need to focus more on how platforms can enable an extremist minority to foment harm, and not on how the average person might be brainwashed by content they have viewed a few times,” Nyhan said. “We should think about people who consume a large amount of hateful or extremist content on YouTube, or anti-vaccine groups that don’t reach a lot of people but could do a lot of harm to the people they do reach.”
Honestly, I hate this. Why should sites like YouTube and Facebook be designed to defuse the worst risks posed by conspiracists and racists? What about the parent worried about the side effects of their child’s measles vaccine, or your coworker wondering about the Arizona recount? Not everything we’re curious or skeptical about is misinformation. Can’t we just, you know, talk about stuff on the internet? Won’t it be fine?
Nyhan’s answer is basically, yes, it will probably be fine for most of us – but we have to think about the margins. And on rare occasions, that can mean sacrificing the ability to instantly say absolutely anything online in order to protect us all.