Understanding YouTube’s Disinformation Ban



YouTube took it a step further last week with a fairly broad ban on videos that question the effectiveness or safety of approved vaccines, including those against measles. Maybe these rules make sense to you. But they can also sound like an attack on expression – and an insult to our intelligence. Most people who see YouTube videos (falsely) claiming that an animal deworming medicine cures the coronavirus won't swallow Fido's pills, and most people who post their concerns about vaccine side effects aren't anti-vaccine fanatics. Are we not able to speak freely on the internet and form our own opinions? Isn't it counterproductive, and anti-American, to declare certain discussions off-limits?

There are no easy answers to these questions. But I want to share how my perceptions have changed a bit after speaking with Brendan Nyhan, a professor at Dartmouth College who studies misperceptions about politics and health care. Dr Nyhan gave me a different way of thinking about disinformation online: it's not about you. He suggested that we view internet companies' rules as being designed for the small number of people who strongly believe, or are inclined to believe, things that are patently wrong and potentially dangerous. The conversation resonated because it got at something that bothers me about the catch-all term "disinformation". It conjures up a world in which everyone is either a neo-Nazi, an anarchist, or a con man selling fake health potions – or vulnerable to being duped by them.

We know this is hogwash. But Dr Nyhan said it was crucial to have internet rules that account for the extremes, among both speakers and listeners. "A lot of people will be exposed to misinformation, and it will have no effect," Dr Nyhan told me. "But if even a few people believe powerful false claims, like that an election was illegitimate or that a vaccine causes autism, that might call for a more aggressive approach." Dr Nyhan isn't saying popular websites should restrict all discussion that includes extreme or unpopular views. (He has written that the kinds of online limits placed on Covid-19 discussions shouldn't apply to most political speech.) But for a handful of high-stakes issues that could lead to real-world harm, internet companies may need restrictive rules. Internet companies have also encouraged people to think carefully about what they read and share, without forbidding certain types of conversations. Dr Nyhan recognizes that it is difficult to decide which topics are high stakes, and he fears that a few internet companies have become so influential that they dictate public discourse and often misapply their own policies.

Above all, Dr Nyhan dismisses two overly simplistic ideas: that the average person is likely to fall for any wacky thing they read online, and that wacky things online pose little risk. "We need to focus more on how platforms can enable an extremist minority to foment harm, and not on how the average person might be brainwashed by content they have viewed multiple times," Dr Nyhan said. "We should think about people who consume a large amount of hateful or extremist content on YouTube, or anti-vaccine groups that don't reach a lot of people but could do a lot of harm to the people they reach." Not everything that interests us or gives us pause is disinformation. Can't we just, you know, talk about stuff on the internet? Won't that be fine? Dr Nyhan's answer is, basically, yes, that will probably be fine for most of us – but we have to think about the margins. And on rare occasions, that could mean sacrificing the ability to immediately say absolutely anything online in order to protect us all.

Ovide is a technology writer at The New York Times. © 2021 The New York Times
