As the coronavirus continues to sicken Americans and many others worldwide, people, especially those fortunate enough to live in countries with strong Covid-19 vaccine access, need to be encouraged to get vaccinated. Vaccines not only protect recipients from serious illness and death; they also reduce the risk of spreading the virus to people who aren't eligible to be inoculated, such as young children and those with certain medical conditions. The spread of misinformation about vaccines must be stopped.
That’s why the White House is right to question Section 230. It does need to be updated. But the exceptions to this law must be extremely narrow and focus on widespread misinformation that clearly threatens lives.
According to the Center for Countering Digital Hate, just 12 people are responsible for 65% of the anti-vaccine misinformation circulating online. The organization found 812,000 instances of anti-vaccine content on Facebook and Twitter between February 1 and March 16, 2021, which it said was only a "sample" of the misinformation spreading widely.
But there’s a way to protect the openness of the Internet and the ability of social networks to operate while still cracking down on falsehoods that cause mass harm. Congress should pass a law holding tech companies responsible for removing content that directly endangers lives and achieves mass reach — such as more than 10,000 likes, comments, or shares. The definition of endangering lives should also be narrow. It should include grave threats to public health — like vaccine misinformation — or other direct invitations to cause serious harm to ourselves or others.
Such a requirement would allow tech companies to focus their efforts on policing content that spreads widely (which, not incidentally, is also the content that makes them the most money, since social networks rely on popular posts to keep people on their sites and earn advertising revenue). Content with the most reach and engagement is, of course, the most influential and thus potentially the most harmful.
The real idea here is that the prospect of financial penalties and the public relations damage that comes with lawsuits would cause social networks to step up their policing of misinformation to avoid facing suits in the first place. That would keep the onus mostly on companies to ferret out and shut down fake news that is dangerous and widespread.
If tech companies can figure out how to remove clips that harm people’s commercial interests, surely they can also figure out how to take down posts that pose threats to our lives.
Like the viruses that vaccines protect us against, misinformation has become explosively contagious and deadly on social media. Congress should inoculate us against some of the worst of it while preserving broad, unfettered speech that doesn't threaten lives.