Leaving aside the ridiculous and ignorant suggestions from some that no internet platform should moderate anything, many people seem to believe (incorrectly) that the various internet companies refuse to moderate anything because moderation goes against their bottom lines. We’ve heard this from a number of politicians — especially among those seeking to change Section 230 — arguing (again, incorrectly) that because of Section 230 there’s somehow no incentive for companies to moderate content on their platforms.
This is wrong on multiple levels. There is tremendous business, political, moral, and social pressure to moderate content on these platforms. When companies get it wrong, they get criticized. They can lose users. And (importantly) they can lose advertisers, partners, customers, and investors. There is demand for “healthy” platforms, and it’s Section 230 that allows companies to experiment and moderate accordingly. That’s why it’s notable to me that both Twitter and Facebook announced the removal of what appears to be a coordinated attempt to abuse both platforms to push disinformation against protesters in Hong Kong. Here’s Facebook’s announcement:
Today, we removed seven Pages, three Groups and five Facebook accounts involved in coordinated inauthentic behavior as part of a small network that originated in China and focused on Hong Kong. The individuals behind this campaign engaged in a number of deceptive tactics, including the use of fake accounts — some of which had been already disabled by our automated systems — to manage Pages posing as news organizations, post in Groups, disseminate their content, and also drive people to off-platform news sites. They frequently posted about local political news and issues including topics like the ongoing protests in Hong Kong. Although the people behind this activity attempted to conceal their identities, our investigation found links to individuals associated with the Chinese government.
And here’s Twitter’s announcement:
This disclosure consists of 936 accounts originating from within the People’s Republic of China (PRC). Overall, these accounts were deliberately and specifically attempting to sow political discord in Hong Kong, including undermining the legitimacy and political positions of the protest movement on the ground. Based on our intensive investigations, we have reliable evidence to support that this is a coordinated state-backed operation. Specifically, we identified large clusters of accounts behaving in a coordinated manner to amplify messages related to the Hong Kong protests.
As Twitter is blocked in PRC, many of these accounts accessed Twitter using VPNs. However, some accounts accessed Twitter from specific unblocked IP addresses originating in mainland China. The accounts we are sharing today represent the most active portions of this campaign; a larger, spammy network of approximately 200,000 accounts — many created following our initial suspensions — were proactively suspended before they were substantially active on the service.
Despite common perception, both companies have put a lot of effort into discovering and stopping these kinds of campaigns. Of course, none of it will be perfect, because content moderation at scale is impossible to do well. Mistakes, both false positives and false negatives, are inevitable. But if anyone thinks that modifying Section 230 will magically make companies better at this, they’re not paying attention. Adding more liability to companies over their moderation choices won’t make these efforts any better or any easier — it may just bog the companies down in lawsuits.