Kevin Drum, writing about the trilemma he faced moderating comments when he had his own blog:

The same trilemma affects huge social media platforms:

  • A system that’s big and effective (i.e., lightly moderated by the platform company) will inherently be inconsistent.
  • A system that’s big and consistent will inherently require huge resources from the platform company and therefore won’t be effective.
  • A system that’s effective and consistent requires too much human intervention to ever become very big.

Most people don’t get this, and therefore expect too much from platform companies like Twitter and Facebook. These companies can use automation to do a lot of the job, but automation isn’t even close to perfect yet. So what do you do? If the automation is too tight, it will eliminate innocent comments and everyone will scream. If the automation is too loose, it will let lots of hate speech through and everyone will scream. If you ditch the automation and use humans, you’ll go bankrupt—and anyway, human moderation is far from perfect too.

Here’s a thought: Don’t let your platform grow so huge that you can’t handle the task of managing the assholes who use it.

The problem with Facebook and Twitter is not that they haven’t found the right balance between size, effectiveness, and consistency. The problem with Facebook and Twitter is that their business models depend on an enormous and ever-growing user base.

They do a lot of hand-waving about AI and machine learning, but it is pretty clear that even if such capabilities might exist someday, their current state is nowhere near adequate to effectively moderate user interactions on these platforms.