
How about not having platforms so large that their policy decisions carry this much weight?

On Substack Nazis, laissez-faire tech regulation, and mouse-poop-in-cereal-boxes:

It is bad and weird that Google, Facebook, Apple, and the rest of big tech have been left to play the role of regulator-of-last-resort. Their executives complain, at times correctly, that even if they have the right as private businesses to make these decisions, we would all be better off with some other entity making them.

(The hitch here, of course, is that one reason we have reduced government regulatory capacity to make and enforce these decisions is that these same companies have worked tirelessly to whittle down the size and scale of the administrative state. It has been a project of attaining great power while forswearing any responsibility. Which is, y’know, really not great!)

But the philosophical issues are secondary to the pragmatic ones. Pragmatically, it’s really quite simple. Content moderation is costly. It is a first-order revenue sink, not a revenue generator.

Content moderation is hard, risky work that will inevitably piss people off. None of these big tech companies wants to do it.

If your plan is to perpetually and exponentially grow your user base into the hundreds of millions and then billions of users, you are stuck with the problem that some of those users are going to post atrocious stuff on your platform. The more of them you have, the bigger that problem is.

If you are going to do something about it, you will find yourself on a path of pouring ever-increasing operating expenses into managing and making judgements about all the offensive content users are putting onto your platform. Content moderation doesn’t scale, and it doesn’t generate revenue. It is only ever going to cost these companies money and generate problems for them; of course they don’t want to do it.

For a while, they tried to wave the problem off by saying they’d use machine learning to handle it. When that tactic didn’t work, they forced low-wage workers to sift through all the garbage. In the end, they all fall back on the same sort of principled-sounding arguments that Substack is making, i.e., hollow statements about freedom of expression and sunlight being the best disinfectant. These arguments 1) externalize the costs of the offensive content these platforms are hosting and 2) fit perfectly with the self-serving, high-school-level conception of freedom of expression that tends to be popular with the Silicon Valley tech entrepreneurs who run these platforms.

There is a way to solve this problem, and that is to not have companies with millions upon millions of users. Making a judgement about whether this post or that reply is offensive or abusive or gross is never easy, but it is made exponentially more difficult and consequential when your user base is larger than the population of many countries.
