Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users

This is generally only a problem for platforms that accept posts and comments from anybody in the world by default. We use an old-fashioned concept called “permissions”, which turns moderation into a personal decision that is rarely used or even necessary.

That said, there’s nothing preventing block and report activities from federating to and from end users, and anybody can individually choose whether or not to receive or act on such activities (manually or automatically) depending on their trust in the moral compass of the author. For instance, if you and Joe have the same feelings about climate change, you might accept his blocks without question, but maybe not Ellen’s, since she tends to block people with different sexual orientations or preferences. Joe and Ellen may in turn be accepting and relaying blocks from others who reinforce their own beliefs.
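To make the idea concrete, here’s a minimal sketch of what trust-based acceptance of federated blocks could look like. Everything here is illustrative, not any real platform’s API: the `TRUSTED_BLOCKERS` set, the `receive_block` helper, and all actor URLs are assumptions for the example.

```python
# Hypothetical sketch: accept incoming Block activities automatically
# only when the author is someone whose judgment you trust.
# All names and URLs below are illustrative.

TRUSTED_BLOCKERS = {"https://example.org/users/joe"}  # Ellen deliberately absent

local_blocklist = set()

def receive_block(activity):
    """Handle an incoming federated Block activity (as a parsed dict)."""
    author = activity.get("actor")
    target = activity.get("object")
    if author in TRUSTED_BLOCKERS:
        local_blocklist.add(target)  # accept and apply automatically
        return True
    return False  # ignore, or queue for manual review instead

# Joe's block is accepted; Ellen's is not.
receive_block({"type": "Block",
               "actor": "https://example.org/users/joe",
               "object": "https://badsite.example/users/troll"})
receive_block({"type": "Block",
               "actor": "https://example.org/users/ellen",
               "object": "https://other.example/users/someone"})
print(sorted(local_blocklist))
# → ['https://badsite.example/users/troll']
```

Relaying would just mean re-sending the accepted activity to your own followers, who apply the same trust test on their end.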

In this way, different communities of oppressed minorities can co-exist in the same space without anything happening at a “network level”, without ever encountering their enemies, and only rarely blocking somebody who slipped through the cracks or hasn’t yet been reported by their peers. It would basically be a fediverse implementation of “block together”. ActivityPub supports this natively; it’s a “simple matter of programming” to make it happen.
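For reference, the ActivityStreams vocabulary that ActivityPub builds on already defines a `Block` activity type, which is why no network-level machinery is needed. A minimal federated block from Joe might be serialized like this (the actor and object URLs are illustrative):

```python
import json

# A minimal ActivityStreams Block activity, as it might appear on the
# wire between servers. URLs are illustrative placeholders.
block = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Block",
    "actor": "https://example.org/users/joe",
    "object": "https://badsite.example/users/troll",
}

print(json.dumps(block, indent=2))
```

Whether a receiving user’s software applies, relays, or ignores such an activity is entirely a local, per-person policy decision.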

But (and this is important) censorship is in your own hands: you can delegate it, or you can manage it yourself. The site admin is only involved in moderating the “public stream” (aka TWKN) and site registrations; and on all of my own platforms they can turn either or both of these off, partially or completely, and spend their time living their life instead of censoring people with a different world view. Moderating at the admin level simply does not scale.
