Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users

David Sterry via SocialHub writes:

On the solutions side, I wanted to point to how Freenet handled it: by propagating blocks and using a web of trust. Federated Moderation: Towards Delegated Moderation? - #3 by ArneBab
It seems to me that webs of trust have a ton of potential on the fediverse.

The most important point about that is that, using only constants of
social interaction, you can prove that this structure scales to
arbitrary size:

This gives an upper bound for propagating blocks and content discovery
of polling, on average, 15-30 users per minute, a value that should still
hold at a user count in the billions.


Thanks very much @ArneBab. This is going to be fun and probably a lot of work, but honestly I've been thinking about web of trust + social media for a couple of years and probably have some biases to shed along the way.

Continuing this effort in Scalable Moderation using a web-of-trust model and Problem: Existing moderation strategies do not scale · Issue #203 · c4social/mastodon · GitHub

I probably mentioned this before. Send Block activities to your followers and/or to the public inbox, instead of or in addition to your site admin. The code on the receiving end could render these into a form with the options "accept", "reject", "ask me", "block sender instead", and "remember this decision" (i.e. automatically handle future Block activities from this actor in the same manner).

With that, you have a scalable federated web of trust, at least up to the limit of your available storage resources, a problem inherent in any solution based on blocking. The site admin's interface works exactly the same, but admins might also follow some automatons which send Block activities in reaction to network-wide events. And all of this is supported by the current ActivityPub specification.