Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users

There is no dispute on this point. From the draft:

I believe your main argument is with Metcalfe's law: that the value of a network grows with its size.

Again there's no conflict. More ways to filter information are always good, provided the side effects are acceptable, but this statement was about a particular problem where the side effects may be underappreciated. Would you say the statement is too broad and should be scoped differently?

For a bit of background, I'm taking inspiration from other decentralized networks that are relatively promiscuous with connections but whose protocols are very restrictive about what is valid. Many successful network protocols are agnostic as to the data they wrap, so I'm trying to determine whether this idea extends to peering when humans are closely engaged with that data.

If there’s anything I missed, please point me to it and thanks for your responses.

Hi @weex, I love the topic you started and just left a comment in the repo with some pointers to related discussion, which I am now tempted to copy/paste here (to avoid fragmentation :wink: ).

I agree more with the thesis of @VictorVenema that there's value in bringing moderation more into the limelight: make moderation a first-class citizen of the Fediverse and not something that happens on the fringes, out of sight of regular fedizens who use 'free' instances carelessly without giving thought to the work involved in upkeep.

There's a brainstorm I started on Lemmy's Fediverse Futures, and a related thread here:

There are some great ideas in what you both are discussing above. In the ideation processes I'd like to help facilitate, I imagine we keep track of all the possibilities and elaborate the ideas that have the most value, so that they can be taken to concrete specifications and implementations.


OT: @weex, would you consider joining the Fedi Foundation initiative, instead of your own Feneas repo, to elaborate this stuff? Together we are stronger, and I created Fedi Foundation with the explicit purpose of being the guiding thread to SocialHub community activity. This repo can be on Fediverse Futures - Codeberg.org and be part of a solid community process. See also: Presenting Fedi Foundation: Empowerment for SocialHub community

Thanks, and I replied to your comment in the repo. One challenge with this effort, I think, is narrowing the focus of each issue. As this is the first, it's tempting to enumerate many other issues in any response. I've seen many forms of moderation over the years and there are good problems to discuss with each.

Another challenge will be adding more problems, so, inspired by your Lemmy post, I would suggest something about moderation being reactive, too expensive in admin time, underappreciated, an afterthought, or, as in some systems, totally ineffective.

Once there’s a post, we can build on that as well!


OT: The project isn't married to any particular server, but if you think it'll make it harder to build a team interested in defining and refining fundamental issues, then I could see the hosting choice as a problem. Either way, it's early and there is little attention. I'll keep sharing activity, wherever it's hosted, when appropriate.


This is generally only a problem for platforms that allow posts and comments from anybody in the world by default. We use an old-fashioned concept called "permissions", and this turns moderation into a personal decision which is rarely exercised or even necessary.

That said, there's nothing preventing block and report activities from federating to/from end users, and anybody can individually choose whether or not to receive or act on such activities (manually or automatically) depending on their trust in the moral compass of the author. For instance, you and Joe have the same feelings about climate change, so you might accept his blocks without question, but maybe not Ellen's, since she tends to block people with different sexual orientations or preferences. Joe and Ellen may in turn be accepting and relaying blocks from others who reinforce their own beliefs.

In this way, different communities of oppressed minorities can co-exist in the same space without anything happening at a "network level", without ever encountering their enemies, and only rarely blocking somebody who slipped through the cracks or hasn't yet been reported by their peers. It would basically be a fediverse implementation of "block together". ActivityPub supports this natively - it's a "simple matter of programming" to make it happen.
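To make that concrete, here is a minimal sketch of the receiving side, assuming a Block activity federated to followers and a per-user list of trusted blockers. The actor URLs and the `trustedBlockers` set are hypothetical placeholders; only the Block activity shape comes from ActivityStreams.

```typescript
// Sketch: a federated Block activity, applied only if the issuing actor
// is one the receiving user has chosen to trust ("block together").
// All URLs and the trust list are illustrative placeholders.

interface BlockActivity {
  "@context": string;
  type: "Block";
  actor: string;   // who issued the block
  object: string;  // who is being blocked
}

const incoming: BlockActivity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Block",
  actor: "https://example.social/users/joe",
  object: "https://elsewhere.example/users/troll",
};

// Each user curates the set of actors whose blocks they accept.
const trustedBlockers = new Set<string>(["https://example.social/users/joe"]);

function shouldApply(activity: BlockActivity): boolean {
  return activity.type === "Block" && trustedBlockers.has(activity.actor);
}

if (shouldApply(incoming)) {
  console.log(`Hiding ${incoming.object} on the strength of ${incoming.actor}'s block`);
}
```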

But (and this is important) censorship is in your own hands. You can delegate it if you want, or you can manage it yourself. The site admin is only involved in moderating the "public stream" (aka TWKN) and site registrations; on all of my own platforms they can turn either or both of these off partially or completely, and spend their time living their life instead of censoring people with a different world view, which simply does not scale.


Perhaps there are varying definitions of federation, but being able to see and interact with posts and comments from anybody in the world is the broad view I take of the concept. For example, we connect to the internet, and though it has its barriers, there really isn't another internet to connect to, just different entry points. Do you think of federation in a narrower sense? Do you get the sense that a significant fraction of the fediverse sees it as purposely fragmented?

How might that work? I can’t think of the primitive that most closely handles blocklists (OT really for this problem statement, but I’m curious).

What does this stand for?

Being able to see and interact with posts and comments from anybody in the world is different from having them shoved in your face.

You may or may not be familiar with Facebook. There was a time when kids would refuse to "be friends" with their parents because this allowed their mother to make embarrassing comments in their stream or post baby photos that would be seen by all their other friends. Later FB added fine-grained permissions so that you could still be friends with your mum but not let her do this kind of thing in public. Note that this is different from the Twitter model of interaction and the email model of interaction, but it is a recognised form of social interaction - only allowing interaction with friends, and then perhaps not every kind of interaction with all of them. We also let you decide whether or not you are willing to accept DMs or mentions from anybody, separately from comment permissions. Some people want this, some don't.

I use a very simple permissions use case. If you're an attractive woman with a real profile photo, or who ever publishes photos of herself, you are going to be hit on relentlessly in public spaces and have your inbox filled with dickpics. Every day. Forever. How would you make the fediverse tolerable or even usable by this person? That is the kind of platform I've built.

“Block” (as well as “Flag”) are recognised activity types. A blocklist would be a Collection of activities of type Block.
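As an illustration, such a blocklist might be serialized like this (a sketch only; the URLs are invented, but Collection and Block are standard ActivityStreams types):

```typescript
// A blocklist as a Collection whose items are Block activities.
// Identifiers are illustrative only.
const blocklist = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Collection",
  id: "https://example.social/users/joe/blocklist",
  totalItems: 2,
  items: [
    {
      type: "Block",
      actor: "https://example.social/users/joe",
      object: "https://spam.example/users/bot1",
    },
    {
      type: "Block",
      actor: "https://example.social/users/joe",
      object: "https://spam.example/users/bot2",
    },
  ],
};
```

A "block together" subscriber could then fetch such a Collection and feed each item through a trust check like the one sketched earlier in the thread.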

TWKN is “the whole known network”. In our sector of the fediverse we call it the “public stream”. Some call it the “network timeline”. It is generally an unregulated and unmoderated space where anything goes. Consequently we let the site admin turn it off. We also let you turn it off.


FWIW, the link in the original post isn't primarily about moderation, and certainly not an argument against moderation in general; it's more about the risk of TWKN becoming multiple fragmented networks. Maybe a split network is too implausible to worry about. Just this evening I ran across GoToSocial, which provides an instance discovery mode, though I believe it can only discover whatever's reachable from admin-provided connections.

This is great. Do you know of any implementations of such a feature? My feeling about a lot of the censorship in big tech is that I’d much rather there be APIs that would enable users to filter what they see at the edge.

+1 I didn't know they had this level of granularity. I suppose it's based on something like Diaspora's aspects? Or did they just make the wall into a news feed?


We already have a split and fragmented network due to rampant site blocks. I will only suggest that on a system built around permissions this is rarely needed, and users are able to filter what they see at the edge, as you suggest. I've only blocked 3 individuals over the course of 11 years and have never blocked a site. This doesn't mean I'm overly tolerant - I'm not. But permissions do all the dirty work of keeping foul-mouthed strangers out of my timeline. The only place they sneak in is occasionally through public groups where we are both members. Then we pull out "superblock", which implements "make this person vanish from my life".

I believe Mastodon and Pleroma both provide activities of type Block or Flag, or possibly both, but these are typically only shared with and amongst admins. There is no technical reason why you couldn't share such activities with your followers and let them make use of (or perhaps ignore) them, rather than restricting this ability to admins. I would have built such "block together" functionality already using these activities (it isn't rocket science and the protocol supports it) - but my projects don't actually need it.

Diaspora's aspects are just a fancy name for mailing lists. They don't provide or enforce any permissions per se; they just restrict posts in the current stream view and set the post audience to those in the given list. My own fediverse history slightly pre-dates Diaspora, and we've been providing fine-grained permission control since about 2010; the full permissions framework was completed around 2012. We used Facebook's permissions as a usage model initially, but have since gone beyond that. This lets you decide who can send you their posts and comments, who can comment on your own posts, who can see your friends and followers or even the relevant totals, who can access your posts, and who can access your photo albums and personal cloud storage resources. This list of rights is extensible, so when we added a wiki to the project, we also added 'view_wiki' and 'edit_wiki' permissions. You can set any of these to a list (aspect), enumerate the allowed/disallowed folks individually, or make them public. There are other possibilities as well, such as everybody on this site, connections only, mutual connections, or only people using a specific protocol. It's all in your hands. If you lack the relevant permission, we refuse the action.
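A minimal sketch of how such an extensible rights check could be structured. The right names 'view_wiki' and 'edit_wiki' come from the post above, but the data model and function names are my own illustration, not the project's actual API:

```typescript
// Extensible per-right permission check, in the spirit described above.
// Audience kinds mirror the options mentioned: public, this site,
// connections, or an explicit list (aspect). Names are hypothetical.

type Audience =
  | { kind: "public" }
  | { kind: "site" }        // everybody on this site
  | { kind: "connections" } // accepted connections only
  | { kind: "list"; members: Set<string> }; // an explicit list/aspect

// Rights are just strings, so new ones (like a wiki's) can be added freely.
const rights = new Map<string, Audience>([
  ["view_wiki", { kind: "public" }],
  ["edit_wiki", { kind: "list", members: new Set(["https://example.social/users/joe"]) }],
]);

function allowed(
  right: string,
  actor: string,
  isConnection: boolean,
  isLocal: boolean,
): boolean {
  const audience = rights.get(right);
  if (!audience) return false; // lacking the relevant permission: refuse
  switch (audience.kind) {
    case "public":      return true;
    case "site":        return isLocal;
    case "connections": return isConnection;
    case "list":        return audience.members.has(actor);
  }
}

// e.g. allowed("edit_wiki", "https://example.social/users/joe", true, true) // true
```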


Summarized a couple of relevant points from this discussion and gave some guidance on next steps here.

The most controversial part of this probably has to do with how one values a network. So the question comes to mind: are there any papers that study the fediverse with methodology similar to how the big tech platforms have been studied in relation to mental health, misinformation, etc.?

Something which I thought was common knowledge, but may not be so widely known, is what this thing called "community" actually looks like in the real world. People think they are building a "global and universal community", but such a thing has never existed in the real world. Not even on Twitter or Facebook.

Every community has boundaries, and those boundaries actually define the community, as in: who is "in" and who is "out". In this sense, the use of sites as community hubs, with site blocking to keep out the "out" mob, was a natural evolution. It is analogous to medieval Europe after the collapse of empire: you absolutely need to build walls and fortifications and defend them. Even empires need to do this, just on a much larger scale. In my case, I'm putting the walls around the individual through the use of permissions, so that they have personal control over who is in and who is out, and in doing so build their own personal community. It's just a different level of granularity than that offered by using the instance or site as a natural boundary, and it lets them build their community as more of a matrix of individuals than an alliance of conglomerates (sites/instances). Because even good sites can be infiltrated by bad actors who destroy the community from within.

One can try to enforce community boundaries at the network level, but this doesn't work very well under an open protocol that anybody can join, where nobody can actually de-platform you, and where, whether you like it or not, nobody is actually in charge. So Facebook- and Twitter-style moderation won't work here. And site-level restrictions are meaningless in my world, where accounts are nomadic and can show up in different domains at any time with all their friends and followers and content intact.

At the end of the day we’re all solving exactly the same problem, just at different levels of the org chart.

It is funny that you say that, because I wholly agree with this notion of the importance and structure of community, yet for a possible paradigm for automating communities on the Fediverse I've deliberately chosen "Community has no Boundary". What this refers to is how society is like a 'bubbling foam of communities' of all different kinds, intersecting and touching, popping into and out of existence, with people participating in them in numerous dynamic conglomerations. A person's memberships in communities, and the roles that relate to them, are uniquely theirs and personal.

So community in the real world is very organic, and it is a complex concept; there's a world of sociology behind it. I want the "Community has no Boundary" paradigm to capture some of that in a way that lends itself to translation to the online world too. Currently across the internet - if you look at the Fediverse, but also most other platforms - we still have a quite shallow, rigid notion of what community entails.

(The social media molochs also try to capture community concepts on their platforms, with e.g. FB Groups, but their incentive to maximize engagement thwarts the effort. I won't go into that here.)

This 'shallowness' is understandable: as we automate things we strive to keep complexity at bay by simplifying our abstractions, and - until the emergence of the giant traditional social media platforms - we could afford a very limited scope when defining community. Many platforms can be considered as facilitating either a single community or, at most, a simple group of communities. Membership most often means you are either in or out, and membership levels are mostly defined as a straightforward set of permissions rather than intricate roles.

For the Fediverse specifically, there are a few apps that offer Groups support, where the group is a form of community; other than that, we perceive instance membership to be community-related. In other words, your user account on a server implicitly makes you a community member of the instance. We only 'officially' support a very small set of relationships that a member (Actor) has to a community, and we use app-dependent "tricks" to do so. The member relationships are expressed either as permissions/privileges on an actor account (e.g. member, mod, admin) or determined by collections on an actor (e.g. if you are in a Group's "followers" collection, then you are a member of the group). There may be more ways in which 'community relationships' are currently expressed, but AFAIK no implementation thus far uses as:Relationship for that.
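For what it's worth, here is one guess at what expressing such a membership with as:Relationship could look like. Since, as noted, no implementation does this today, both the relationship term and the URLs are hypothetical:

```typescript
// Community membership modeled as an ActivityStreams Relationship object.
// The "memberOf" term is a made-up extension property, not an existing
// vocabulary entry; URLs are illustrative.
const membership = {
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    { memberOf: "https://example.social/ns#memberOf" },
  ],
  type: "Relationship",
  subject: "https://example.social/users/joe",      // the member (Actor)
  relationship: "memberOf",                          // hypothetical term
  object: "https://example.social/groups/gardening", // the community
};
```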

My interpretation of the above may well be incomplete (lack of knowledge) or perceived as incorrect, especially when measured against current reality and technical realisations. The fact is that at this moment there is no meaningful common understanding of the concept of Community: no widely agreed-upon Community domain, and no shared vocabulary of terminology to use.

Part of the "Community has no Boundary" investigation is trying to come up with a minimum ontology that can be standardized as a possible AP extension to adopt. It need not be the only one addressing group structures, memberships, or even community concepts, but it can be part of an emerging pattern library of extensions from which app developers can choose the ones most appropriate to their use case.

And ideally this ontology will be applicable beyond the fediverse, to e.g. Solid applications, to @bhaugen and @lynnfoster's Valueflows (which already based its Agent model on the AS specs), or to Matrix for that matter, etcetera, so that interoperability between these different ecosystems remains relatively straightforward. A more universal applicability would be a huge boon to the decentralized web at large.

I think this is quite doable and - if care is taken to focus on the lowest common denominator - it need not be overly complex to adopt in basic form. More complexity can be added by supporting additional extensions on top of that.


PS. @weex, I am sorry that this response to Mike's post is once more wide-ranging and somewhat off-topic-ish for the original title. You might extract useful stuff to the gitlab issue, or I can move this to the community topic and just leave a reference here, or just leave it as-is.

Technology does not create a social network. People do. As far as I'm concerned, the protocol is irrelevant; we're just sending messages back and forth. The only thing that's interesting is that my fediverse - which is dramatically different from yours - can co-exist in the same space. And that is the solution to the problem referenced in the OP. We all can exist in the same space. The only thing that keeps us from killing each other is that I can ignore you and you can ignore me. This is all a protocol or project needs to support. I'll only suggest that if your software does it at a site level or network level or protocol level, I'll just laugh. Those jedi mind tricks don't work here. For us, it's personal.


Sure. It obviously always boils down to the people; technology is merely supportive of that. But protocols can be improved so they offer better support. I really liked how you described moderation in the posts above - quite appealing. And it can be combined with the community concepts I outlined. I feel that - however they are implemented - better support for community concepts online is crucial for social networking to improve and evolve.

When you say that 'we are just sending messages back and forth': if you mean that in a technical sense, it is true, but it also trivialises the challenge we have in a heterogeneous decentralized network. If you mean it in a micro/macro-blogging context, then it's true as well, but that covers only a tiny sliver of the use cases and potential of the protocol. Maybe it's a pipe dream of mine, but I would love to see app silos disappear as more task-oriented services arise.

(Have to say I am not too optimistic on the progress here, atm :slight_smile: )

Update: For the federated software I have in mind, I'd need to model something of a Governance domain on top of Community. That goes slightly beyond moderation, but it may be defined such that you can express your moderation policies with it. I don't really know how it would look, though.

Metcalfe's Law relates to the number of connections between nodes in a network, which grows as n^2. This is taken as a proxy for "value", but a connection within the network does not invariably generate positive social good among its participants, as is clear to anyone perusing the social web, and as related anecdotally here by @VictorVenema.
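For reference, the counting behind the law: among n nodes, the number of potential pairwise connections is

```latex
% Potential pairwise connections among n nodes (Metcalfe's law)
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
\quad \text{for large } n.
```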

The “Law” would be better phrased as relating to network potential, since “value” means different things to different people. To many, it would be of great “value” to their social web experience to have an entire Nazi clique severed from the network.

I'm a newbie to the Fediverse and really enjoying reading these discussions, especially those regarding moderation, which is an activity that is under-recognized in creating value for network participants. And it does so by restricting connections! Clearly the firehoses of awfulness on the centralized social web are not an unmitigated good, which is how I found myself here :wink:

P.s. I know I’m saying mostly obvious stuff in this post, but wanted to join the discussion all the same. Thanks for humoring.


About the whole last paragraph, and this:

Though hard to quantify, it’s hard to imagine the value of solving this problem being lower than hundreds of billions of dollars globally.

This is the idea of an "EU-funded fediverse": that governments see the value you described and acknowledge that it is worth a portion of tax money. I recently talked with the EU Tech Lead and think we should lobby for that.

On the solutions side, I wanted to share how Freenet handled this by propagating blocks through a web of trust: Federated Moderation: Towards Delegated Moderation? - #3 by ArneBab
Seems to me webs of trust have a ton of potential on the fediverse.



The most important point about that is that you can prove, using only constants of social interaction, that this structure can be scaled to arbitrary size:

This gives an upper bound for propagating blocks and content discovery at (on average) polling 15-30 users per minute — a value that should still hold true at a user count in the billions.
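My own back-of-envelope reading of that constant-cost claim (an assumption on my part, not taken from the linked post): if each user only ever polls a bounded set of directly trusted peers, per-user work stays flat as the network grows.

```latex
% Assume each user directly trusts at most d peers (a Dunbar-scale
% constant, say d \approx 150) and re-polls each peer at most once
% every T minutes. The per-user polling rate is then bounded by
\frac{d}{T} \ \text{polls per minute, independent of the total user count } n.
% With d = 150 and T between 5 and 10 minutes, this lands in the
% quoted 15-30 polls-per-minute range.
```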


Thanks very much @ArneBab. This is going to be fun and probably a lot of work but honestly I’ve been thinking about web of trust + social media for a couple years and probably have some biases to shed along the way.

Continuing this effort in Scalable Moderation using a web-of-trust model and Problem: Existing moderation strategies do not scale · Issue #203 · c4social/mastodon · GitHub

I probably mentioned this before. Send Block activities to your followers and/or to the public inbox, instead of or in addition to your site admin. The code on the receiving end could render these into a form with "accept", "reject", "ask me", "block sender instead", and "remember this decision" (e.g. automatically deal with future block activities from this actor in the same manner). Then you've got a scalable, federated web of trust (at least to the limit of your available storage resources, which is a problem inherent in any solution based on blocking). The site admin's interface works exactly the same, but they might also follow some automatons which send Block activities in reaction to network-wide events. And this is all supported by the current ActivityPub specification.
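A sketch of that receiving-end flow, under the same caveat as before: the decision labels come from the post, but the types, storage, and function names are hypothetical.

```typescript
// Incoming Block activities become per-user decisions; "remember this
// decision" persists a rule keyed by the sending actor so future blocks
// from them are handled automatically. Everything here is illustrative.

type Decision = "accept" | "reject" | "ask" | "block-sender";

const rememberedRules = new Map<string, Decision>();

function handleBlock(
  sender: string,        // actor who sent the Block activity
  blockedActor: string,  // actor the activity asks us to block
  askUser: (sender: string, blocked: string) => Decision,
  remember: boolean,
): void {
  const stored = rememberedRules.get(sender);
  const decision = stored ?? askUser(sender, blockedActor);
  if (remember && stored === undefined) rememberedRules.set(sender, decision);

  switch (decision) {
    case "accept":       console.log(`Blocking ${blockedActor}`); break;
    case "reject":       console.log(`Ignoring block from ${sender}`); break;
    case "ask":          /* keep in the review queue for next time */ break;
    case "block-sender": console.log(`Blocking ${sender} instead`); break;
  }
}
```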


Proposed solutions:

- Use an ActivityPub Question activity to bring moderation decisions to the community for discussion: Marius Orcsik: "I spent some time thinking of a democratic proces…" - \m/ Metalhead.club \m/
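A minimal sketch of what such a Question might look like; the wording and URLs are invented, but Question with oneOf options is standard ActivityStreams vocabulary:

```typescript
// A moderation decision posed to the community as a Question activity.
// All names and URLs are illustrative.
const proposal = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Question",
  attributedTo: "https://example.social/users/admin",
  name: "Should we defederate from spam.example?",
  oneOf: [
    { type: "Note", name: "Yes, block the instance" },
    { type: "Note", name: "No, keep federating" },
    { type: "Note", name: "Silence it instead" },
  ],
  endTime: "2021-07-01T00:00:00Z",
};
```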
