Federated Moderation: Towards Delegated Moderation?

Introduction

In Improving fediverse culture and social behavior along the way I introduced two ideas that I think are interesting enough to warrant a separate thread. The brainstorm will start in the Lemmy Fediverse Futures ideation space and be elaborated here (if there’s interest). See Lemmy: https://lemmy.ml/post/60475

Note: There’s ongoing research on Moderation by @Audrey and @robertwgehl.

(The sections below will be duplicated to the Lemmy brainstorming post)


Moderation on the Fediverse

Right now, when people install federated server instances of any kind that are open for others to join, they take on the job of instance admin. When membership grows, they attract additional moderators to help with maintenance and with keeping the community healthy.

I haven’t been an admin or mod myself, but AFAIK the moderation work is mostly manual, based on the administrative UI features offered by a particular app. Metrics are collected about instance operation, and federated messages come in from members (e.g. Flag and Block). There’s a limited set of moderation measures that can be taken (see e.g. Mastodon’s Moderation docs). The toughest action that can be taken is to blocklist an entire domain (here’s the list for mastodon.social, the largest fedi instance).
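As a concrete illustration of what already federates today: reports travel between servers as ActivityStreams Flag activities, and Block is likewise a standard activity type. A minimal sketch of such a report (the URLs are made up):

```python
# A minimal sketch of a federated report as ActivityStreams JSON.
# Flag is part of the AS2 vocabulary; the URLs here are illustrative.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://example.social/users/reporter",
    "object": "https://other.example/users/spammer",   # reported account
    "content": "Spam account, mass-mentioning users",  # reason given
}
```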

The burden of moderating

I think (but please correct me) that in general there are two important areas for improvement from a moderator’s perspective:

  • Moderation is very time-consuming.
  • Moderation is somewhat of a thankless, underappreciated job.

It is time-consuming to monitor what happens on your server, act promptly on moderation requests, answer questions, and stay informed about other instances that may have to be blocked.

It is thankless / underappreciated because your instance members take it for granted, and because you are often the bad guy when acting against someone who misbehaved. Moderation is often seen as unfair, and your decisions are fiercely argued.

For these reasons instances close down, or are under-moderated, and toxic behavior can fester.

(There’s much more to this, but I’ll leave it here for now)

Federating Moderation

From the Mastodon docs:

Moderation in Mastodon is always applied locally, i.e. as seen from the particular server. An admin or moderator on one server cannot affect a user on another server, they can only affect the local copy on their own server.

This is a good, logical model. After all, you only control your own instance(s). But what if the moderation tasks that are bound to the instance got help from ActivityPub federation itself? Copying from this post:

The whole instance discovery / mapping of the Fediverse network can be federated, e.g. (a sketch of such an announcement follows the list):

  • A new server is detected
  • Instance updates internal server list
  • Instance federates (Announce) the new server
  • Other instances update their server list
  • Domain blocklisting / allowlisting actions are announced (with reason)
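A rough sketch of what such a server announcement might look like on the wire. To be clear, nothing like this exists yet; the use of a Service object and these particular fields are assumptions made for this brainstorm:

```python
# Hypothetical sketch: federating the discovery of a new server.
# Announcing a Service with these fields is NOT an existing standard;
# it is brainstorm-level pseudo-vocabulary.
announce_new_server = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "actor": "https://myinstance.example/actor",
    "object": {
        "type": "Service",
        "id": "https://newserver.example/actor",
        "name": "newserver.example",
        "summary": "Newly discovered instance, software: Mastodon",
    },
}
```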

Then, in addition to that, Moderation Incidents can be collected as metrics and federated as soon as they occur (see the sketch after this list):

  • User mutes / blocks, instance blocks (without PII, as it is the metric counts that are relevant)
  • Flags (federated after they are approved by admins, without PII)
  • Incidents may include more details (reason for blocking, topic e.g. ‘misinformation’)
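A sketch of what such a federated incident metric could look like; the fedimod namespace and all of its terms are invented here purely for illustration:

```python
# Hypothetical sketch of a federated Moderation Incident, aggregated
# as counts so no PII about individual users leaves the instance.
# The "fedimod" extension and its terms are made up for illustration.
incident_report = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"fedimod": "https://example.org/ns/fedimod#"},
    ],
    "type": "fedimod:ModerationIncident",
    "actor": "https://myinstance.example/actor",
    "fedimod:targetInstance": "https://problem.example",
    "fedimod:userBlocks": 42,      # counts only, no account identifiers
    "fedimod:flagsApproved": 7,
    "fedimod:topic": "misinformation",
}
```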

So a new instance pops up, and all across the fedi people start blocking its users. There’s probably something wrong with the instance that may warrant blocklisting. The instance admin goes to the server list, sees a large incident count for a particular server, clicks the entry, and gets a more detailed report on the nature of the incidents. They then decide whether or not to block the domain for their own instance.

Delegated moderation

With Federated Moderation in place, it may also be possible to delegate moderation tasks to admins of other instances who are authorized to do so, or even to have ‘roaming moderators’ who are not affiliated with any one instance.

I have described this idea already, but from the perspective of Discourse forums having native federation capabilities. See Discourse: Delegating Community Management. Why would you want to delegate moderation (see the sketch after this list):

  • Temporarily, while looking for new mods and admins.
  • When an instance is under attack by trolls and the like, to ask for extra help.
  • When there is a large influx of new users.
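No such mechanism exists in ActivityPub today, but as a thought experiment a delegation could be expressed as an activity along these lines (the fedimod:Delegate type and its fields are entirely hypothetical):

```python
# Hypothetical sketch: delegating a moderator role to an actor from
# another instance, for a limited time. Invented vocabulary throughout.
delegate_moderation = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"fedimod": "https://example.org/ns/fedimod#"},
    ],
    "type": "fedimod:Delegate",
    "actor": "https://myinstance.example/actor",          # delegating admin
    "object": "https://otherinstance.example/users/mod",  # trusted moderator
    "fedimod:scope": ["resolve_reports", "silence_accounts"],
    "endTime": "2021-06-01T00:00:00Z",                    # temporary by design
}
```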

Moderation-as-a-Service

(Copied and extended from this post)

But this extension to the Moderation model goes further… we can have Moderation-as-a-Service. Experienced moderators and admins gain reputation and trust. They can offer their services and be rewarded for the work they do (e.g. via donations, or otherwise). They may state the timeslots in which they are available, so I could invoke their services to provide 24/7 monitoring of my instance.

The Reputation model of available moderators might even be federated, so I can see the history of their work, satisfaction levels / reviews by others, the amount of time spent / number of Incidents handled, etc.
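Sketching how such a federated reputation record might hang off a moderator’s actor profile; again, every fedimod property here is hypothetical:

```python
# Hypothetical sketch of a moderator's actor object carrying federated
# reputation data; the "fedimod" properties are invented for illustration.
moderator_profile = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"fedimod": "https://example.org/ns/fedimod#"},
    ],
    "type": "Person",
    "id": "https://fedi.example/users/trusted-mod",
    "fedimod:availability": "weekdays 18:00-22:00 UTC",
    "fedimod:incidentsHandled": 314,
    "fedimod:instancesServed": ["https://a.example", "https://b.example"],
    "fedimod:reviews": "https://fedi.example/users/trusted-mod/reviews",
}
```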

All of this could be an intrinsic part of the fabric of the Fediverse, and extend across different application types.

There would be much more visibility for the under-appreciated task of the moderator, and as the model matures more features can be added, e.g. in the form of support for Moderation Policies. Just as with their Code of Conduct, different instances will want different governance models (think democratic voting mechanisms, or Sortition; see also What would a fediverse "governance" body look like?).


Note: I highly recommend also reading the toot thread for this topic, with many people responding with great insights: https://mastodon.social/web/statuses/106059921223198405

2 Likes

The challenge with moderation is that disrupting communication often scales better than individual blocking.

In the Freenet project (where centralized moderation simply is not an option) the answer was to propagate blocking between users in a transparent way. That way blocking disruptors scales better than disrupting does. For more info see: The Freenet Web of Trust keeps communication friendly with actual anonymity | Zwillingssterns Weltenwald | 1w6
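To give a feel for the core idea, here is a toy sketch of transitive trust propagation. This is not Freenet’s actual algorithm (which uses ranks and capacities; see the linked article), just the gist: ratings from people you trust count on your behalf at reduced weight, so one block by a trusted peer protects many.

```python
# Toy sketch of transitive trust propagation, NOT Freenet's actual
# Web of Trust algorithm. The weights are arbitrary assumptions.
DIRECT_WEIGHT = 1.0
INDIRECT_WEIGHT = 0.4  # second-hand ratings count for less

def effective_trust(me: str, target: str, ratings: dict) -> float:
    """ratings[a][b] is the score (-1.0 .. 1.0) that a gave b."""
    mine = ratings.get(me, {})
    if target in mine:  # my own rating always wins
        return DIRECT_WEIGHT * mine[target]
    # No direct rating: average the ratings of peers I trust positively.
    votes = [
        trust_in_peer * ratings[peer][target]
        for peer, trust_in_peer in mine.items()
        if trust_in_peer > 0 and target in ratings.get(peer, {})
    ]
    return INDIRECT_WEIGHT * sum(votes) / len(votes) if votes else 0.0

# alice never rated the spammer, but bob (whom she trusts) blocked them:
ratings = {"alice": {"bob": 0.9}, "bob": {"spammer": -1.0}}
print(effective_trust("alice", "spammer", ratings))  # negative => hide
```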

(copied from the thread at ArneBab: "@tio@social.trom.tf @humanetech@mastodon.social @…" - Die Heimat für Rollenspieler — there is more in that thread, but the current limits for new users prevent me from posting it here)

3 Likes

You provided some great resources. Welcome to SocialHub @ArneBab, nice to have you here. I would also like to quote your other toot:

If you want to try this, there are two steps: First the current state in Freenet: plugin-WebOfTrust/OadSFfF-version1.2-non-print-edition.pdf at 4fdcc56558b4a64facc9b7ab25d695765356ca69 · xor-freenet/plugin-WebOfTrust · GitHub

Then the optimizations needed so this scales to arbitrary size: A deterministic upper bound for the network load of the fully decentralized Freenet spam filter | Zwillingssterns Weltenwald | 1w6

Here’s some data if you want to test algorithms: The Freenet social trust graph extracted from the Web of Trust

And some starting code of a more generic prototype for faster testing: ~arnebab/wispwot - sourcehut hg

CC: @Audrey and @robertwgehl

1 Like

In today’s SocialCG meetup FediHealth by @tuttle was demonstrated. I did not attend, but the topic seems very much related to this brainstorm thread.

1 Like

Hi,

Thanks for the invitation to participate in this thread.

It was a pleasure to be at the last SocialCG meeting. The work being done to define the technical side of the Fediverse is so very important.

I also think that this Fediversity category here on SocialHub, where the non-technical side of the Fediverse is being considered, is just as important, perhaps even more so.

We are Geeks, most of us anyway, and we like tinkering. I’ve been playing with tech since I was a child. Tech is fun. You connect this wire to that resistor and the LED lights. You change one value for another, and something else happens. Cool! How many hours have I spent doing such things? Half a lifetime, and I love it! :nerd_face:

But as the years have gone by, I’ve realized that tech for tech’s sake is not generally a positive thing for humanity. Building technology just because “you can” is not reason enough to build it.

ActivityPub for the sake of ActivityPub and interconnecting people is not reason enough to develop AP. We are already hyper-connected via proprietary and/or open standards, yet we are more atomized than ever. I think we know that AP has to mean more than just a technical challenge, more than just tickling our geekness.

Tech and software are all the rage. Now, after covid, even more so. Everyone is promoting it. Businesses are booming and you can get grants to develop. But while the focus on developing more technology is huge, the focus on building a better society with that technology seems inversely proportional.

Advocating for ‘technology for good’ is hard work (all of us who have been involved in the GNU project know what I mean), but it’s worth it. We’ve come far and matured along the way. The importance we place on our Social contracts is proof of that. So we are in a good place, more prepared than ever. :slight_smile:

So IMHO, as custodians, I think it would be a good idea to define some basic principles to help guide us in nurturing the Fediverse so that it becomes something that improves society instead of being just another media meat market.

About moderation: Just a few years ago when I wanted to hail a taxi, I would stand on the street, raise my arm, and yell Taxi! Now you do all that with an app. Really convenient (maybe), but where is the humanity? Lost. In the same sense I feel that node moderation needs to be based on human contact to keep it, well, human. So perhaps that could be an example of a basic principle: manual intervention before automatic resolution.

The Fediverse is an opportunity of a lifetime!

Pleased to be here.
Chris.

3 Likes

So very true. That is one of the distinguishing factors of the things we do for the fediverse. Completely different incentives and a much better culture right from the start. A unique opportunity. I wrote about this in more detail at the Humane Tech Community in What Makes a Humane Technologist?

Part of the TL;DR I guess is this:

Hypercapitalism stands in direct opposition to Humanity!

And also the insight that:

  • Humane technology depends on Humanity, and to create it one should embrace and practice this virtue in daily life.

  • Technology mirrors the society that creates it, and for it to be humane we must weave a social fabric that fosters humanity.

There are two slogans we recently adopted at SocialHub that apply to the Fediverse:

  • United in Diversity
  • Social Networking Reimagined

Note the ‘social networking’ in the second bullet: The real-world concept is meant here. While our online technology can unite us, it is also just an abstraction of that.

With regards to moderation, I feel that making it a first-class citizen of the Fediverse itself will ensure proper appreciation and visibility of the work involved, and keep us on track with these visionary statements. It emphasizes that it is humans all the way.

Well, when I suggested the example “Manual intervention before automatic resolution” as a guide to protocol and social development, I was talking about something concrete.

“United in Diversity” and “Social Networking Reimagined” are nice words, but they are not specific guidelines.

1 Like

So IMHO, as custodians I think it would be a good idea to define some basic principles to help guide us to nurture the Fediverse so that it becomes something that improves society instead of being just another media meat market.

I do not really believe there is a place for such generalized “custodians” in a decentralized network. I admire the Fediverse’s enormous social and political potential, but it can only do any good for people if it is rigorously people-driven.

As I wrote here, people migrate to the Fediverse carrying the burden of toxic social behaviours. And that includes “us geeks”: admins, devs, moderators and other privileged users. We are not immune to the toxicity of state-supported surveillance capitalism. I may accept delegated moderation as a short-term solution, but I am definitely against accepting it as a long-term rule, ingrained in the core technology.
My strong belief is that we should go the way of social permaculture, “designing beneficial relationships” and creating an environment that nurtures them.

From the political perspective, which is crucial for me, moderation should be initiated by users themselves, and an admin should be solely a tool of the community, executing whatever decisions they take.

I just rediscovered my old piece on this topic. After three years I still believe this perspective is valid, so I am linking it here as my voice in this conversation.

We do not operate in a vacuum

Let us be cold sober about the political situation. We all suffer from surveillance capitalism, and we believe our immediate adversary is the cybermedia corporate complex, vacuuming our data, profiling us, and selling it all to anyone, including state and interstate actors.
But there is no supply without demand. All state-ish bodies, the EU included, are power-hungry and always happy to know more about their subjects – for their own good, safety and convenience. On one side we see growing interest in legal tools like #TERREG (let alone state-level attempts) and “irregular migration control” (I have been close to the topic since 2015, and I consider it probably the dirtiest game the EU has ever played).
On the other side we see growing interest in “helping users keep the quality of the news”, through projects like #EUnomia, which I have only started learning about (but am already quite concerned by).

I believe these two aspects need clear, public and serious discussion within Fediverse circles, before implementing any tools that could support the disempowerment of the people of the fediverse. The internet got tamed once. Perhaps right now we have a chance to rewild a part of it, or at least not let the powers that be take control again. I would not be happy to see this opportunity missed.

2 Likes

Sure, and you are right. But I was just keeping things introductory, following your introduction and its mention of ‘basic principles’ :blush:
In summary I think we fully agree on the role of technology as merely supportive and adding value.

In follow-up to you, @pskulski, and also to discussion on the fedi, like this post by Bob Mottram where I reacted:

I am not seeking technological solutions for the sake of technology in this brainstorm. Forget the tech; rather, think of the tech as an abstraction of real-world social relationships. We have modeled the interaction of Fedizens (in the microblogging domain), and we also know that a strong point is the way the fedi is moderated on an instance-by-instance basis by real humans. This is a distinguishing factor with respect to traditional social media.

We also know that the current Moderation toolset is not enough. Toxicity also creeps onto the fedi, instances close down, and people move back to Twitter et al. See also my breakdown of the “Fediverse is Crumbling” article.

Better moderation is crucial. And if in the future we were to move towards a more task-oriented fediverse - where application and instance boundaries are ‘pushed’ to a lower layer of the tech stack - its nature would fundamentally change.

Yet Moderation is not part of the Fediverse at all. It exists in the shadows, separately on a different plane, out of sight for most fedizens who depend on it. People take it for granted too: “I just move to this instance and everything will be provided to me”. And when moderation occurs, the moderators are seen as an abstract entity, not as humans doing a thankless job to keep an instance healthy… “InstanceXYZ (them) censored / cancelled me (us)”.

Delegated Moderation provides transparency and openness: “I am a fedizen and a moderator who helps keeps fedi safe and healthy”. That’s it, visibility and recognition of their work. In no way mods will become god-like creatures moderating all-over-the-place. There’s the same control that exists now.

Fantastic article. And once again we see concepts popping up that are under ongoing discussion on this forum: Community and Governance.

I won’t go into all the ins and outs, because this post is already long. Just some considerations:

  • Federated moderation: Brings moderation activity out into the open (like an audit log), so that the entire community and beyond can be informed about the ways it is taking place. And it can help avoid filter bubbles caused by ‘Shadow Moderation’.

  • Delegated moderation: Brings moderation, an important responsibility for any community, out into the open, and facilitates open and rotating roles for doing this work: distribution of authority based on democratic principles.

Both these ‘features’ of the future Fediverse, built on top of the Community and Governance domains, are very much in line with the guidelines set out in The Tyranny of Structurelessness.

Government involvement

1,000% agree.

I want to address this separately. It is not directly on-topic for this thread, but a much broader subject area. A broad discussion must indeed be started. There’ll be many topics, meetups and probably projects needed to steer this well.

At SocialHub we created the #meeting:fediverse-policy Special Interest Group, but haven’t done much with it yet. I invite anyone interested in the discussion to join this group, so we can set things in motion.

Are you a mod on an instance? It’s good to get lived experience. You can come help out on https://activism.openworlds.info - there are regular right-wing attacks/spam on this one, and we also have a growing nutty/senses issue that is a challenge to know what to do with. Or for PeerTube there is http://visionon.tv, which has its own issues due to the way PeerTube federates; we do not have open posting of video because of this.

Working through lived experience.

I am starting to see what motivates you on this one: you are outlining the 4th open - open process.

In Mastodon there is an admin tickbox to publish the federation moderation outcomes on the instance; I think it’s unticked by default. There is also an audit log that is private to mods and admins; it would be nice to have a tickbox to publish that too.

In PeerTube there is an admin interface to see the list of mod rules, but nothing public; I think there might be a plugin that allows you to make this public.

visionOntv has only basic info.

PeerTube, I think, is missing the audit log, and has the info scattered around the admin backend.

Yes, indeed. If you want to take federated moderation a step further, then “publishing” can mean more than a toot. It may use an agreed-upon AP extension vocabulary, so you can send more fine-grained metrics and information and process them in multiple different ways. Also, these actions could then be federated to other instances, so they can learn what is happening elsewhere on the fedi (a transparency thing, making moderation a first-class citizen).
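For instance, instead of a free-text toot, a domain block could be published as a structured, publicly addressed activity. Block is a standard AS2 type; the fedimod:reason term is a made-up extension:

```python
# Sketch: publishing a moderation outcome as structured data rather
# than free text. "fedimod:reason" is an invented extension term.
published_action = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"fedimod": "https://example.org/ns/fedimod#"},
    ],
    "type": "Block",
    "actor": "https://myinstance.example/actor",
    "object": "https://problem.example",  # domain-level block
    "fedimod:reason": "spam, repeated harassment",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}
```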

Oww, I :heart: love the discussion that @macgirvin and @weex are having on Moderation from this comment onwards: Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users - #8 by macgirvin

There’s a lot in the clear description that Mike is giving that warrants further elaboration and documentation for this Federated Moderation brainstorm. Adding this comment as the TODO for that :slight_smile:

Note: Cross-referencing to the topic Activities for Federation Application? where @nutomic refers to 2 #software:lemmy open issues for adding allowlist-based federation between instances, where a request to federate must be sent between them.

I added a comment to the Lemmy issue:

[…] I’d like to mention a brainstorming topic I created earlier, namely Federated Moderation: Towards Delegated Moderation? (also cross-posted to Lemmy):

Federated Moderation deals with making common admin and moderation tasks first-class citizens of the Fediverse.

And not app-specific, whenever possible. In addition to sending requests to federate between instances, the Federated Moderation domain (a design pattern: AP vocab extension + rules) would also include:

  • Announcing instances to the fediverse, so there’s no need for manual maintenance of app-specific lists and instance-collection sites that aren’t an integral part of the fedi itself.
  • Discovery of instances and their metadata as described above (app, federation type, CoC rules, description, etc.).

Note: The mechanism for announcement and discovery closely resembles how the Murmurations protocol by @olisb, @geoffturk et al. works. This protocol allows you to define JSON-LD profiles to fill in as templates, e.g. a “Federation Profile”, and then publish them from an instance (in the case of Murmurations, to a centralized index server that other apps/servers can subscribe to). PS: I created a topic to federate the Murmurations protocol, and MurmurationsProtocol#30 in their tracker.
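To illustrate the resemblance, here is what such a Murmurations-style “Federation Profile” could look like. The schema name and fields are hypothetical, not an existing Murmurations schema:

```python
# Sketch of a hypothetical "Federation Profile" an instance could
# publish for discovery, in the Murmurations template style.
federation_profile = {
    "linked_schemas": ["federation_profile-v0.1.0"],  # hypothetical schema
    "name": "myinstance.example",
    "software": "lemmy",
    "federation_type": "allowlist",
    "code_of_conduct": "https://myinstance.example/coc",
    "description": "A small community instance about permaculture.",
}
```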

GoToSocial has an open issue in their tracker that is related to this topic.

Update:

An interesting comment added by @not-not-the-imp:

Someone wrote a paper on this, and some software that implements something like federated moderation.
The short version: alexander cobleigh - cblgh.org
The full paper (4MB, 100 pages): https://cblgh.org/dl/trustnet-cblgh.pdf
The software implementing the ideas: GitHub - cblgh/trustnet: a flexible and distributed system for deriving, and interacting with, computational trust

cblgh’s trustnet would be a contender, if not for the final mechanism, at least to learn and refine GTS acceptance criteria for a federated moderation system.

Also interesting for @weex in context of Scalable Moderation using a web-of-trust model.

We’ve added import and export of blocklists to Ecko (Magic Stone’s Masto fork). These are just CSV files: the imported one can be just domains, one per line, while the exported form has more data about what kind of block is applied, whether the domain is obfuscated on the about page, etc.
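For illustration, the two shapes might look like this; the exported column names follow Mastodon’s domain-block export format, which Ecko inherits as a fork, but treat the exact headers as an assumption:

```python
import csv, io

# Import format: just domains, one per line.
imported = "problem.example\nspam.example\n"
domains = imported.splitlines()

# Export format: richer rows (headers assumed from Mastodon's export).
exported = io.StringIO(
    "#domain,#severity,#public_comment,#obfuscate\n"
    "problem.example,suspend,spam waves,false\n"
)
for row in csv.DictReader(exported):
    print(row["#domain"], row["#severity"])  # -> problem.example suspend
```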

We also put in a CLI command to grab these blocklists from a remote URL, so that it can be automated through cron, for example.

Blocklist automation was asked for, so we put it in and look forward to seeing how it works. It needs, and is getting, some testing, but the more the better.

I think a better way will come through the thread you linked, @aschrijver, about Scalable Moderation. To that end I implemented a prototype of the web of trust algorithm from Freenet, and we’re going to try to connect that to Ecko soon. I really hope we get some usage of web of trust, because it can work at the granularity of individual accounts and provides a competing idea to the blocklist/allowlist one.

1 Like

Just posting so I won’t forget what I found in this toot; maybe @mariusor will follow up in another topic later, as this is still a draft:

Moderation on top of the ActivityPub vocabulary

@mariusor posted:

I managed to get working a workflow for moderators and moderation using vanilla #ActivityPub client to server. I need to put it down on paper before I forget it

[and followed in another reply]

I won’t pretend that this is a good solution. But I wanted it to be “a solution” that doesn’t require any additions to the ActivityPub vocabulary. With extensions the “tags” can receive some custom, meaningful, types, and it would still mostly work.
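I don’t know the details of the draft, but as a rough illustration of the “vanilla vocabulary plus tags” idea (not @mariusor’s actual design), a moderation action could be carried by a plain Flag activity whose tags encode the semantics:

```python
# Rough illustration only: a vanilla Flag activity where plain tags
# carry the moderation semantics, so no vocabulary extension is needed.
flag_with_tags = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://forum.example/users/mod",
    "object": "https://forum.example/objects/123",  # reported content
    "tag": [
        {"type": "Object", "name": "#spam"},      # with an extension these
        {"type": "Object", "name": "#resolved"},  # could get custom types
    ],
}
```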

Hey @aschrijver , thank you for bringing this to the attention of more people.

I’m here if anyone has questions. After I get through a couple iterations on this solution, I’m planning to formalize that wiki page into a FEP so maybe more projects can try it out.

2 Likes

Curious where the community stands (as of end 2022) on the technical issue of orchestrating / supporting fediverse moderation, specifically how to support it as a first-class citizen using, e.g., a combination of automated (algorithmic) tools, “augmentation” of human moderator capabilities, and transparency for participants as to what is happening moderation-wise.

The motivation for the question is the renewed interest in the fediverse (due to the well-known events on adtech-driven platforms), but the spotlight is not always a blessing. As the number of users and the amount of attention increase, the ability to offer an alternative experience will be challenged, and bad actors will probe for every weak link in the architecture. ActivityPub per se does not seem to address this issue at all, yet it’s arguably a critical piece.

Curious what you think and what initiatives might be taking shape around this…

2 Likes