Improving fediverse culture and social behavior

@Sebastian mentioned an article in another topic, that led me to create this topic…

The article contains a serious distress call, and raises problems to discuss and (hopefully) address. The article also has a) some rude language, b) personal accusations, mentioning individuals by name, and c) unwarranted generalizations. Setting that aside, here’s a summary of what I read in the article:

Summary: Fediverse is crumbling

Problem statement:

  • Fediverse is inherently unsafe for many of its fedizens (even more so than Twitter was).
  • As a consequence there is a big outflow of fedizens (‘fediverse is crumbling’)

Root causes:

  1. anyone can spin up instances and start federating
    • controversial instances often go undetected for a long time
  2. trolling & harassment (esp. faux-woke racial-based)
  3. cancel culture (exercised via trolling & harassment)

Other factors:

Related to the technology and software…

  1. Unhelpful stance from some project teams / developers (to fix things?)
  2. ActivityPub protocol is bare-bones, developing microblogging alternatives is hard
  3. Mastodon technology choices / tech debt hamper fediverse evolution [loosely interpreted]

Related to community and culture…

  1. Fediverse has no meaningful distinct culture (other than ‘Twitter expanded universe’)
  2. The mindset is toxic (e.g. related to politics and consistent use of content warnings, CW’s)

Proposed solutions:

  1. (Uncertainty about whether the problems can be fixed at all, ‘the gap is too wide’)
  2. A vastly better system for CW’s (but not detailed in the article)
  3. (Maybe an alternative to the fediverse that addresses the problems)


Alright, going from this summary, some of my thoughts and opinions…

  • Though I haven’t experienced any of it myself, I recognize there are structural issues.

  • All root causes are related to social phenomena that can be mitigated by:

    • Proper moderation procedures and more powerful moderation tools (see also here and here)
    • Healthy culture and active community engagement
    • Technology improvement / extensions / evolution
  • All apps being FOSS means anyone can spin up instances, commercial or otherwise, and federate

    • I am fully fine with that (software freedoms), moderation practices may lead to speedier blocking
    • If you want to restrict this, you enter the realm of ethical licenses and legal action to enforce them
  • Software teams / devs might be unfriendly, which is unfortunate, but it is what it is

    • You can try to improve project culture, or fork the project, or use an alternative, or leave fedi indeed
    • OSS maintainers are often harassed themselves, dealing with people feeling entitled to see changes
  • I am not at all worried about Mastodon project direction

    • Within their project they are free to choose whatever in terms of features and roadmap
    • PROVIDED THAT here at SocialHub and SocialCG we set out the technology standards for fedi
    • (First and foremost we should be thankful to Mastodon in how it managed to popularize fedi)
    • (Mastodon’s compliance level is also their choice, we can encourage compliance, but that’s it)
    • (Similarly not at all worried about usage metrics of individual instances. Overall fedi health counts)


All in all I see a couple of tracks by which we can improve the situation:

  1. Fostering open, flourishing fedi culture (advocacy)
  2. Ease of development, ease of use (onboarding)
  3. Improved moderation practices and tools (control)
  4. Improved integration, interoperability (technology)
  5. Improved coordination, collaboration (community)

I’m not sure I share this view of an absence of local culture on the fediverse, but I guess that will vary from person to person.
I do think that onboarding is the most important step, but apart from having more moderators, or admins getting more time for it, this seems hard to change… maybe promoting sign-up via request only could help?

In the meantime, here are some ideas about technico-cultural ways we could handle federation a little differently and thus lower the load on moderators (and thus hopefully make time for onboarding):

  • use server discovery and invitations to make Federation opt-in:
    • every time a new server is encountered due to boosts or an invite from a user on that instance, the admin/mods get a notification
    • it’s then up to them how they decide to federate or not with the other instance
  • announce changes to federation
    • promote openness about (de)federation
    • set up technical tools so that it’s automatically broadcast to all users on the instance (e.g. using Mastodon announcements)
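As a rough sketch of what this opt-in flow could look like on an instance (all names here are hypothetical, for illustration only, not an existing Mastodon or ActivityPub API):

```python
from dataclasses import dataclass, field

@dataclass
class FederationRegistry:
    """Tracks which remote instances this server federates with (opt-in)."""
    approved: set = field(default_factory=set)
    pending: set = field(default_factory=set)
    blocked: set = field(default_factory=set)

    def on_new_server_seen(self, domain: str, notify_admins):
        # A boost or invite referenced an unknown instance: queue it for
        # admin/mod review instead of federating automatically.
        if domain in self.approved or domain in self.blocked or domain in self.pending:
            return
        self.pending.add(domain)
        notify_admins(f"New instance encountered: {domain} - approve or block?")

    def decide(self, domain: str, approve: bool):
        # The admins/mods make the final call on whether to federate.
        self.pending.discard(domain)
        (self.approved if approve else self.blocked).add(domain)

    def may_federate(self, domain: str) -> bool:
        return domain in self.approved
```

The key design point is simply that nothing reaches `approved` without an explicit admin decision; the default on first contact is "pending", not "federated".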

In particular, I think opt-in federation is crucial for platforms such as PeerTube, where combating disinformation is time- and energy-consuming; due to automatic federation, disinformation is automatically shared by unknowing instances until they notice and have time to act on it.
As far as I know (being neither a PeerTube admin nor dev), it is already possible to manually approve federation in PeerTube, so all that’s left is to make it part of the culture, I guess.


Well, no matter what is most important, I encourage everyone to watch this video, a keynote from ActivityPub Conf 2020:

My major takeaways: smaller instances (limiting), and onboarding here means “sign up via request only”.
With my software I want to build a fediverse of trust, and I have tried to describe some first ideas.

For the past 2 weeks I was in full-time contact with German and EU politicians and German experts (unfortunately people from other continents do not know exactly who they are).
The general demand for the EU is to build a European, EU-funded social network, as described to the German parliament by Anke Domscheit-Berg…

Another demand is “Use and support of open-source interconnected social media platforms by the Commission” by MEP Patrick Breyer


use server discovery

There might be a strictly technical way, and we will brainstorm about it in 3 hours:

“so when you as an admin were peering with another instance you are showing your set of values, and if that other instance believes that they are sharing those values, that instance can peer with you”

With the rest I agree.

Apart from advocating for improving Mastodon and raising the issues at a political level (as described before), in the last 2 weeks 3 scientists from different fields approached me, privately about

  1. questions about intransparent content moderation in Mastodon
  2. hosting instances for people unhappy with content moderation (from an AWS grant…)

and publicly about

  3. a survey about content moderation in Mastodon

Because reactions to the mentioned article currently range from
“too few sources named” to the opposite, about not “mentioning individuals by name”,
I will only sum up my own ideas here on what we as users can do to improve overall engagement.

Let us think about encouraging formats

The /about pages could have two video links and an onboarding hint:

  • What is the Fediverse? [5 min. video, t.b.d.]
  • What is Mastodon?
  • encourage a first toot with #introduction [or the equivalent localised intro hashtag]

Posting mp3s is nice on Mastodon (or PeerTube).
In Germany we have a nice format as a refresher or for meta-communication:
#DieStimmenImFediversum, invented by KaptainRio / DE intro.
Proposal: do the same and also introduce #TheVoicesOfFediverse and versions in other languages…

Weekly recurring hashtags
Maybe the best way to improve engagement, because people keen on the topic will be active at least one day a week.
A friend of mine runs #fridayquiz and it creates engagement (and also encourages/teaches fact-checking).

The following is not directly solvable, just brainstorming.
I like the weekly summaries of e.g. Discourse.
Google+ did this in the form of posts from the people you interacted with most, plus the most relevant topics, and it prompted people to come back and comment.

This idea is borrowed from HopIn, the world’s leading (proprietary) “all-in-one virtual event platform”
[I have not checked whether they borrowed it from open source]:
Let’s say you have a networking button which connects you to random people.
I was so surprised by it at digitaleurope Masters of Digital; it is like a lottery.
It was perfect: you meet people across the whole democratic spectrum, some very clever, and if you “win” you have a new fellow. It improves connections (follow by random discovery, not only algorithm suggestions).

How can we do both: send people to the streets (well, theoretically; I know about the pandemic) and make them frequently come back to Mastodon with content sharing?

Hopefully more ideas soon after today. It has been live for 5 minutes.


Improving moderation practices and tools

I really like your ideas, @tfardet. Now, tbh, I don’t know anything about how instance admins currently go about their business and which automatic tools they have at their disposal versus manual procedures. And I wonder whether more documentation and discussion at SocialHub are in order. Maybe, just like the Guide for ActivityPub users and the Guide for new ActivityPub implementers, we need a Guide for ActivityPub Admins.

Federated Moderation

The ideas above highlight features to be offered to end-users on an instance. I wonder (and this may already exist, idk) whether the whole instance discovery / mapping of the network should itself be federated. E.g.:

  • A new server is detected
  • Instance updates internal server list
  • Instance federates (Announce) the new server
  • Other instances update their server list
  • Blacklisting / whitelisting is announced (with reason)
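A minimal sketch of how the receiving side of such federated discovery could work (the activity shapes and field names are invented for illustration; they are not part of the ActivityPub spec):

```python
def handle_discovery_announce(activity: dict, server_list: dict) -> dict:
    """Update the local server list from a federated discovery/listing activity.

    `activity` is a hypothetical payload such as:
      {"type": "ServerSeen", "domain": "new.example", "by": "peer.example"}
      {"type": "ServerBlocked", "domain": "bad.example", "by": "peer.example",
       "reason": "spam"}
    """
    domain = activity["domain"]
    entry = server_list.setdefault(domain, {"seen_by": set(), "blocked_by": {}})
    if activity["type"] == "ServerSeen":
        # Another instance announced it encountered this server.
        entry["seen_by"].add(activity["by"])
    elif activity["type"] == "ServerBlocked":
        # Record who blocked it and why, so admins can review the reasons.
        entry["blocked_by"][activity["by"]] = activity.get("reason", "unspecified")
    return entry
```

The point of including a reason with each blacklist announcement is that admins can weigh the judgement of peers they trust, rather than blindly copying block lists.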

Then in addition to that Moderation Incidents can be collected as metrics and federated as soon as they occur:

  • User mutes / blocks, instance blocks (without PII, as it is the metric counts that are relevant)
  • Flags (federated after they are approved by admins, without PII)
  • Incidents may include more details (reason for blocking, topic e.g. ‘misinformation’)

So suppose a new instance pops up, and all across fedi people start blocking its users. There’s probably something wrong with the instance, which may warrant a blacklisting. An instance admin goes to the server list, sees a large incident count for a particular server, clicks the entry, and gets a more detailed report on the nature of those incidents. Then they make the decision whether to blacklist or not.
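That admin workflow could be driven by a simple aggregation over the federated incident metrics; a sketch (the incident shape and the threshold are made up for illustration):

```python
from collections import Counter

def flag_suspect_servers(incidents: list, threshold: int = 50) -> dict:
    """Given federated moderation incidents (no PII, just counts),
    return the servers whose total incident count warrants admin review.

    Each incident is a dict like:
      {"server": "bad.example", "kind": "user_block"}
    """
    totals = Counter(incident["server"] for incident in incidents)
    return {server: count for server, count in totals.items() if count >= threshold}
```

A real implementation would probably weight incident kinds differently (a federated instance block counts for more than a single user mute), but the aggregate-then-review shape stays the same.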

Pixelfed new moderation tools

I just saw @dansup Pixelfed’s toot announcement of new moderation tools:

Our safety and moderation tools like disabled comments, spam detection and violation warnings are unique in the fediverse. We have been working on a new generation of safety and moderation tools that will be rolling out soon!

Here are the screenshots that came with the toot:

I like the idea of instance discovery and blocks being federated:

  • I think it makes sense that admins would want to know which instances other people they follow (and therefore at least somewhat trust) decided to (de)federate with
  • it hopefully also improves the visibility of bad actors (I recently discovered an openly nazi instance that my instance was unknowingly federating with, whereas a neighbouring instance had been blocking them for some time…)

Regarding PeerTube, I can add some information after a bit of research: federation on PeerTube is already opt-in!
Unfortunately it seems that some (many?) admins use automated tools to auto-federate with the other instances listed on the main server list…

I also think that automatic discovery coupled with opt-in via manual acceptance by the admin could be a nice way to introduce more user participation: one could imagine the admin running a poll to ask people whether they want to (de)federate with given instances. Of course this requires some more thought, and it might not always be reasonable/doable for people to vote on all of them, but in specific cases it could be useful.



Improving moderation practices and tools

The following idea is a brainstorm; I don’t know if it is feasible at all.

Delegated moderation

With Federated Moderation in place, it may also be possible to delegate moderation tasks to admins of other instances who are authorized to do so.

I have described this idea already, but from the perspective of Discourse forums having native federation capabilities. See Discourse: Delegating Community Management. Why would you want to delegate moderation:

  • Temporarily, while looking for new mods and admins.
  • When an instance is under attack by trolls and the like
  • When there is a large influx of new users

But this extension of the Moderation model goes further… we can have Moderation-as-a-Service. Experienced moderators and admins gain reputation and trust. They can offer their services and be rewarded for the work they do (e.g. via donations, or otherwise). They may state the timeslots in which they are available, so I could invoke their services to provide 24/7 monitoring of my instance.

The Reputation model of the available moderators might even be federated itself, so I can see the history of their work, satisfaction levels / reviews by others, the amount of time spent / number of Incidents handled, etc.
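Such a federated record for a moderator offering their services might carry fields like these (the field names and scoring are purely illustrative assumptions, not an existing schema):

```python
from dataclasses import dataclass, field

@dataclass
class ModeratorProfile:
    """Federated record of a moderator offering Moderation-as-a-Service."""
    handle: str                  # e.g. "@mod@a.example"
    available_slots: list        # e.g. ["Mon 18:00-22:00 UTC"]
    incidents_handled: int = 0   # total incidents resolved across delegations
    reviews: list = field(default_factory=list)  # scores from past delegations, 1-5

    def satisfaction(self) -> float:
        """Average review score, or 0.0 if there are no reviews yet."""
        return sum(self.reviews) / len(self.reviews) if self.reviews else 0.0
```

An instance looking for temporary help could then filter the federated profiles by available timeslots and sort by satisfaction and incidents handled before delegating.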