Experiences and aspirations about content moderation in the fediverse

Hi everyone,

I’m starting a research project on content moderation in the fediverse. One aim of this work is to help administrators define an ethical framework for their instance through a set of values and norms, translate that framework into rules as well as technical parameters and features on their instance, and provide them with the most appropriate moderation tools to enforce those rules.

My first step is to better understand how moderation is currently conducted in the fediverse: how you, as an administrator, handle problematic cases (be it spam, aggressiveness, harassment, or potentially disturbing or illegal content), which moderation features you use most often, and which tools are currently lacking that would have helped in situations you’ve encountered.

If you’d like to share one or more stories about your moderation experience (and, optionally, describe how an ideal tool or instance configuration could have helped you deal with the situation more easily), or if you have resources that could help me better understand the pros and cons of the decentralized, federated model for content moderation, your contribution would be a great help.

If you’re interested in the progress of this work, would like to take part in future experiments, or want to collaborate more closely because you’re already working on similar questions, just let me know in your comment. I’ll do my best to keep you informed and come back to you soon with more specific questions, hypotheses, and/or prototypes to test :slight_smile:

Thank you!

I’ve been handling content moderation for cybre.space as part of a two-person team since 2017 (and a three-person team since earlier this year), so I’ve seen a lot of different Mastodon instances and moderation approaches. I don’t have time to type up a long response right this minute, but definitely feel free to let me know if you have any specific questions.