I’m starting a research project about content moderation in the fediverse. One aim of this work is to help administrators define an ethical framework for their instance through a set of values and norms, translate that framework into rules, parameters, and technical features on their instance, and provide them with the most appropriate moderation tools to enforce those rules.
My first step is to better understand how moderation is conducted in the fediverse: how you, as an administrator, handle problematic cases (spam, aggressiveness, harassment, or potentially disturbing or illegal content), which moderation features you use most often, and which tools are currently lacking that would have helped in situations you’ve encountered.
If you’d like to share one or more stories about your moderation experience (and, optionally, describe how an ideal tool or instance configuration could have made the situation easier to handle), or if you have resources that could help me better understand the pros and cons of the decentralized, federated model with regard to content moderation, your contribution would be a great help.
If you’re interested in the progress of this work, would like to take part in future experiments, or want to collaborate more closely because you’re already working on similar questions, just let me know in your comment. I’ll do my best to keep you informed and to come back to you shortly with more specific questions, hypotheses, and/or prototypes to test.