About Child Safety on Federated Social Media

I think the DSA is going to make this a real question for instances, with reporting requirements around moderation decisions in the EU as outlined by Daphne Keller here. Even if an instance isn’t directly held to those requirements, if it wants to federate with instances that are (whether commercial/mega or noncommercial/cozy), it will basically have to honor them by proxy, or somehow filter out the half of its content that poses a risk to its federation counterparties in the EU. As Thiel’s research linked above points out, instance operators are responsible for CSAM “stored on their systems,” even if it only ended up there because a local user follows someone on another instance that pays less attention to these issues. To some degree, the laws against this stuff have always applied as much to the fediverse as to the rest of the non-commercial web, just selectively and under-enforced as a courtesy. That courtesy might not hold now that enforcement is becoming automated and API-based rather than discretionary…
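To make the “stored on their systems” point concrete: when a local user follows a remote account, their instance ends up caching that account’s media, so any screening has to run on inbound federated content, not just local uploads. A minimal Python sketch, purely illustrative (the hash set, function names, and use of SHA-256 are my own stand-ins; real deployments use perceptual hash matching like PhotoDNA or PDQ through a vetted provider, plus human review and mandated reporting):

```python
import hashlib

# Hypothetical set of hex digests supplied by a hash-sharing programme.
KNOWN_BAD_HASHES: set[str] = set()


def should_cache(media_bytes: bytes) -> bool:
    """Return False if the media matches a known-bad hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest not in KNOWN_BAD_HASHES


def ingest_remote_media(media_bytes: bytes, store, report) -> None:
    """Cache inbound federated media only if it passes screening.

    `store` and `report` are placeholders for the instance's own media
    storage and its reporting/quarantine path.
    """
    if should_cache(media_bytes):
        store(media_bytes)
    else:
        report(media_bytes)  # report and quarantine; never re-serve it
```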

I am the first to admit that child safety scare tactics (particularly in the US and UK) are currently being rampantly weaponized by the right, but as Federico’s link above points out, there are still evidence-based ways of scoping this to actual problems rather than stupid culture wars. I think the dissemination of CSAM is an actual problem for any open system; it incentivizes everyone to federate less, and then we’ll end up with a few different fediverses: the non-commercial instances that can trust (and audit) each other’s moderation systems, partially federated to the commercial mega-instances that also have tooling to reciprocally enforce moderation minimums and reporting requirements, and a third federation that the first two can’t afford to risk federating with. I’m not counting the Japanese porn-iverse or the naziverse, so I guess there’s five?
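To spell out what that tiering might look like in practice, here’s a rough sketch (again just illustrative: the tier names and example domains are hypothetical, though Mastodon-style software already expresses something similar through per-domain limit/suspend lists):

```python
from enum import Enum


class Tier(Enum):
    TRUSTED = "audited moderation, full federation"
    LIMITED = "moderation minimums agreed, media held for review"
    BLOCKED = "too risky (legally or otherwise) to federate with"


# Hypothetical assignments an instance admin might maintain.
FEDERATION_POLICY: dict[str, Tier] = {
    "cozy.example": Tier.TRUSTED,
    "megacorp.example": Tier.LIMITED,
    "sketchy.example": Tier.BLOCKED,
}


def accept_activity(origin_domain: str) -> bool:
    """Accept inbound activities only from domains we haven't blocked.

    Unknown domains default to LIMITED, i.e. treated cautiously rather
    than trusted outright.
    """
    tier = FEDERATION_POLICY.get(origin_domain, Tier.LIMITED)
    return tier is not Tier.BLOCKED
```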
