+1 to wiki-fying / pulling in those items into here at least.
I also highly agree that Account Portability / Nomadic Identity / Export feature (and I know there’s some overlap with all three) would be incredibly important in the Fediverse’s relationship with Meta. Possibly the most important feature, honestly.
I propose that we approach this from two directions:
Lobby Threads to commit to providing Export functionality (specifically, exporting the profile, social graph, and posts), as well as supporting any relevant redirects (“this user has moved over there”…).
Focus on getting Export/Import/portability working on as many implementations in the Fediverse as possible, as a leverage for the above (and to set a good example).
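For the redirect part of this, there is already prior art in Mastodon-style account migration: the old account publishes a `Move` activity, and the new account lists the old one in `alsoKnownAs` so the move can be verified. A minimal sketch of those two documents (actor URLs here are purely illustrative):

```python
# Sketch of Mastodon-style account migration signaling.
# The actor URLs below are hypothetical examples, not real accounts.

def build_move_activity(old_actor: str, new_actor: str) -> dict:
    """Announce that old_actor has moved; followers should follow new_actor."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Move",
        "actor": old_actor,
        "object": old_actor,   # the thing being moved is the actor itself
        "target": new_actor,   # where the user has moved to
    }

def new_actor_alias_fragment(old_actor: str) -> dict:
    """The new profile must list the old actor in alsoKnownAs to prove the link."""
    return {"alsoKnownAs": [old_actor]}

move = build_move_activity(
    "https://threads.net/users/alice",       # hypothetical old account
    "https://example.social/users/alice",    # hypothetical new account
)
print(move["type"], "->", move["target"])
```

The important design point is that both sides must agree: the `Move` from the old server is only honored if the new actor aliases back to the old one, which is what makes the redirect resistant to hijacking.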
Content licensing seems relevant. If not a hard protection, it’s at the very least a form of activism by giving me a way to explicitly say my content is not allowed for use in data farms, closed AI models, etc.
I think a good analogy for “reply control” is quote posts. It’s easy to implement quote posts in a client. So people have done it.
Comment control implemented per-app is kind of “nasty”, as it wouldn’t work consistently across different apps.
People seem to lack the imagination necessary to just use a “#DonTComment” hashtag in the base post to indicate that comments should not be displayed.
This would work nicely across apps and not require any slow Mastodon changes.
I’m probably an awful person for suggesting the quick fix. The ActivityPub tech stack is just not ready for hard solutions.
Also note that I think #NoReply is superior to #DonTComment. This is a dirty solution in the sense that it’s a “hack” built on another “hack” (hashtags themselves), but it’s one that can be implemented without creating much technical debt, and it should be removed once better solutions are available.
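The client-side convention above is trivially cheap to implement, which is the whole point. A sketch of what a client check might look like, assuming the post is an ActivityStreams object with a `tag` array (the tag names are the ones proposed above; everything else is illustrative):

```python
# Sketch of the hashtag-based reply-control convention discussed above.
# If the base post carries #NoReply (or the older #DonTComment spelling),
# the client hides its reply UI. Purely client-side; no protocol changes.

NO_REPLY_TAGS = {"noreply", "dontcomment"}

def replies_disabled(post: dict) -> bool:
    """Return True if the post's hashtags ask clients not to offer replies."""
    for tag in post.get("tag", []):
        if tag.get("type") == "Hashtag":
            name = tag.get("name", "").lstrip("#").lower()
            if name in NO_REPLY_TAGS:
                return True
    return False

post = {
    "type": "Note",
    "content": "Announcement, boosts welcome. #NoReply",
    "tag": [{"type": "Hashtag", "name": "#NoReply"}],
}
print(replies_disabled(post))  # True
```

Note this is advisory only: non-cooperating clients will still show a reply box, which is exactly the “hack of a hack” trade-off described above.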
Exactly: to get focus on this collective set of potential additions - which all solve for one very big use case: running a tighter ship for when Threads federates in with what I assume will be 300 million users or more by that point - many of whom will be moderation issues. How to prevent abuse, how to empower users to fortify their own accounts, and how to give admins a few more tools in the toolbox.
I’m no moderator or AP developer but I do have a (not currently specified, to my knowledge) suggestion I’d like to throw out there as it relates to moderation:
The ability to limit new users from a remote instance could allow certain Threads users to interact with an instance, while automatically keeping out everyone else. It could also be useful during remote instance spam attacks, to allow existing people to continue interacting as normal while keeping out the wave of spam accounts.
I don’t think this should be a “deny” but rather an “ignore,” so that once the spam attack is over, legitimate users who registered during the attack can interact once the limitation is removed.
Admins should be able to manually approve new users even if the setting is on.
In Mastodon, I think this would be best done as a check-box next to the existing “limit/mark media” options, so that you can combine it with “silence” for some servers and “none” for others (to allow existing users to still appear in public timelines).
This might be useful for firehose bridges, especially if some pattern matching can be added (i.e. by domain suffix), like mostr.pub and the future Bluesky bridge(s).
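To make the “ignore, don’t deny” semantics concrete, here is a rough sketch of the check an instance could run when a remote account interacts. All domains, cutoff dates, and approvals below are hypothetical placeholders; nothing here is an existing Mastodon feature:

```python
# Sketch of "limit new users from a remote instance": accounts from a
# limited domain (exact match or suffix, e.g. a firehose bridge) are
# ignored if first seen after the limitation's cutoff, unless manually
# approved. Ignoring (not denying) means lifting the limit restores them.

from datetime import datetime, timezone

# Hypothetical moderation state an admin might configure:
LIMITED_DOMAINS = {"threads.net": datetime(2024, 7, 1, tzinfo=timezone.utc)}
LIMITED_SUFFIXES = {".mostr.pub": datetime(2024, 7, 1, tzinfo=timezone.utc)}
MANUALLY_APPROVED = {"alice@threads.net"}

def limit_cutoff(domain: str):
    """Return the cutoff datetime if the domain is limited, else None."""
    if domain in LIMITED_DOMAINS:
        return LIMITED_DOMAINS[domain]
    for suffix, cutoff in LIMITED_SUFFIXES.items():
        if domain.endswith(suffix):
            return cutoff
    return None

def should_ignore(acct: str, first_seen: datetime) -> bool:
    """Ignore (not deny) remote accounts newer than their domain's cutoff."""
    if acct in MANUALLY_APPROVED:
        return False
    cutoff = limit_cutoff(acct.split("@")[-1])
    return cutoff is not None and first_seen >= cutoff

old = datetime(2023, 1, 1, tzinfo=timezone.utc)
new = datetime(2024, 8, 1, tzinfo=timezone.utc)
print(should_ignore("bob@threads.net", new))    # True: new account, ignored
print(should_ignore("carol@threads.net", old))  # False: pre-existing user
print(should_ignore("alice@threads.net", new))  # False: manually approved
```

Because the decision is recomputed per interaction rather than recorded as a block, removing the domain from the limited list immediately re-admits legitimate users who registered during a spam wave.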