My wording was misleading (it was carelessly mixed up with Linked Data Notification jargon). I was thinking about sending unanonymized `Like`s to the author of the original post, rather than to the server (or possibly the moderators of the community) alone.
Also, I was thinking in terms of the protocol rather than UX. At the protocol level, delivering an activity to an actor’s inbox implies adding the activity to that `inbox` collection, which is why I don’t think it’s a good idea to target (unanonymized) `Like`s at the author of the `object`. At the UI level, yes, you don’t necessarily want to generate a “notification” for every vote; that’s entirely up to the implementation.
But in this way, you would only trust the server of the original object, which explicitly opts into the extension mechanism (and its contracts). I thought your concern was that you cannot practically trust arbitrary audience servers to respect the targeting of `Like` activities, and I think that limiting the recipient of the activity to a single server of the sender’s choice could reasonably enforce that targeting. If an implementation opts into the extension mechanism and still ignores the targeting of the activities, you could safely defederate it for its malicious behavior, just as you’d defederate an implementation that intentionally exposes private replies.
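To make the addressing concrete, here’s a minimal sketch (all IRIs are made up) of the two options. Targeting the author directly, which is what I’d avoid for the inbox reason above:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://voter.example/likes/1",
  "type": "Like",
  "actor": "https://voter.example/users/bob",
  "object": "https://origin.example/posts/42",
  "to": "https://origin.example/users/alice"
}
```

versus targeting only the community actor on the original object’s server, i.e. the single opted-in recipient of the sender’s choice:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://voter.example/likes/1",
  "type": "Like",
  "actor": "https://voter.example/users/bob",
  "object": "https://origin.example/posts/42",
  "to": "https://origin.example/communities/example"
}
```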
I was conservatively assuming the `likes` collection and an inbox-forwarding-like convention, but if there’s no such convention in the real-world Threadiverse (again, I’m not familiar with the Threadiverse), that’s just fine.
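By the `likes` collection I mean the one ActivityPub defines on an object, roughly like this hypothetical sketch:

```json
{
  "type": "Page",
  "id": "https://origin.example/posts/42",
  "likes": {
    "id": "https://origin.example/posts/42/likes",
    "type": "Collection",
    "totalItems": 3
  }
}
```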
Yes, you’d need to get the `Like` activities from the voters’ servers if you don’t trust the server of the original object, but what’s the point of that exactly? If the sending users of the `Like`s are anonymized by the sending servers, the only information the audience servers can get is the “voting patterns” (as the piefed.social post you linked in the OP puts it) with no “reputation” information. To me, this rather seems to increase the number of SPoFs that can manipulate the vote count.
You might be able to detect obviously bad voting patterns (like mass voting in a narrow time window, though note that malicious servers could easily fake the `published` timestamps of `Like` activities), but I expect (well, it’s just a wild guess) that much of the bad behavior is not that obvious from the voting patterns alone. Perhaps statistical analysis helps with reasonable accuracy, but should that be the only tool for moderation?
Also, I had the impression that Threadiverse implementations tend to put strong trust in the community, partly because Lemmy trusts the contents of posts forwarded by followed communities without any cryptographic verification (not that I like the idea), and PieFed’s `Page` objects seemingly embed remote objects in the `replies` collection without cryptographic proofs (well, its `replies` seems to be a plain array of `Note`s instead of a `Collection` as specified in the Activity Vocabulary, as sketched below, but I digress). But, again, I’m not familiar with the Threadiverse and I might be misunderstanding the trust model there.
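Roughly, the `replies` difference I mean is this (a simplified illustration, not actual PieFed output): embedding the replies as a bare array,

```json
{
  "type": "Page",
  "id": "https://origin.example/posts/42",
  "replies": [
    { "type": "Note", "id": "https://remote.example/comments/7", "content": "a remote reply" }
  ]
}
```

as opposed to the `Collection`-shaped `replies` the Activity Vocabulary describes:

```json
{
  "type": "Page",
  "id": "https://origin.example/posts/42",
  "replies": {
    "type": "Collection",
    "totalItems": 1,
    "items": ["https://remote.example/comments/7"]
  }
}
```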
As a random idea, sending an unanonymized `Like` to the server of the original post and distributing an anonymized `Like` to the audience might both be viable at the same time, by dropping the `actor` property of the activity (inspired by @trwnh’s idea) when it is fetched by an unauthorized client. The anonymized activity here would amount to a claim by the sending server that some (unnamed) actor it hosts has `Like`d the `object`. This wouldn’t expose the voting-pattern information, for better or worse, but it would let the audience verify the vote count claimed by the voting servers while allowing fine-grained moderation by the original server.
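As a sketch of how that could look (all IRIs here are made up), the server of the original post would receive the full activity:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://voter.example/likes/1",
  "type": "Like",
  "actor": "https://voter.example/users/bob",
  "object": "https://origin.example/posts/42",
  "to": "https://origin.example/communities/example"
}
```

while an unauthorized fetch of the same activity `id` would return a view with the `actor` dropped, i.e. only the sending server’s claim that some unnamed actor it hosts liked the object:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://voter.example/likes/1",
  "type": "Like",
  "object": "https://origin.example/posts/42",
  "to": "https://origin.example/communities/example"
}
```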