Oh, hey, sorry for the acronyms.
AP = ActivityPub
DPKI = Decentralized Public Key Infrastructure. I.e., any key-based system that involves self-issued/end-user-generated keys, which includes nostr, bluesky, secure scuttlebutt, ipns/libp2p, and lots of other P2P systems, as well as DID systems (a subset of DPKI systems that use a common export format called a DID)
DID = Decentralized Identifier, the aforementioned export format for key material that allows interop and thus verifiable portability of signed data. It's a young W3C standard not used too much in production systems except debatably by bluesky or, even more debatably, by cryptocurrency wallets.
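(If it helps to see the shape of one, here's a rough TypeScript-ish sketch; the identifier and fields below are purely illustrative, and the details vary by DID method.)

    // A DID is just a URI: "did:" + method + ":" + method-specific id.
    // bluesky uses did:plc; did:key wraps a raw public key directly.
    const did = "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK";

    // Resolving a DID yields a DID document binding the identifier to
    // key material (an illustrative subset of the W3C data model):
    const didDocument = {
      "@context": "https://www.w3.org/ns/did/v1",
      id: did,
      verificationMethod: [
        {
          id: `${did}#keys-1`,
          type: "Ed25519VerificationKey2020",
          controller: did,
          publicKeyMultibase: "z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
        },
      ],
      authentication: [`${did}#keys-1`],
    };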
VC = Verifiable Credentials, another young W3C standard that strives to make atomic blobs of information both verifiable (in the sense of signed) and portable (including RDF data via JSON-LD, or native JSON data, depending on your tooling/native context). Ironically, the biggest production deployment of VCs outside of niche fields like supply chain is… LinkedIn employer verifications for Azure customers. Which, for as corporate as that sounds, is… actually a game-changing social-web technology, an anti-fraud primitive, and a small win for data portability.
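(Again just to give a sense of the shape, here's a minimal, made-up sketch of a credential; every value is invented, and real deployments attach a cryptographic proof block or wrap the whole thing in a JWT.)

    // Rough shape of a W3C Verifiable Credential (values are illustrative).
    // The issuer signs the whole object; the holder can carry it anywhere,
    // and a verifier checks the signature against the issuer's DID/keys.
    const employmentCredential = {
      "@context": ["https://www.w3.org/2018/credentials/v1"],
      type: ["VerifiableCredential", "EmploymentCredential"],
      issuer: "did:web:example-employer.com",
      issuanceDate: "2023-01-15T00:00:00Z",
      credentialSubject: {
        id: "did:example:employee-123",
        employer: "Example Employer, Inc.",
        role: "Software Engineer",
      },
      // proof: { ... } // e.g. a Data Integrity proof or an external JWS
    };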
My intention wasn't to bring those adjacent technologies into scope for your design process, though, and I definitely don't recommend spending too much time down those rabbit holes; they go very, very far! I just thought that if you were already passingly familiar with those fields of decentralized design, it would help contextualize the trust list spec, which I hope is more directly relevant.
Oh, and as for "behavior" in Camille François's taxonomy, I think a Mastodon thread qua data is content in the static sense (at rest, it is content on a hard disk; when displayed, read, or migrated between servers, it's content in transit). Who posted or boosted it and, more importantly, how they did so (on mobile, at 3pm, while walking around the city, in the middle of a doomscrolling session) is what that WG report defines as "behavior," and in the classic commercial social web context, it's all the telemetry and account-surveillance metadata, and the inferences (in the creepy adtech psychological sense of "behavioral advertising") that can be drawn from it. I dare say Mastodon hasn't really concerned itself with behavior too much, because the human-scale moderation model catches most behavioral problems; it's the rapid-scaling platforms and the less-moderated platforms that have to worry about detecting bad-faith accounts, false accounts, sold/stolen accounts, etc.
And I totally agree that DPKI/self-authenticated systems incur a much higher risk of negative network effects brought on by ephemeral accounts, sold/taken-over accounts, etc. But it's always a cat-and-mouse game, adversarial in the machine-learning sense: if you hardcode a system to weigh accounts over 1 year old higher, you've just raised the black-market price for a 1-year-old account, which will be absorbed quickly if there's already a healthy market for those! Incentives are such that social systems just get socially engineered if you make it worth some click farmer's while to do it. That's why trust ratings should always be floats and never integers, as the ML piles up on both sides of the arms race…
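(A toy sketch of that last point, with every signal and weight made up by me rather than taken from the trust list spec: the idea is a continuous score with no single cliff for a click farm to optimize against.)

    // Hypothetical continuous trust score: no single threshold to game.
    // Signals and weights are illustrative only.
    function trustScore(account: {
      ageDays: number;
      vouchesFromTrustedPeers: number;
      reportRate: number; // moderation reports per 1k interactions
    }): number {
      // Saturating curve: an extra day matters less the older the account,
      // so buying a "1-year-old account" only buys a marginal bump.
      const ageSignal = 1 - Math.exp(-account.ageDays / 365);
      const vouchSignal = Math.tanh(account.vouchesFromTrustedPeers / 5);
      const penalty = Math.min(1, account.reportRate / 10);
      // Return a float in [0, 1]; downstream policy decides what to do with it.
      return Math.max(0, 0.5 * ageSignal + 0.5 * vouchSignal - penalty);
    }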
Thanks,
__juan