Reconciling different roots of identity

If we specify the HTTPS URI as canonical, then we are forever bound to the shortcomings of the HTTPS scheme, particularly its “authority component” stemming from DNS domain names, and its “path component”, which likewise cannot change. This is perhaps not insurmountable, but it is a centralizing force in the long run, because the most feasible way to surmount it is to have a single authority stemming from a single domain name. Even in the DID space, Bluesky’s ATProto punted on this by setting up the Placeholder DID Method, did:plc:, which mints and resolves identifiers via a single centralized “placeholder server”. ATProto also supports did:web, but notably does not allow for migration or data recovery… just like the https: World Wide Web itself. If you want any resilience to the original server or domain going down, becoming unavailable, or refusing to cooperate, you need some kind of canonical identity that is external to the domain your server currently runs on.

I’m not proposing fully-featured DID integrations, but I am proposing a general alignment on whether, when, and how to consider the id as non-authoritative, and what to consider authoritative in its place. At minimum, we need something we can consistently use to identify one resource as being “the same as” another. This would also allow for obtaining the resource from more than one server, and considering multiple servers as “authoritative” given that they are using the same source of truth. Think of how multiple different IPFS gateways can return the same file, but the HTTPS URI will depend on which gateway you used.
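To make the IPFS-gateway analogy concrete, here is a minimal sketch of the idea that “sameness” is decided by a shared canonical root rather than by the HTTPS URI. Everything here is hypothetical: the `did:example` identifier, the server names, and the mapping itself are placeholders, not any real fediverse API.

```python
# Hypothetical sketch: one canonical identifier, many HTTPS locations,
# like multiple IPFS gateways serving the same CID.

CANONICAL_ID = "did:example:123abc"  # placeholder canonical identifier

# Servers claiming to host the same underlying resource.
KNOWN_LOCATIONS = {
    CANONICAL_ID: [
        "https://server-a.example/users/alice",
        "https://server-b.example/users/alice",
    ],
}

def canonical(https_id: str) -> str:
    """Map an HTTPS id back to its canonical root, if we know one."""
    for cid, locations in KNOWN_LOCATIONS.items():
        if https_id in locations:
            return cid
    return https_id  # fall back: the HTTPS id is its own root

def same_resource(id_a: str, id_b: str) -> bool:
    """Two HTTPS ids name the same resource iff they share a root."""
    return canonical(id_a) == canonical(id_b)

print(same_resource(
    "https://server-a.example/users/alice",
    "https://server-b.example/users/alice",
))  # True: both locations share one canonical root
```

The point of the sketch is that both servers come out “authoritative” without either URI being privileged, which is exactly what a bare HTTPS id cannot express.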

Basically, you could imagine something like a DHT which points to your “current location”, but this would require a network-wide agreement on using such infrastructure. You could also imagine something like a centralized nameserver which did nothing but mint and resolve identifiers via HTTP redirects, akin to a PURL service but with an API and support for ActivityPub content negotiation… but this again requires a network-wide agreement on using such infrastructure. Any solution you come up with would require the same. And ideally, in the same way that we have multiple DNS resolvers, we could really use something like multiple Webfinger resolvers, but crucially allowing queries for any resource, not just acct: resources. There’s a lot to learn from the way that ATProto handles the concept of a “personal data server” or PDS.

You could do an escrow model, but yeah, this still requires agreeing beforehand that the cryptokey scheme and infra should be treated as more authoritative than the HTTPS id. Bringing it back to how account “migration” is done via the Move activity: you basically need some acceptable verifiable proof that the new account has the same controller as the old account. This can be done via bidirectional links, cryptographic proofs, etc. But all these schemes need to be “blessed” somehow by the various fediverse implementations, or else they won’t work in a useful manner.

For example, Streams implements nomadic identity based on cryptographic signatures rather than the ids themselves. You can export and import (or otherwise copy) your keys to another server, and both servers are considered authoritative, because both have your private key and can sign messages on your behalf. If Mastodon encountered both of these actors, it would consider them different actors, despite them being the same on the Zot/Nomad networks.

This is signalled via alsoKnownAs from the DID spec (also present in the ActivityStreams namespace), which declares that the given identifiers refer to the same resource. However, this runs directly contrary to how Mastodon currently uses alsoKnownAs for its “account aliases” feature, which is a prerequisite for verifying that a Move activity is valid. More here: Defining alsoKnownAs - #30 by trwnh
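The bidirectional-link flavor of “acceptable verifiable proof” can be sketched in a few lines. This is a simplification, not Mastodon’s exact implementation (which also involves the aliases UI and Webfinger checks), but the shape is the same: each actor document must vouch for the other via alsoKnownAs before a Move is honored.

```python
# Minimal sketch of the bidirectional-link check for a Move,
# treating actor documents as plain dicts.

def also_known_as(actor: dict) -> list[str]:
    """alsoKnownAs may appear as a single string or a list in the wild."""
    aka = actor.get("alsoKnownAs", [])
    return [aka] if isinstance(aka, str) else list(aka)

def move_looks_valid(old_actor: dict, new_actor: dict) -> bool:
    """Accept a Move only if each side links back to the other."""
    return (old_actor["id"] in also_known_as(new_actor)
            and new_actor["id"] in also_known_as(old_actor))

old = {"id": "https://old.example/users/alice",
       "alsoKnownAs": ["https://new.example/users/alice"]}
new = {"id": "https://new.example/users/alice",
       "alsoKnownAs": ["https://old.example/users/alice"]}

print(move_looks_valid(old, new))  # True: links point both ways
```

Note how this usage treats alsoKnownAs as a migration signal between *different* accounts, whereas the DID-style reading (and the Streams behavior above) treats the linked identifiers as *the same* resource. That tension is the whole problem.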
