“nomadic identity” is cool but it isn’t the only solution or approach to the general problem. what you really want is an identity layer that is:
- consistent
- location-independent
so cryptographic (key-based) identity is consistent and location-independent, but it brings with it a whole bunch of other restrictions and requirements. generally to link keys to identities you need something to act as a “key server”.
if you want to stick to name-based identity, then you can still achieve consistency and location-independence by having all your systems coordinate with an authoritative “name server”.
i think a lot of people make the mistake of treating or assuming http(s) identifiers are more location-based than they actually are, when they don’t have to be. you just need a “name server” to sit in front of the actual locations. the “name server” acts as a proxy or gateway to resolve resources no matter where they are actually located. the way this is generally accomplished in http(s) is by using http redirects. examples of “name server” approaches include:
permanent URLs (PURL)
permanent url (PURL) systems work by having a domain name that you expect to be longer-lived than your own, and that domain can redirect to the current location of whatever resource you want to be available consistently. look at services like purl.org or w3id.org
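to make the redirect mechanic concrete, here’s a minimal sketch of a PURL-style “name server” in python. the table entries and hosts are made up; real services like purl.org keep this mapping editable by the identifier’s owner:

```python
# a PURL-style name server is basically a persistent-path -> current-location
# table plus an http redirect. the entries below are hypothetical examples.
PURL_TABLE = {
    "/net/example-paper": "https://host-a.example/papers/123",
    "/net/example-dataset": "https://host-b.example/data/v2/456",
}

def resolve(path: str) -> tuple[int, dict]:
    """return (status, headers) the way a purl.org-style service would."""
    target = PURL_TABLE.get(path)
    if target is None:
        return 404, {}
    # a 302 (or 301/307) sends the client to wherever the resource
    # currently lives; moving the resource just means updating the table
    return 302, {"Location": target}
```

the persistent identifier stays stable as long as the domain does; only the table changes when resources move.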
a “resource name system” (like webfinger)
webfinger can be used to resolve “resource descriptors” for a given resource if you know its identifier. you can then follow a link from there (probably a `self` link) to get to the resource itself. this is similar to what we currently do on fedi for acct: uris mapping to actors, but this could be extended to apply to any resource and any uri scheme, not just actors and acct:.

on a network-topology level, you would need to pick your webfinger resolver in the same way you would pick a dns resolver. webfinger resolvers would need to be able to delegate queries to remote authorities, although they can cache the results of those queries. in general, this kind of “resource name system” would act as a layer on top of the existing “domain name system”. it’s kind of redundant if all your resource identifiers are just http(s) uris on the same domain as the authoritative one, because in that case you can just use http redirects. but it’s useful if you want to be able to handle identifiers in other schemes, or to make resources available over multiple protocols (since you can declare all identifiers as aliases of each other)
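as a sketch, the query-then-follow-the-link flow looks something like this. the JRD below is a hypothetical example in the shape fedi servers return for acct: uris:

```python
from urllib.parse import urlencode

def webfinger_url(host: str, resource: str) -> str:
    # the standard webfinger endpoint (rfc 7033): the resource identifier
    # goes in a percent-encoded query parameter
    return f"https://{host}/.well-known/webfinger?" + urlencode({"resource": resource})

def self_link(jrd: dict):
    # follow the rel="self" link out of the resource descriptor
    for link in jrd.get("links", []):
        if link.get("rel") == "self":
            return link.get("href")
    return None

# a made-up JRD for a made-up actor
jrd = {
    "subject": "acct:alice@example.social",
    "aliases": ["https://example.social/users/alice"],
    "links": [
        {"rel": "self",
         "type": "application/activity+json",
         "href": "https://example.social/users/alice"},
    ],
}
```

nothing here is acct:-specific: the `resource` parameter could carry any uri scheme, which is what makes a generalized “resource name system” plausible.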
DID `service` + `relativeRef`

DID Core defines a query parameter `service` which lets you pick a specific “service endpoint” out of a DID document, and then a query parameter `relativeRef` lets you append a path to that endpoint. this makes the “service endpoint” into a variable base uri, against which you can resolve relative references. you can also define operations against a service of a specific type, depending on that type (and whatever associated protocol you are following). for example, FEP-e3e9 describes how this might be done with did:web and also by turning every actor uri into its own service endpoint (against which you can resolve relative references). another example is atproto’s “placeholder server” which serves did:plc identifiers which contain an `AtprotoPersonalDataServer` service that points to your current PDS.
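a rough sketch of how `service` + `relativeRef` dereferencing works, against a made-up DID document in the shape of a did:plc doc (the matching rules here are simplified relative to DID Core):

```python
from urllib.parse import parse_qs, urlparse

# a hypothetical DID document with an atproto-style service entry
did_doc = {
    "id": "did:plc:abcdefghijklmnop234567aa",
    "service": [
        {"id": "#atproto_pds",
         "type": "AtprotoPersonalDataServer",
         "serviceEndpoint": "https://pds.example.host"},
    ],
}

def dereference(did_url: str, doc: dict) -> str:
    """resolve `did:...?service=X&relativeRef=/path` to a concrete url."""
    query = parse_qs(urlparse(did_url).query)
    wanted = query["service"][0]
    rel = query.get("relativeRef", [""])[0]
    for svc in doc["service"]:
        # service ids are fragments relative to the DID itself
        if svc["id"] in (wanted, "#" + wanted):
            # the endpoint becomes a base uri; relativeRef appends to it
            return svc["serviceEndpoint"] + rel
    raise KeyError(wanted)
```

the point is that the DID stays constant while `serviceEndpoint` can be rotated to point at whatever host currently serves you.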
a dedicated nameserver’s role
so far we’ve talked about ways to resolve identifiers that link to locations, but what’s missing is the ability to mint identifiers within the authoritative namespace. taking the atproto “placeholder server” as an example again, this is what they say at https://plc.directory, specifically in the did:plc Specification v0.1:
To summarize the process of creating a new `did:plc` identifier:
- collect values for all of the core data fields, including generating new secure key pairs if necessary
- construct an “unsigned” regular operation object. include a `prev` field with `null` value. do not use the deprecated/legacy operation format for new DID creations
- serialize the “unsigned” operation with DAG-CBOR, and sign the resulting bytes with one of the initial `rotationKeys`. encode the signature as `base64url`, and use that to construct a “signed” operation object
- serialize the “signed” operation with DAG-CBOR, take the SHA-256 hash of those bytes, and encode the hash bytes in `base32`. use the first 24 characters to generate DID value (`did:plc:<hashchars>`)
- serialize the “signed” operation as simple JSON, and submit it via HTTP POST to `https://plc.directory/:did`
- if the HTTP status code is successful, the DID has been registered
When “signing” using a `rotationKey`, what is meant is to sign using the private key associated with the public key in the `rotationKeys` list.
so the process basically boils down to a more-or-less content-addressed 24-character identifier generated through a basic algorithm involving some cryptography, encoding, hashing, encoding, truncating. not too dissimilar from how FEPs generate their identifier via truncating a sha256 sum of the plaintext title.
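the hash-encode-truncate part of those steps can be sketched like so. note this substitutes canonical JSON for DAG-CBOR so the sketch stays stdlib-only, which means the identifiers it produces will NOT match real did:plc ids; it only illustrates the shape of the derivation:

```python
import base64
import hashlib
import json

def plc_style_did(signed_op: dict) -> str:
    """sketch of the did:plc derivation: serialize, sha-256, base32, truncate.
    the real spec serializes with DAG-CBOR; sorted-key JSON stands in here."""
    serialized = json.dumps(signed_op, sort_keys=True, separators=(",", ":")).encode()
    digest = hashlib.sha256(serialized).digest()
    # the spec uses lowercase base32; padding is irrelevant after truncation
    hashchars = base64.b32encode(digest).decode().lower().rstrip("=")
    return "did:plc:" + hashchars[:24]
```

the identifier is deterministic in the signed operation, which is what makes it “more-or-less content-addressed”: anyone with the same genesis operation derives the same DID.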
the problem is, of course, getting everyone to agree to use the same methodology to consistently generate id components that can be used to mint an identifier.
i think in http land this is probably best accomplished currently via something like the `Slug` header, which lets you suggest some string material to be used as input while generating an identifier on the nameserver. if the input leads to a conflict, the nameserver will ignore your input and generate its identifier without that slug.
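a sketch of how a nameserver might honor an atompub-style `Slug` header. the sanitization rule here is just an assumption for illustration, and `mint_fallback` stands in for whatever the server would otherwise do:

```python
import re

def mint(slug, taken: set, mint_fallback) -> str:
    """use the client's suggested slug if it's usable and free,
    otherwise fall back to the server's own identifier scheme."""
    if slug:
        # sanitize the suggestion into identifier-safe characters
        candidate = re.sub(r"[^a-z0-9]+", "-", slug.lower()).strip("-")
        if candidate and candidate not in taken:
            return candidate
    # conflict (or no usable slug): ignore the suggestion entirely
    return mint_fallback()
```

the client only ever *suggests* input; the nameserver stays authoritative over what identifier actually gets minted.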
of course the exact method of generating identifiers is up to each nameserver. it could hash your content, possibly combined with some other disambiguating information about the request. or it could do literally anything else.