Could nomadic identity be implemented without relying on publicly forwardable objects?

I feel like nomadic identity (or something else that brings the same benefits of seamless account migration and multi-homing) is table stakes for any replacement for contemporary ActivityPub (which may still be ActivityPub, but one that will likely need to break some backwards compatibility with current implementations, hence “contemporary ActivityPub”).

However, I believe there’s a major issue with the current proposals (mainly FEP-ef61): they require objects to be self-authenticating, which completely nullifies the best-effort access control we currently have through HTTP signatures (let’s skip the inevitable debate about what “public” means for this thread, OK?), and means anything you say can be irrevocably and undeniably associated with you, forever, among other issues people with a better understanding of the problem than me can bring up.

Is this problem even solvable?


I don’t really see how. I don’t really understand the distinction between self-authenticating and “regular” authenticating messages in the first place. I mean, using HTTP Signatures means that the JSON object is not self-authenticating but you can just forward/copy/save the entire HTTP request and that will be self-authenticating, if I am understanding the word correctly. Anyone could take that request and know that the JSON object is authentic.

I mean… honestly I think if you are on the internet, you just need to accept this as a fundamental assumption from the start. People back in the 90s and early 00s knew this - remember how people were like “careful what you write on the internet” and “remember to never use your real name” and such? Mainstream social media has softened up this view by making us so used to (over)sharing our personal information.

Anything you write can be saved, forwarded and copied by anyone you sent it to. Maybe I’m wrong but I don’t see a world where you can have a message that only some people that you approve of will be able to verify while other people that you block or ban or whatever won’t be able to verify. Maybe end-to-end encryption is a way to do that? But that’s a whole other can of worms and complexity. And then we’re definitely not talking about anything resembling a public message.


Some DIDs support key rotation. If information about a public key is removed from the DID document, all past signatures become unverifiable. Notably, did:key doesn’t support key rotation, but you can publish your secret key, which technically makes all your signatures meaningless. More advanced cryptographic techniques exist, such as ring signatures, but I don’t know to what extent they are supported by the Data Integrity standard (upon which FEP-ef61 is based).
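To make the mechanism concrete, here is a toy sketch (not real DID tooling; HMAC stands in for a real signature scheme, and the “DID document” is just a dict): once the key material is removed from the document, old signatures can no longer be checked.

```python
# Toy illustration of key rotation making past signatures unverifiable.
# HMAC is a symmetric stand-in for a real asymmetric signature scheme.
import hashlib
import hmac

did_document = {"keys": {"key-1": b"super-secret"}}  # hypothetical DID document

def sign(key_id: str, message: bytes) -> bytes:
    key = did_document["keys"][key_id]
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key_id: str, message: bytes, sig: bytes) -> bool:
    key = did_document["keys"].get(key_id)
    if key is None:  # key was rotated away: nothing to verify against
        return False
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

sig = sign("key-1", b"hello")
assert verify("key-1", b"hello", sig)

del did_document["keys"]["key-1"]  # "rotation": drop the old key
assert not verify("key-1", b"hello", sig)
```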

But then the people who you would like to still be able to verify your messages lose that ability, right? Then nobody can verify your stuff.

Yes, but that shouldn’t be a problem if your contacts have already seen these messages. Those who want to prove the authenticity of your message wouldn’t be able to do it (I guess this is what people are worried about).

“nomadic identity” is cool but it isn’t the only solution or approach to the general problem. what you really want is an identity layer that is:

  • consistent
  • location-independent

so cryptographic (key-based) identity is consistent and location-independent, but it brings with it a whole bunch of other restrictions and requirements. generally to link keys to identities you need something to act as a “key server”.

if you want to stick to name-based identity, then you can still achieve consistency and location-independence by having all your systems coordinate with an authoritative “name server”.

i think a lot of people make the mistake of treating or assuming http(s) identifiers are more location-based than they actually are, when they don’t have to be. you just need a “name server” to sit in front of the actual locations. the “name server” acts as a proxy or gateway to resolve resources no matter where they are actually located. in http(s), this is generally accomplished by using http redirects. examples of “name server” approaches include:

permanent URLs (PURL)

permanent url (PURL) systems work by having a domain name that you expect to be longer-lived than your own, and that domain can redirect to the current location of whatever resource you want to be available consistently. look at services like purl.org or w3id.org
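the mechanism boils down to a redirect table that only the name server controls. a minimal sketch (no network; the hosts and table are hypothetical):

```python
# Toy PURL-style resolver: a long-lived public name maps to the resource's
# current location via a redirect table maintained by the "name server".
redirects = {  # hypothetical table on purl.example
    "https://purl.example/alice": "https://host-a.example/users/alice",
}

def resolve(url: str, max_hops: int = 5) -> str:
    """Follow redirect entries until reaching a final location."""
    for _ in range(max_hops):
        if url not in redirects:
            return url
        url = redirects[url]
    raise RuntimeError("redirect loop")

assert resolve("https://purl.example/alice") == "https://host-a.example/users/alice"

# migration: only the name server's table changes; the public URL stays stable
redirects["https://purl.example/alice"] = "https://host-b.example/users/alice"
assert resolve("https://purl.example/alice") == "https://host-b.example/users/alice"
```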

a “resource name system” (like webfinger)

webfinger can be used to resolve “resource descriptors” for a given resource if you know its identifier. you can then follow a link from there (probably a self link) to get to the resource itself. this is similar to what we currently do on fedi for acct: uris mapping to actors, but this could be extended to apply to any resource and any uri scheme, not just actors and acct:. on a network-topology level, you would need to pick your webfinger resolver in the same way you would pick a dns resolver. webfinger resolvers would need to be able to delegate queries to remote authorities, although they can cache the results of those queries. in general, this kind of “resource name system” would act as a layer on top of the existing “domain name system”. it’s kind of redundant if all your resource identifiers are just http(s) uris on the same domain as the authoritative one, because in that case you can just use http redirects. but it’s useful if you want to be able to handle identifiers in other schemes, or to make resources available over multiple protocols (since you can declare all identifiers as aliases of each other)

DID service + relativeRef

DID Core defines a query parameter service which lets you pick a specific “service endpoint” out of a DID document, and then a query parameter relativeRef lets you append a path to that endpoint. this makes the “service endpoint” into a variable base uri, against which you can resolve relative references. you can also define operations against a service of a specific type, depending on that type (and whatever associated protocol you are following). for example, FEP-e3e9 describes how this might be done with did:web and also by turning every actor uri into its own service endpoint (against which you can resolve relative references). another example is atproto’s “placeholder server” which serves did:plc identifiers which contain an AtprotoPersonalDataServer service that points to your current PDS.
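the dereferencing step can be sketched in a few lines (the DID document and endpoint below are hypothetical; this only covers the happy path of the service/relativeRef parameters):

```python
# Sketch of DID Core's `service` + `relativeRef` dereferencing: pick a
# service endpoint from the DID document, then resolve the relative
# reference against it as a base URI.
from urllib.parse import parse_qs, urljoin, urlsplit

did_document = {  # hypothetical did:plc document
    "id": "did:plc:abc123",
    "service": [{"id": "#atproto_pds",
                 "type": "AtprotoPersonalDataServer",
                 "serviceEndpoint": "https://pds.example/"}],
}

def dereference(did_url: str) -> str:
    query = parse_qs(urlsplit(did_url).query)
    wanted = query["service"][0]
    endpoint = next(s["serviceEndpoint"] for s in did_document["service"]
                    if s["id"].lstrip("#") == wanted)
    # relativeRef makes the endpoint act as a variable base URI
    return urljoin(endpoint, query.get("relativeRef", [""])[0])
```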

a dedicated nameserver’s role

so far we’ve talked about ways to resolve identifiers that link to locations, but what’s missing is the ability to mint identifiers within the authoritative namespace. taking the atproto “placeholder server” as an example again, this is what they say at https://plc.directory, specifically in the did:plc Specification v0.1:

To summarize the process of creating a new did:plc identifier:

  • collect values for all of the core data fields, including generating new secure key pairs if necessary
  • construct an “unsigned” regular operation object. include a prev field with null value. do not use the deprecated/legacy operation format for new DID creations
  • serialize the “unsigned” operation with DAG-CBOR, and sign the resulting bytes with one of the initial rotationKeys. encode the signature as base64url, and use that to construct a “signed” operation object
  • serialize the “signed” operation with DAG-CBOR, take the SHA-256 hash of those bytes, and encode the hash bytes in base32. use the first 24 characters to generate DID value (did:plc:<hashchars>)
  • serialize the “signed” operation as simple JSON, and submit it via HTTP POST to https://plc.directory/:did
  • if the HTTP status code is successful, the DID has been registered

When “signing” using a “rotationKey”, what is meant is to sign using the private key associated with the public key in the rotationKey list.

so the process basically boils down to a more-or-less-content-addressed 24-character identifier generated through a basic algorithm involving some cryptography, encoding, hashing, encoding, truncating. not too dissimilar from how FEPs generate their identifier via truncating a sha256 sum of the plaintext title.
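the recipe above can be sketched with stdlib stand-ins (canonical JSON instead of DAG-CBOR, HMAC instead of a real rotation-key signature; both are loudly not what did:plc actually uses, but the shape of the algorithm is the same):

```python
# Toy version of the did:plc creation recipe: sign the unsigned op,
# hash the signed op, base32-encode, truncate to 24 characters.
import base64
import hashlib
import hmac
import json

def canonical(obj) -> bytes:  # stand-in for DAG-CBOR serialization
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

rotation_key = b"toy-rotation-secret"  # stand-in for a real key pair

unsigned_op = {"type": "plc_operation", "prev": None,  # prev is null on creation
               "rotationKeys": ["did:key:zToy"], "services": {}}

# sign the "unsigned" operation, encode the signature as base64url
sig = hmac.new(rotation_key, canonical(unsigned_op), hashlib.sha256).digest()
signed_op = dict(unsigned_op,
                 sig=base64.urlsafe_b64encode(sig).rstrip(b"=").decode())

# hash the "signed" operation, base32-encode, keep the first 24 characters
digest = hashlib.sha256(canonical(signed_op)).digest()
suffix = base64.b32encode(digest).decode().lower()[:24]
did = f"did:plc:{suffix}"  # more-or-less content-addressed identifier
```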

the problem is, of course, getting everyone to agree to use the same methodology to consistently generate id components that can be used to mint an identifier.

i think in http land this is probably best accomplished currently via something like the Slug header, which lets you suggest some string material to be used as input while generating an identifier on the nameserver. if the input leads to a conflict, the nameserver will ignore your input and generate its identifier without that slug.

of course the exact method of generating identifiers is up to each nameserver. it could hash your content, possibly combined with some other disambiguating information about the request. or it could do literally anything else.


But it is the only solution that allows you to communicate without renting anything from 3rd parties.

Such as?

I don’t think so. You can use a keyserver in FEP-ef61 but it is not required.

i don’t think this is true? content addressing lets you do that too. you don’t have to rely only on key-based identity or cryptographically-verified identifiers. if the concern is specifically “renting” then you could still have a “free” name system; the only thing that you can’t really get away from in any case is that there needs to be some authority to root your identity in something. this is true for both name-based and key-based id systems.

in FEP-ef61 the role of “key server” is fulfilled by “location hints” and the gateways query parameter and actor property. on its own, an ap:// identifier cannot be resolved: you need to obtain the actor representation somehow, but you can only learn which gateways are bound to that actor by already having the actor. someone has to tell you at least one of the gateways when initially communicating out-of-band.
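once you do learn a gateway out of band, resolution is just url construction. a sketch (the /.well-known/apgateway path follows my reading of FEP-ef61 and the gateway host is hypothetical, so treat both as assumptions):

```python
# Sketch of the bootstrapping step for ap:// identifiers: an identifier is
# only resolvable once some gateway is learned out of band.
def gateway_url(ap_id: str, gateway: str) -> str:
    """Map an ap:// identifier onto a gateway's resolution endpoint."""
    assert ap_id.startswith("ap://")
    suffix = ap_id[len("ap://"):]
    return f"{gateway.rstrip('/')}/.well-known/apgateway/{suffix}"

# someone told us about this gateway out of band:
url = gateway_url("ap://did:key:z6MkToy/actor", "https://gateway.example")
```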

No, I don’t think you can build a messaging system based on hashes alone. I certainly haven’t seen one.

Yes, there needs to be a centralized authority, and such authority can be derived from a secret key.

Yes, some form of out-of-band communication is necessary. In a pure server-less scenario Alice and Bob meet in meatspace and exchange public keys. After that they can start sending signed messages to each other: actor documents first, then activities.