The current version of the HTTP Signatures spec requires that the signature algorithm be “derived from metadata associated with ‘keyId’”. As far as I know, no implementation actually does this; instead, those that even updated to the new drafts assume all signatures are rsa-sha256.
Note that older versions of the spec included the algorithm in the signature, but this is insecure as malicious actors could specify a weaker algorithm than the real sender intended.
I propose the following solution, roughly based on the CCG’s Security Vocabulary:
- Add a signatureAlgorithm field to the publicKey object on Actors, with a value chosen from the list in RFC 6931, most likely http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
- Parse incoming signatures based on the signatureAlgorithm value, with the subset of accepted algorithms defined by the implementation. If the field is not present, default to rsa-sha256, since this is what current implementations use (sketched below).
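To make the shape concrete, here is a minimal sketch of the verification side. It is not taken from any existing implementation; the PublicKey interface and the SUPPORTED map are assumptions about how a server might model this.

```typescript
// Illustrative sketch only: how a server might pick a verification algorithm
// from the proposed signatureAlgorithm field on an actor's publicKey.

interface PublicKey {
  id: string;
  owner: string;
  publicKeyPem: string;
  signatureAlgorithm?: string; // proposed field: an RFC 6931 identifier URI
}

// The subset of RFC 6931 identifiers this implementation accepts, mapped onto
// the names its crypto layer understands.
const SUPPORTED: Record<string, string> = {
  "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256": "RSA-SHA256",
  "http://www.w3.org/2001/04/xmldsig-more#rsa-sha512": "RSA-SHA512",
};

function selectVerifyAlgorithm(key: PublicKey): string {
  if (key.signatureAlgorithm === undefined) {
    // Field absent: default to rsa-sha256, matching current implementations.
    return "RSA-SHA256";
  }
  const alg = SUPPORTED[key.signatureAlgorithm];
  if (!alg) {
    throw new Error(`unsupported signatureAlgorithm: ${key.signatureAlgorithm}`);
  }
  return alg;
}
```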
I’ve implemented this with support for both rsa-sha256 and rsa-sha512 for incoming signatures in lotide, but would like to hear from other implementers in case there’s a better path. Additionally, what other signature algorithms would be worthwhile to add support for?
As you note, this proposal doesn’t adhere to the CCG’s proposed Security Vocabulary. The domain of signatureAlgorithm is rightfully Signature, not PublicKey. There doesn’t appear to be a good reason for breaking this besides convenience and ease of implementation.
A public key’s use can be for more than just digital signatures in HTTP-Sigs. Adding a property that should be on a Signature onto the PublicKey instead begins to restrict that public key’s usefulness for other use cases. Those use cases don’t exist now, because how different key management schemes interact is not really fully thought through in AP. There have been lots of isolated high-level ideas in the key management space (AP-over-SSB, avoiding HTTP Signatures entirely, using an OCAP implementation), and for developers who want to support more than one for different compatibility reasons (e.g. being a library, or a bridge-like app), it would be nice not to start from the awkward baseline of having to consider a vocabulary violation on one side.
It’s not clear to me that the benefits of enabling something like SHA512 instead of SHA256 are worth the above costs when we could instead try to coordinate pushing something fundamentally better in the key management space.
I don’t understand why adding the signature algorithm to the publicKey field would reduce that key’s usefulness for other purposes. Couldn’t you just list an additional signature algorithm if you wanted to use it?
I would recommend against creating a nested JSON structure for most things; if we want to use a separate key, perhaps httpSignatureAlgorithm would be clearer?
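For illustration only, a flat key alongside the existing publicKey properties might look something like this (httpSignatureAlgorithm is the suggestion above, not an existing vocabulary term):

```typescript
// Purely illustrative: a flat httpSignatureAlgorithm property on the key
// object, rather than a nested structure.
const publicKey = {
  id: "https://example.com/users/alice#main-key",
  owner: "https://example.com/users/alice",
  publicKeyPem: "-----BEGIN PUBLIC KEY-----\n...",
  httpSignatureAlgorithm:
    "http://www.w3.org/2001/04/xmldsig-more#rsa-sha512",
};
```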
Possibly. In the spirit of infinite flexibility, the RDF definition would in theory let us list as many signatureAlgorithm values as we wish.
In practice, adding it quickly means there will be software out there written without the ability to handle more than one value, much like the current state of the type property. There’s also the practical question of disambiguating which value in a signatureAlgorithm list corresponds to which use case.
IMO, what is missing is a one-to-many mapping from a publicKey to key use cases, and then a one-to-one mapping from each key use case to its metadata (signatureAlgorithm, in the HTTP Signatures case). It’s then a matter of finding an expression that makes these N-ary expectations clear to implementing devs, roughly along the lines of the sketch below.
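To make that concrete, here is one hypothetical shape; none of these property names come from any current vocabulary, it only illustrates the one-to-many structure:

```typescript
// Hypothetical only: one key, many use cases, per-use-case metadata.
// The keyUseCases and useCase names are invented for this sketch.
const publicKey = {
  id: "https://example.com/users/alice#main-key",
  owner: "https://example.com/users/alice",
  publicKeyPem: "-----BEGIN PUBLIC KEY-----\n...",
  keyUseCases: [
    {
      useCase: "httpSignatures",
      signatureAlgorithm:
        "http://www.w3.org/2001/04/xmldsig-more#rsa-sha512",
    },
    {
      useCase: "objectCapabilities",
      // metadata specific to an OCAP scheme would live here
    },
  ],
};
```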