Making sense of RsaSignature2017

All three of the most popular AP-based servers (Mastodon, Pleroma, Misskey) use this type of signature in addition to HTTP signatures. It signs the message itself, so the message can be proxied by other instances to instances yours doesn’t know about. Example: you follow Eugen from your instance. His instance obviously sends you (and signs with HTTP signatures) his own posts and replies made by other users on his instance. But if someone from another instance replies to one of his posts, their instance has no idea where to send that reply so it reaches all of Eugen’s followers. So his instance proxies these replies to every instance that has Eugen’s followers. This way every such instance has complete threads for all these posts. Since his instance doesn’t have the private keys of those users from the other instances, and since HTTP signatures depend on the request destination and thus can’t be proxied, the HTTP signature is done with Eugen’s key, and you’re supposed to verify the LD signature to authenticate that the reply was really created by its claimed author.

The signature generation process consists of the following steps, as I understood it:

  1. Expand the JSON-LD document.
  2. Convert the expanded JSON-LD document to an RDF dataset.
  3. Normalize the RDF dataset using the URDNA2015 algorithm.
  4. Create a JSON-LD document with the signature options and perform steps 1-3 on it too.
  5. Hash the options’ normalized RDF dataset, serialized in Turtle format with newlines between the quad strings, in UTF-8, with SHA-256.
  6. Hash the document’s normalized RDF dataset the same way.
  7. Concatenate the hashes, as lowercase hex strings, in that order.
  8. Sign the resulting string (its UTF-8 bytes) with the private key of the user.
  9. Add the signature to the JSON object the same way everyone else does.
  10. Be happy.
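Steps 5-9 can be sketched in Java roughly like this (the class and method names are mine, not from any real implementation, and the key handling is left to the caller):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

public class LdSignatureSketch {
    // SHA-256 of a UTF-8 string, returned as a lowercase hex string (steps 5 and 6).
    static String sha256Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is always available on the JVM
        }
    }

    // Steps 7-9: concatenate options hash + document hash as hex strings,
    // sign the UTF-8 bytes of that string with RSA-SHA256, then Base64-encode.
    static String sign(String normalizedOptions, String normalizedDocument,
                       PrivateKey key) throws Exception {
        String toSign = sha256Hex(normalizedOptions) + sha256Hex(normalizedDocument);
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(key);
        signer.update(toSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signer.sign());
    }
}
```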

As is usual with this kind of thing, it’s only easy in the specifications. For 1 and 2, there’s a test suite that goes along with the JSON-LD spec, which I used to generate some unit tests for myself. I pass all of the expansion tests and all of the important ones among the to-RDF tests (I ignore some IRI-resolving edge cases because there aren’t going to be file URIs and such in real life). I represent my RDF datasets as arrays of subject-predicate-object-graph “quads”. For the purpose of the to-RDF tests, I first assert that the array sizes are equal, and then that my actual output contains each of the quads from the expected output, effectively comparing the arrays while disregarding their order.
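That multiset-style comparison can be sketched like this (the Quad type here is a hypothetical stand-in with value-based equality; real code would also distinguish IRIs, literals, and blank nodes):

```java
import java.util.ArrayList;
import java.util.List;

public class QuadComparison {
    // A minimal quad; record equality compares all four components by value.
    record Quad(String subject, String predicate, String object, String graph) {}

    // Treat the two datasets as multisets: same size, and every expected quad
    // matches (and consumes) exactly one quad from the actual output.
    static boolean sameDataset(List<Quad> expected, List<Quad> actual) {
        if (expected.size() != actual.size()) return false;
        List<Quad> remaining = new ArrayList<>(actual);
        for (Quad q : expected) {
            if (!remaining.remove(q)) return false; // removes one matching occurrence
        }
        return true;
    }
}
```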

For URDNA2015, there’s also a test suite. I pass some of its tests (42 out of 62 at the time of writing). The issue is that the LD-signature generation algorithm I described above hashes the URDNA2015 output as-is, with the Mastodon implementation joining the Turtle-serialized strings with newlines. Since we’re hashing, the order of the strings obviously matters: differently-ordered strings produce different hashes.

None of the specs say anything about the ordering of the RDF quads. The JSON-LD spec doesn’t mention it, but the test suite page tells you to compare outputs using “RDF isomorphism” which, if I understood all that type theory mumbo-jumbo correctly (gosh, it feels like making a Telegram client from scratch again), means you have to disregard the order, which is what I do in my tests. The URDNA2015 spec doesn’t say anything about the ordering either. But having “For each quad, quad, in input dataset:” as the last step, even though things are sorted internally, kind of implies that you’re supposed to keep the order of the original dataset. The test suite page doesn’t say anything about ordering either.

Interestingly enough, some of the URDNA2015 tests themselves are like this:


Input:

_:b0 <> <> .
_:b0 <> <> .

Expected output:

_:c14n0 <> <> .
_:c14n0 <> <> .

Notice that not only are the blank nodes relabeled, which is the actual purpose of that normalization algorithm (why do we need this in the first place?!), but the lines are also swapped.

If I serialize the RDF quads and then sort the strings lexicographically, that seems to dramatically increase the number of tests passed, but some still fail. Is that what I’m supposed to do, even though it isn’t written anywhere? Could someone enlighten me please?

p.s. I have to add that the algorithm descriptions in the specs are clearly written with a dynamically-typed language in mind. It was a PITA to convert that into Java, and I might’ve made some mistakes, which is why not all URDNA2015 tests pass yet.

I don’t know the specs super well, but I can at least demystify what Mastodon is doing. We delegate most of this to the json-ld and rdf-normalize gems, you can look there for more details of what we’re doing.

  1. Create an “options” JSON-LD document with the keys creator and created. When verifying JSON-LD objects, this is the signature object, minus the keys type, id, and signatureValue, plus an appropriate context.
  2. Take the input JSON-LD document and remove the signature property.
  3. For each JSON-LD document, use the json-ld gem’s toRdf method, then the rdf-normalize gem’s .dump(:normalize) method.
  4. Take the SHA-256 hash of each normalized graph.
  5. Concatenate the two hashes in the order options_hash + document_hash, then sign that using Ruby’s stdlib OpenSSL bindings with the SHA-256 algorithm.
  6. Finally, Base64-encode the signed value.

Here’s a step-by-step example I put together, with lots of intermediate stages:

To answer your specific question: it does appear that we sort the normalized statements, which I think we can attribute to this part of the rdf-normalize gem:

I don’t know what part of the spec that corresponds to. If I had to take a guess, I think this code exists because the Normalization Algorithm returns a dataset, rather than a string. See the following editor’s note from the RDF normalization spec:

“This specification defines a normalized dataset to include stable identifiers for blank nodes, but practical uses of this will always generate a canonical serialization of such a dataset.”

This seems to be a gap in the JSON-LD signature spec, which is assuming that the normalization spec is producing a canonical value, when instead it is producing a canonical dataset, which then needs to be serialized into a canonical value. This gap is kind of confusing, because both specs have the same authors. I’m going to file a ticket against the ld-signatures spec and try to get to the bottom of this.


I’ve opened an issue against the LD-Signature spec to get clarification on this topic, you can take a look here:


Thank you very much for your clarifications and example! The example will be invaluable for me to test my code.

The normalization algorithm handles ordering. The output N-Quads list for two JSON documents with completely different ordering should be identical. That’s the primary purpose of the normalization step.

[2019-11-26 00:20:19+0000] Gregory via SocialHub:

All three of the most popular AP-based servers (Mastodon, Pleroma, Misskey) use this type of signature in addition to HTTP signatures. It signs the message itself, so the message can be proxied by other instances to instances yours doesn’t know about.

Actually, Pleroma doesn’t do any JSON-LD; external messages are just kept compatible so it can federate with others, therefore we don’t have JSON-LD signatures, only HTTP signatures. We also considered it to be quite insecure, or a breach of privacy, as it goes against deniability (which is why Mastodon now only uses it for public posts).

Can you point to where in the spec this ordering is defined? As far as I can tell, while the normalization algorithm talks a lot about doing things in a specific lexicographical order, the actual output dataset has no inherent order.

Yes. The final step is to just iterate through the input quads, with unspecified order, and replace all the blank node identifiers with other blank node identifiers.

So, I did ultimately get it to pass all the tests. A summary for anyone else brave enough to implement that themselves:

  • The normalization spec doesn’t say this, but when doing “Hash First Degree Quads” you need to join the strings with newlines, add a trailing newline, and hash that (as UTF-8). My hashes were wrong, which sometimes produced wrong blank node IDs.
  • After normalization, serialize the quads into an array of strings and sort it lexicographically. Then do the same thing: join with newlines, add a trailing newline, hash.
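The two bullets above can be sketched in Java like this (method names are mine):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class QuadHashing {
    // Join serialized quads with '\n' and append a trailing newline,
    // as both bullets above require before hashing.
    static String joinQuads(List<String> serializedQuads) {
        return String.join("\n", serializedQuads) + "\n";
    }

    // Final dataset hash: sort the serialized quads lexicographically first,
    // then join with a trailing newline and hash the UTF-8 bytes.
    static byte[] hashDataset(List<String> serializedQuads) {
        try {
            List<String> sorted = serializedQuads.stream().sorted().toList();
            return MessageDigest.getInstance("SHA-256")
                    .digest(joinQuads(sorted).getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is always available on the JVM
        }
    }
}
```

Because of the sort, two datasets that differ only in quad order hash to the same value.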

Now on to the interesting part: testing if I’m actually able to verify the signatures generated by a real Mastodon instance!

And so how do you verify these proxied posts? Do you just drop them or do you fetch them upon receiving such an activity? Does Pleroma itself proxy replies like Mastodon does?

Again, something the specs don’t make obvious: even if your language does distinguish between doubles and integers, and you do know which is which after parsing JSON, you still have to convert doubles to integers when they lack a fractional part. No, “if value is a number with no non-zero fractional part” isn’t a fancy way of saying “if value is an integer”; it means literally what it says.
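In Java terms, that check might look like this (a sketch; whether you then emit an xsd:integer or xsd:double literal depends on the rest of your to-RDF code):

```java
public class NumberCanonicalization {
    // "If value is a number with no non-zero fractional part": even a value
    // parsed from JSON as a double (e.g. 5.0) must be treated as the
    // integer 5 when producing the RDF literal.
    static String canonicalLexicalForm(double value) {
        if (value == Math.floor(value) && !Double.isInfinite(value)) {
            return String.valueOf((long) value); // 5.0 becomes "5"
        }
        return String.valueOf(value); // values with a fractional part stay as-is
    }
}
```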

Strangely, the JSON-LD test suite doesn’t include a test for this.

I was finally able to verify a signature from Mastodon! Correction: the two final sha256 hashes need to be lowercase hex strings, not byte arrays. I’ve updated the first post with that.