Bearcaps OT

I am curious now about the context that led to bearcaps and their suggested usage in AP / AS / JSON-LD.

I am looking at it right now from a perspective where the nodes in the federation are the lowest level of trust. (like, S2S only, leaving out C2S for the moment.)

Take the use-case that it should be enforceable to stop objects from spreading outside the follower-followed relationship boundary: e.g. when I post a note to my followers, they, and only they, should be able to read it. For the moment I would like to exclude Announce.

The naive way would be that when I compose the note and my server federates it via S2S, it is sent to the servers of my followers. The receiving server adds the note to its internal storage, and I trust the receiving server to show this note only to the users it was addressed to.
To ensure on our side that the note is not (accidentally) readable from the outside, we would protect it with authentication.
And that’s it.
Because we trust the remote server not to show the note to people who are not recipients, we can just give it the full note (a Create?), with the object embedded and not a reference. Because the note is embedded, the remote server does not need to care that it cannot dereference the ID; it has everything it needs.
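To make the naive version concrete, the delivery I have in mind would look roughly like this (just a sketch, all domains and IDs are made up):

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://instance-a.com/alice",
  "to": ["https://instance-a.com/alice/followers"],
  "object": {
    "id": "https://instance-a.com/alice/note",
    "type": "Note",
    "to": ["https://instance-a.com/alice/followers"],
    "content": "Followers only."
  }
  // POSTed by instance-a to the inbox of each follower's server
}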

Now bringing in Announce: Announce and a private note are somehow incompatible, unless you Announce it only to the followers who are allowed to see the note. This Announce would then only include an object reference, because I think one can assume the receiving users’ servers already have the note. (More on the case where they do not, later.)

Now, reading the public outbox of the followed actor: the private note won’t be included there, because it’s the public outbox and private notes are inherently incompatible with public outboxes, unless you are somehow authenticated and authorized. C2S could be impacted by this.
Thinking about authorization, only followers are allowed to see the note. So, for authentication it would be enough to prove that I am a follower. The first thing that comes to mind here is to prove it by signing the request using HTTP Signatures and (one of) my keys from my profile.
Authentication can also happen via an Authorization: Bearer ... header, which is where a bearcap could come into play. But this would mean that I somehow need a token from the remote server, me the user, not my instance. The remote instance could give my instance a token and my instance could pass that token on to me, but it is still my instance’s token.
This sounds strange to me. Why impersonate my instance when I can just as well prove that I am a follower / recipient?
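For reference, my current understanding of the bearcap construct itself (the parameter names are taken from the drafts as I understand them, so treat this as an assumption):

{
  // a bearcap URI bundles a dereferenceable URL (u) with a bearer token (t)
  "bearcap": "bear:?u=https://instance-a.com/alice/note&t=SOME-OPAQUE-TOKEN",
  "u": "https://instance-a.com/alice/note",
  "t": "SOME-OPAQUE-TOKEN"
  // the holder dereferences it with:
  //   GET https://instance-a.com/alice/note
  //   Authorization: Bearer SOME-OPAQUE-TOKEN
}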

Coming back to what happens if the server receiving the Announce does not have the object:
It will try dereferencing it and will fail with a 403 or 404, due to missing authentication.
Knowing that private objects can exist, the server can retry with authentication.
But which authentication? Above, for C2S, it was proving to be a follower. Now, for S2S, the server is not a follower, but if it proves that it is the instance it claims to be (via HTTP Signatures), the remote server can authorize it based on whether there are recipients on that instance.
Authentication could be via bearcap again, but why use a token when I can prove who I am just fine without one?
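As a sketch of that retry (made-up domains again; the second request is just an HTTP-Signature-signed GET as used today):

// 1. GET https://instance-a.com/alice/note  ->  403 / 404 (unauthenticated)
// 2. GET https://instance-a.com/alice/note
//    Signature: keyId="https://instance-b.com/actor#main-key", ...
// instance-a checks whether any addressed recipients live on instance-b
// and, if so, answers with the note:
{
  "id": "https://instance-a.com/alice/note",
  "type": "Note",
  "to": ["https://instance-a.com/alice/followers"],
  "content": "Followers only."
}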

Looking back at the Mastodon example, it seems like the note carries its recipients; if that’s wrong, my argument falls apart. (Because then I have no idea how deciding who is allowed to see the note would work.)

So, if this is a use-case for bearcap, why use bearcap here?

This is out of curiosity, trying to understand the background, not really saying it’s all wrong how it’s done. :slight_smile:

For me it’s easier to think about it when trying to solve the problem myself.

You’re focusing on a single use-case. The primary use-case in ActivityPub is that inboxes can become bearcap URIs, but that’s too invasive right now. Hardened access-control on AS2 objects is what is possible in the short term, but that’s not the only use case, and that’s all I am going to say on it. If you want to know more, somebody else can surely link you to the literally dozens of documents that have led to this point.

I want to know more, yes. If someone could drop me some hints, I would be really happy. :slight_smile:

I tried searching, but it’s a challenge finding the right keywords for that. ^^
But I found some of your blog posts, I think that’s a start. :smiley:

Since I don’t want to work on LDAP I will give another example where this construction is useful.

Mastodon (and Pleroma because of Mastodon API compatibility) has a followers-only scope that presently works like this:

  1. Alice sends a post to her followers collection.
  2. Bob replies to Alice’s post. Bob’s post goes to Bob’s followers collection.

When in reality, both posts should probably go to Alice’s followers collection, since one is a reply to the other. But because of the way signatures work, this is not really easily accomplished.

However, with a bearcap, it is possible for Alice to forward Bob’s post onward to her followers collection, as the post can be kept private with access restricted only to those Alice chooses to forward directly to. This can be enforced by expiring the access token fairly quickly (within a few minutes of issuing it to Alice), so Bob’s reply couldn’t be forwarded further through the network by Alice’s followers, as access has been revoked after a short amount of time.
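A rough sketch of what Alice hands onward (domains, IDs and the token are made up, and whether the wrapper is an Announce or plain inbox forwarding is left open here):

{
  "type": "Announce",
  "actor": "https://instance-a.com/alice",
  "to": ["https://instance-a.com/alice/followers"],
  // u points at Bob's server, t is the short-lived access token issued to Alice;
  // once t expires, the reference cannot usefully be forwarded any further
  "object": "bear:?u=https://instance-b.com/bob/reply&t=SHORT-LIVED-TOKEN"
}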

But really this stuff’s main advantage is removing the bottleneck of HTTP Signatures in the delivery path while still ensuring that authority is present, e.g. pre-authentication of peers.

(Yes, I assure you, I have heard the GOOD NEWS about Linked Data Signatures. Needless to say, they aren’t so great for confidentiality.)

Similar to if Alice had an AP-enabled blog?

  1. Alice writes a blog post, which is addressed to Alice’s blog’s followers collection
  2. Bob adds a comment. Bob’s comment is addressed to Bob’s followers. (And Alice, as the author?)

Now, the problem here is that the comment should additionally be addressed to Alice’s blog’s followers too?


And curiosity question(s) below. :slight_smile:

I do not understand where the problem with the signatures lies. Like, the signature says “this request came from $user”.
So, couldn’t you add the followers collection to the recipients, and forward the reply/comment via the collection’s inboxes?
When the remote server receives the request, sees the author (and can verify this via the signature), and sees that it is a reply to an object that was also addressed to that collection, could it accept that as authorization?

(Edit: From my understanding right now you want to completely replace http signatures for normal object federation, so this is kind of a stupid question.)

But I guess that’s going too deep into the details of a specific use-case :smiley:

So instead of federating objects / passing messages by POSTing to inboxes with a Signature in the HTTP header, the POST would carry an Authorization: Bearer ... header based on an inbox bearcap URI?
How do you get that bearcap URI? nvm

Yes, precisely. That way the conversation thread remains consistent, instead of broken by access violations.

Alice’s signature doesn’t prove Bob actually wrote the message. To prove that Bob actually wrote the message, the inline version either has to have an LDS signature (not ever happening in several implementations due to confidentiality concerns) or it has to be discarded and refetched from Bob’s server. That’s where the bearcap comes into play in this case.

When using bearcap with inboxes, yes, that’s the idea. And yes, that blog post has a very rough sketch on how a capability reference would be acquired, however it predated the conception of the bearcap construct, so where you see capability URLs in that example, think bearcap URIs instead.
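Purely as an illustration of the inbox idea (whether the inbox property itself would carry the bearcap, and how the token is issued per peer, are assumptions on my part):

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://instance-a.com/alice",
  "type": "Person",
  // peers deliver by POSTing to u with "Authorization: Bearer <t>"
  "inbox": "bear:?u=https://instance-a.com/alice/inbox&t=PEER-SPECIFIC-TOKEN"
}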

Ah, yes, I forgot about the relaying to other instances :smiley:


So how would this look with bearcaps/OCAP?

  1. Alice posts a note, addressed to her followers
    1.1. relayed to their instances by her instance
  2. Bob posts a reply, addressed to his followers, Alice and Alice’s followers
    2.1. relayed to his followers’ instances and Alice’s instance by his instance
    2.2. relayed to Alice’s followers’ instances by Alice’s instance

Alice’s instance’s relaying is validated by the other instances via a capability Bob attached to the reply, which allows Alice’s instance to relay his reply to Alice’s followers on his behalf.

Let me mess up some things:

The capability

{
  "id": "https://instance-b.com/cap/abc-def-ghi",
  "type": "Capability",
  "capability": [
    {
      "object": "https://instance-b.com/bob/reply",
      "capability": "relay"
    }
  ],
  "scope": "https://instance-a.com",
  "actor": "https://instance-b.com"
}

The reply, as sent from Bob’s instance to Alice’s instance and from Alice’s instance to other instances.

{
  "type": "Create",
  "actor": "https://instance-b.com/bob",
  "object": {
    "id": "https://instance-b.com/bob/reply",
    "inReplyTo": "https://instance-a.com/alice/note",
    "to": [
      "https://instance-a.com/alice",
      "https://instance-b.com/bob/followers",
      "https://instance-a.com/alice/followers"
    ],
    // [...]
  },
  "capabilities": [
    "https://instance-b.com/cap/abc-def-ghi"
  ]
}

Taking a step back from your detailed analysis: the ActivityPub protocol itself only has two behaviors, POST (to in- and out-boxes) and GET (of any IRI); I’ll just call them “delivery” and “fetch” for my sanity. (It specifies a lot of application side effects for the Social Media domain, but I view that as a separate “spec-within-a-spec”, so I’ll be ignoring those.)

Delivery is immediate, causes application side effects based on the ActivityStreams value, and afterwards the owner of the data has copies floating about. Nothing then stops the Byzantine Generals Problem: malicious actors, once they have the data, can do as they please. What we do control is what cooperative nodes do (isolate & defederate malicious nodes). A fetch, on the other hand, can happen at a later point in time to get a canonical copy of an ActivityStreams payload. Also, importantly, a delivery could be an application reaction to a previous action, which can involve a different scope or capability than the original action.

So let’s dive into HTTP Signatures and try to empower a user with them.

Fundamentally, HTTP Signatures are used today to tie a specific HTTP fetch or delivery to a specific user. It is a “push” mechanism where the one doing the action (fetch, delivery) is pushing their authorization and identity (however they define it) onto the receiving server. The receiving server can always build access controls on top of this to solve its problems, but those will always be built on top of the fetcher/deliverer’s concepts (like how they manage keys), not its own. An HTTP signature does not let the receiver do finer-grained permissions than the peer allows. If, say, a peer server decides that a group of 1 nice person and 10 assholes is identified by the same key, the receiving server can’t do much about that.

Additionally, when fetching, the receiving server must apply the HTTP signature key to some sort of access control list, and change it if it is granted/denied at a later time. Doable, but subject to the previous major caveat / security loophole (sender smearing a key across multiple users).

But! Even though it’s doable and bad already! If a user is delivering a reaction, then the receiving server must roll a custom access-control-based solution to map whether a key can perform a certain reaction (Announce, Create in response, etc.). And then federate that ACL to peers, because reactions delivered to other peers don’t necessarily include the originally-acting server, so peers don’t know locally how to verify whether reactions are legitimate! Ew.

So the problems are:

  1. Senders control who-gets-what key (key management) for fetches and deliveries, not data originators
  2. Data originators must roll their own ACLs, on top of this shaky ground
  3. Everyone needs to ask their peer if this HTTP Signature can indeed do this action.

So, that led to OCAP-LD (archive link). This involves sticking an object capability in the ActivityStreams payload itself. Object capabilities solve mainly the first problem above:

  1. Data originators manage keys, and can delegate – but can revoke any of these, including delegation!
  2. Data originators use revocation instead of ACLs. Time-based revocation is trivially supportable.
  3. Peers can simply validate signatures, and more complex revocation logic can be supported where needed.

However, OCAP-LD stuck these capabilities on the JSON-LD object itself:

{
  "@id": "https://example.com/very_malicious_to_deserialize",
  ...
  // Pick >= 1: 5 TB of noise, known attack against network libs, etc
  ...
  "proof": ...
}

…and depends on JSON-LD Signatures! Which requires processing the entire JSON-LD document. Which is computationally expensive. And I’m willing to bet, somehow Turing complete. So for these reasons, trying to modify the ActivityPub protocol behavior based on the value in a payload instead of leveraging the next-down OSI layer is problematic, design-wise.

Also! Fetching this object itself is problematic, as adding/removing/changing OCAP on this object necessitates Update Activities on the object itself, which presents challenges in delegation (ex: can delegate, but cannot give Updates?). So this adds two new problems:

  4. JSON-LD processing for JSON-LD Signatures
  5. Key delegation (protocol side effect) and Update ActivityStreams vocabulary (application side effect)!

So, OCAP is still a great concept, just having it in the JSON-LD payload is a bad idea. What is next?

Well, delivering a reaction and fetching both require an @id:

  • reactions require an @id to respond to
  • fetching requires an @id to dereference

In this case, it makes sense to use JSON-LD’s very-purposefully-loose @id to build something new that can leverage OCAP. Then, upon a network request, without needing to know anything about the payload, a receiver can somehow get a token that represents an OCAP and apply reasonable rulings.

But! And this is the keen insight that @kaniini (and @cwebber?) and others have worked on: instead of being an OCAP directly, it can be an OAuth2 bearer token, and the receiver manages the scope of the bearer like any other OCAP. I myself am a little fuzzy on the specific details of how bearer scopes and such will align with peering for certain activities (Resource Owners and Resource Servers in OAuth2 parlance). Hence, bearcap: stuffing an OCAP in an @id. :slight_smile: So, revisiting all problems, 1-5, again:

  1. Data owners control key management
  2. Management is just OAuth2.
  3. Checking a peer for more OCAP details is the only part that remains an unknown to me (a non-participating member)
  4. No deserialization/processing of JSON-LD required.
  5. Modifying/removing/creating OCAPs is simply bearer token management, independent of application side effects.
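To make that concrete, my own illustration (not taken from those documents) of what a bearcap referent could end up looking like on the wire, reusing the instance names from above; the bear: parameter names are my assumption from the drafts:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://instance-b.com/bob",
  "object": {
    "id": "https://instance-b.com/bob/reply",
    "type": "Note",
    // the referent is a bearcap: the receiver needs no JSON-LD processing to act
    // on it, it just GETs the u component with "Authorization: Bearer <t>", and
    // the data owner can revoke or expire t without issuing any Update activity
    "inReplyTo": "bear:?u=https://instance-a.com/alice/note&t=OPAQUE-TOKEN"
  }
}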

Hopefully this provides enough background to answer some questions.

Also, I’m happy to be corrected by other members of the community who participated more actively in these discussions.


You’re overthinking this. Servers right now refetch the object when the actor (which must match the signature) does not match the object they are acting on. That’s all that really matters here – authenticity is verifiable because they can go refetch the object, and this behaviour is useful because, well, you can’t get any more authentic than when Bob’s own server is serving the object to you. So, in short, what Alice is actually doing is advising her followers that a reply exists on Bob’s server, and that they can use a bearcap URI to retrieve it.
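As a sketch of that refetch step (reusing the made-up names from earlier in the thread; the short-lived token is the t component of the bearcap Alice handed out):

// the receiver discards any inlined copy and refetches from the origin:
//   GET https://instance-b.com/bob/reply
//   Authorization: Bearer SHORT-LIVED-TOKEN
// instance-b answers with the canonical object, whose id is the plain https URL:
{
  "id": "https://instance-b.com/bob/reply",
  "type": "Note",
  "attributedTo": "https://instance-b.com/bob",
  "inReplyTo": "https://instance-a.com/alice/note",
  "content": "..."
}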

It is important to note that a bearcap access token is not necessarily an OAuth2 bearer token. We just reuse the same mechanism. It is, however, possible to leverage an OAuth2 system with bearcap URIs.

At any rate, the use of a bearcap as a referent URI is built on the feature of JSON-LD that a referent URI does not have to match the final id of the object.