Following process in C2S & S2S Servers

If I implement an ActivityPub server (MyServer) that offers both S2S and C2S, some processes become interesting.

  • MyServer receives a Follow activity in the inbox of max and stores it in max’s “unaccepted following requests” collection

    • or the Follow activity is simply in max’s inbox!
  • A) The client (max) sends an Accept activity to MyServer with the Follow activity as its object.

    • A.1) MyServer sends the Accept activity, with the Follow activity as its object, to the actor that made the request.
    • A.2) If MyServer maintains an “unaccepted following requests” collection, the Follow activity is removed from it, and the requesting actor is added to the followers collection.
  • B) The client (max) sends a Reject activity to MyServer with the Follow activity as its object.

    • B.1) MyServer sends the Reject activity, with the Follow activity as its object, to the actor that made the request.
    • B.2) If MyServer maintains an “unaccepted following requests” collection, the Follow activity is removed.
  • C) The client (max) sends an Ignore activity to MyServer with the Follow activity as its object.

    • C.1) MyServer sends NO activity.
    • C.2) If MyServer maintains an “unaccepted following requests” collection, the Follow activity is removed.

If there is no “unaccepted following requests” collection, then pending follow requests are harder to find:
you always have to search the outbox for an Accept, Reject, or Ignore activity referencing each Follow activity.
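To make the A/B/C branches above concrete, here is a minimal sketch in Python. Everything is illustrative: the in-memory collections, the `deliver()` helper, and the URL layout are assumptions, not anything the spec prescribes.

```python
# Hypothetical in-memory sketch of the accept/reject/ignore flow above.
# Collection names and the deliver() helper are illustrative only.

def deliver(activity, inbox_url):
    """Placeholder for S2S delivery (in reality, a signed HTTP POST)."""
    print(f"deliver {activity['type']} to {inbox_url}")

def handle_response(response, pending_follows, followers):
    """Process a client-submitted Accept/Reject/Ignore of a pending Follow."""
    follow = response["object"]          # the original Follow activity
    requester = follow["actor"]

    if response["type"] in ("Accept", "Reject"):
        # A.1 / B.1: forward the response to the actor that made the request
        deliver(response, requester + "/inbox")
    # C.1: for Ignore, nothing is delivered

    # A.2 / B.2 / C.2: drop the Follow from the pending collection
    pending_follows.remove(follow)
    if response["type"] == "Accept":
        followers.append(requester)      # requester now follows max

# Example: max accepts a pending Follow from alice
follow = {"type": "Follow", "actor": "https://example.org/alice",
          "object": "https://example.org/max"}
pending = [follow]
followers = []
handle_response({"type": "Accept", "object": follow}, pending, followers)
```

After the Accept, `pending` is empty and alice appears in the followers collection; for a Reject or Ignore the Follow is simply dropped.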

Feedback welcome

I think there should be C2S API equivalents for most Mastodon API methods.

“unaccepted follow requests” collection ↔ /api/v1/follow_requests

2 Likes

the actor profile should have an additional endpoint, e.g. followRequests

{
  "@context": ["https://www.w3.org/ns/activitystreams",
               {"@language": "ja"}],
  "type": "Person",
  "id": "https://kenzoishii.example.com/",
  "following": "https://kenzoishii.example.com/following.json",
  "followers": "https://kenzoishii.example.com/followers.json",
  "followRequests": "https://kenzoishii.example.com/follow_requests",
  ...
}

However, I don’t necessarily think it makes sense to use the Mastodon API as a guide.

FEP-4ccd?

1 Like

it is becoming more and more apparent that we need a way to query and filter Collections. i don’t think the solution is to create a new property / special collection for every single type of activity as a subset of every collection.

assuming some SPARQL endpoint were present (perhaps under endpoints? perhaps a property of a Collection? perhaps available at the collection IRI directly?) then you could “easily” find all follows in your inbox with a query.

what’s also missing is a “follow your nose” way of going from a Follow activity to its resulting Accept or Reject, which could be signalled by the result property (provided that we also had a way to signal that some activity is the “result of” another activity…)

if we had those things, we could use a query that could look something like this:

PREFIX as: <https://www.w3.org/ns/activitystreams#>

SELECT ?item
FROM <https://example.com/actors/1/inbox/items>
WHERE
{
  ?item a as:Follow .
  MINUS
  {
    ?item as:result ?result .
    ?result a ?resultType .
    FILTER ( ?resultType IN ( as:Accept, as:Reject ) )
  }
}
1 Like

If I’m understanding this correctly, you don’t need the special result property. The object property for the Accept, Reject, Ignore can be used for the relationship to the Follow activity in the SPARQL query.
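The same filter, joined via `object` rather than `result`, can be sketched in plain Python over activity dicts (the activities below are made up for illustration): a Follow is pending if no Accept or Reject in the outbox has it as its object.

```python
# A Follow is "pending" if no Accept/Reject in the outbox points at it
# via the object property. Activity data here is illustrative.

def pending_follows(inbox, outbox):
    answered = {act["object"]["id"]
                for act in outbox
                if act["type"] in ("Accept", "Reject")}
    return [act for act in inbox
            if act["type"] == "Follow" and act["id"] not in answered]

inbox = [
    {"id": "https://example.org/f1", "type": "Follow"},
    {"id": "https://example.org/f2", "type": "Follow"},
]
outbox = [
    {"type": "Accept", "object": {"id": "https://example.org/f1"}},
]
print([a["id"] for a in pending_follows(inbox, outbox)])
# → ['https://example.org/f2']
```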

I like the idea of a SPARQL endpoint and think rdfpub will again provide one in the future.
But I don’t expect the majority of client developers to use it unless we have a list of queries with descriptions for copy & paste somewhere

How many developers will be prepared to implement SPARQL with any degree of completeness?

When I realised that I was heading in that direction, I threw out a load of my design and started over. Fascinating though the problem is, I’ve got far more important ones to solve.

(I’ll agree that we could use some kind of query mechanism, though.)

The alternative to SPARQL endpoints on the server is to page through the entire inbox collection and perform the query locally on the client side.

For monolithic implementations where your “instance” has the database available, you don’t need this.

For generic servers at present, clients are expected to do all the work, and therefore end up having to fetch, page, and cache the entire collection.
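What that client-side work looks like can be sketched as follows; `fetch_json()` stands in for an authenticated HTTP GET and is stubbed here with canned pages, and the URLs and items are invented for the example.

```python
# Sketch of a client paging through an entire OrderedCollection and
# filtering locally. fetch_json() is a stub for an HTTP GET; the pages
# and URLs below are illustrative.

PAGES = {
    "https://example.com/inbox": {"first": "https://example.com/inbox?page=1"},
    "https://example.com/inbox?page=1": {
        "orderedItems": [{"type": "Follow", "id": "f1"},
                         {"type": "Create", "id": "c1"}],
        "next": "https://example.com/inbox?page=2"},
    "https://example.com/inbox?page=2": {
        "orderedItems": [{"type": "Follow", "id": "f2"}]},
}

def fetch_json(url):
    return PAGES[url]            # a real client performs an HTTP GET here

def collect_items(collection_url):
    """Walk first/next links and yield every item in the collection."""
    page_url = fetch_json(collection_url)["first"]
    while page_url:
        page = fetch_json(page_url)
        yield from page.get("orderedItems", [])
        page_url = page.get("next")

follows = [a for a in collect_items("https://example.com/inbox")
           if a["type"] == "Follow"]
print([a["id"] for a in follows])   # → ['f1', 'f2']
```

Every page is fetched even though only a fraction of the items match, which is exactly the cost a server-side query mechanism would avoid.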

For generic servers that are LD/RDF compatible, I think SPARQL is a natural extension to implement.

For clients, I think they are free to use what is available and understood. There is an inherent negotiation process between client and server, where the server provides some set of functionality, and the client has to figure out which mechanisms are available to it that are also understood.

A client that chooses not to understand or use RDF is therefore going to want alternative mechanisms and extensions, which means that server implementers are now in the situation of not only implementing alternative mechanisms, but also of waiting for someone to actually specify those alternative mechanisms. In the narrow case of “pending follows”, there is FEP-4ccd as mentioned above. For other use cases? That means a FEP and probably extension property for a filtered collection view, multiplied by each use case. This would lead to an explosion of complexity that ends up being painful for both client devs and server devs. And also for spec people who end up reinventing all manner of wheels.

To me, it makes more sense to look toward what already exists. A single mechanism that handles unbounded and broadly similar use cases? That seems better than unbounded mechanisms that each handle a single slightly different use case.

1 Like

I agree. I built a (private) LD/RDF-based server and provided a SPARQL endpoint. The challenge I encountered was how to implement access control. The SPARQL endpoint effectively allowed access to everything. There might be some SPARQL libraries that provide hooks for access control, but I don’t know of any.

1 Like

i think the current thinking is to handle auth(n/z) separately at e.g. the HTTP layer, using the Authorization header or similar. in a federated environment where a query might hit multiple endpoints, there’s an open issue for this: add Authentication to Federation · Issue #117 · w3c/sparql-dev · GitHub

1 Like

I agree that querying this would be useful, but I share the concern that for half the developers we’re trying to interop with, JSON-LD is already a begrudgingly-ignored aspect of the wire format. Making both sides of the interop stand up a functional SPARQL endpoint might be politically costly or impossible even if it’s technologically the simplest or best solution to this particular problem… is it possible to sketch the problem out in more detail to brainstorm alternatives, corner cases, and fallbacks?

I was referring to authz rather than authn. The latter is a relatively easy problem to solve (although there may be complications with varying authn support across a set of federated SPARQL services, as discussed in the issue you referenced).

I’m also talking about triple-level authz, not graph or service level. If the SPARQL endpoint is read-only that simplifies the problem a bit, but it’s essentially the same issue. I think the triple-level authz is going to be required to implement typical AP data access constraints in the SPARQL endpoint.

I don’t know of SPARQL libraries that support that level of authz and I think that implementing a custom SPARQL engine with that capability would be a major project.

sorry, i typo’d authn instead of authz

in any case, triple-level control is definitely a hard problem, and it’s also one that matters to the social web – A mechanism for showing additional profile information to selected audiences · Issue #457 · w3c/activitypub · GitHub

Does SPARQL require a special type of database?
I wonder what are the alternatives. Perhaps jq can be used to filter collections?

i think it assumes / works best under an environment where you’re using a triple-store to store the triples directly? so it could work with a graph database, but a more classical relational database might have some additional work to do to convert from whatever schema it’s using into a more node/edge oriented data format.

it’s probably less efficient though? it seems to me like triple-stores and graph databases are just more “optimized” for this, but you can still do it with tabular data – the article linked above uses “recursive queries” to navigate through the graph, and i don’t have numbers for how that affects performance.

this implies having access to the whole collection locally in JSON form, right? so it’s not something that’s fit for clients to request from servers, but it could work for the server to internally calculate a filtered collection view. but then again, the server could use anything it wants to use here, as long as the output is consistent.

Not necessarily. I’m guessing @silverpill might have meant using the jq query language. If so, that could be done on the server side. However, like SPARQL works best with RDF data models, the jq language is going to work best with JSON. Many AP servers do not store data in JSON format. However, jq (or maybe a variant of the MongoDB query language) would probably be easier to implement than SPARQL for non-LD servers.

Yes, I was referring to the jq language. I’m only familiar with the jq command-line utility, but apparently there is a whole ecosystem around it, including a Postgres extension.
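For comparison, the kind of jq filter being discussed, e.g. `.orderedItems[] | select(.type == "Follow")`, has a direct equivalent using only Python’s json stdlib (the collection document here is made up):

```python
# Equivalent of the jq filter:  .orderedItems[] | select(.type == "Follow")
# The collection JSON is illustrative.
import json

doc = json.loads("""
{"type": "OrderedCollection",
 "orderedItems": [
   {"type": "Follow", "id": "f1"},
   {"type": "Like",   "id": "l1"},
   {"type": "Follow", "id": "f2"}]}
""")

follows = [item for item in doc["orderedItems"] if item["type"] == "Follow"]
print([f["id"] for f in follows])   # → ['f1', 'f2']
```

Either way, this operates on a JSON document the server already has in hand, which fits the earlier point that jq-style filtering suits servers that store activities as JSON.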