Have you had a look at how your thinking on the division of labour between server and client compares with how @stevebate is doing it in Flowz?
I admittedly haven’t looked into Flowz so I can’t comment on it specifically. My comment there was mainly about things like timelines and reply trees, especially doing them fast enough to be as snappy as, say, Mastodon.
Say, if you need to do timelines on the client, you'll need to re-load your inbox and filter out every activity that isn't relevant (follows, likes on other people's posts, someone messing around with their collections) to extract `Create`s and `Announce`s (and whatever else you're interested in). This can be done fully client-side if you try, but it would be slow enough to impact UX. Doing this once, at activity ingestion time (which implicitly requires a consistently online server to handle activities whenever they may come in), and serving a pre-filtered timeline to the client as-is (potentially just as object IDs to be loaded by the client later on) would be more efficient.
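As a rough illustration, here's a minimal TypeScript sketch of ingestion-time filtering; the `Activity` shape and the `appendToTimeline` store are hypothetical stand-ins, not any particular implementation's API:

```ts
// Hypothetical, simplified activity shape for illustration.
interface Activity {
  id: string;
  type: string;
  actor: string;
  object?: { id: string } | string;
}

// Only these activity types produce timeline entries; everything else
// (Follow, Like, collection updates, ...) is dropped at ingestion.
const TIMELINE_TYPES = new Set(["Create", "Announce"]);

function onInboxActivity(
  activity: Activity,
  appendToTimeline: (objectId: string) => void, // hypothetical store
): void {
  if (!TIMELINE_TYPES.has(activity.type)) return;
  const objectId =
    typeof activity.object === "string" ? activity.object : activity.object?.id;
  // Store only the object ID; the client can fetch the full object later.
  if (objectId) appendToTimeline(objectId);
}
```

The point is that the filtering cost is paid once per incoming activity, not once per timeline load.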
Reply trees are similar, but the problem there isn't so much the computational overhead as the "request waterfalls" you hit while loading each individual reply collection to fetch nested replies (or parent posts, by walking `inReplyTo` chains). The `context` collection can partially solve this, but loading the entire collection just to get the new replies (or ones in the middle, so you can't iterate it backwards) would be just as slow. A server could pre-compute a tree ahead of time and serve the entire tree as-is in a single response (bar pagination).
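A hedged sketch of what that single response might look like; the `ReplyNode` shape is invented for illustration and isn't part of any standard:

```ts
// Hypothetical pre-computed reply tree, resolved entirely server-side.
interface ReplyNode {
  objectId: string;     // the post itself (an object ID the client can fetch)
  replies: ReplyNode[]; // nested replies, already walked for the client
}

// The client gets the whole thread in one round trip and can, say,
// count or render it without issuing any follow-up requests.
function countPosts(root: ReplyNode): number {
  return 1 + root.replies.reduce((sum, child) => sum + countPosts(child), 0);
}
```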
This part assumes all remote instances offer reply collections, which they don't; Pleroma/Akkoma are, I believe, the largest implementations not supporting them. A server could instead compute reply trees by matching `inReplyTo` in incoming `Create`s, allowing its trees to be more complete. If multi-user, it could also index replies from people that other users on your "client instance" follow, making the impact of missing reply collections less of a problem.
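For instance, a minimal sketch of that `inReplyTo` indexing, assuming an in-memory map where a real server would persist to a database:

```ts
// Hypothetical reply index: parent object ID -> IDs of known replies.
const repliesByParent = new Map<string, string[]>();

interface Note {
  id: string;
  inReplyTo?: string;
}

// Called with the object of every incoming Create.
function indexReply(note: Note): void {
  if (!note.inReplyTo) return;
  const siblings = repliesByParent.get(note.inReplyTo) ?? [];
  siblings.push(note.id);
  repliesByParent.set(note.inReplyTo, siblings);
}

// A tree can later be assembled from the index without asking the
// origin server for a reply collection it may not offer at all.
function childIds(parentId: string): string[] {
  return repliesByParent.get(parentId) ?? [];
}
```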
Additionally, "client instance" administrators can moderate the posts indexed by their clients, which I expect will be the primary avenue for moderating remote content.
A server would also be needed for functionality that requires fast response times, like non-locked accounts (which automatically `Accept` their `Follow`s) or things like GoToSocial's interaction approvals (assuming you don't intend to manually review every `Like` that comes your way).
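To make the auto-accept case concrete, a sketch under the assumption of a hypothetical `deliver()` helper and account-lock lookup:

```ts
// Hypothetical incoming Follow, reduced to the fields we need.
interface FollowActivity {
  id: string;
  type: "Follow";
  actor: string;  // the would-be follower
  object: string; // the account being followed
}

async function onFollow(
  follow: FollowActivity,
  isLocked: (account: string) => boolean,                        // hypothetical lookup
  deliver: (activity: object, toActor: string) => Promise<void>, // hypothetical delivery
): Promise<void> {
  if (isLocked(follow.object)) return; // locked accounts review manually
  // Respond immediately, so the follower isn't left waiting for a
  // human (or an offline client) to come around and confirm.
  await deliver(
    { type: "Accept", actor: follow.object, object: follow.id },
    follow.actor,
  );
}
```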
There’s also a data size aspect to this. Unlimited mobile data still isn’t a thing in parts of the world (including where I live). Even if you could tolerate the computational overhead or the waiting involved with offloading these to a client, you’d still need to download all the data you discard, which directly translates to how expensive it is to participate in the network. In addition to the pre-filtering, a server could use a specialized API that offers more compact, possibly non-standard, representations of data for extremely low-bandwidth uses.
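Purely as an illustration of the idea, here's what such a compact representation could look like; every field name below is invented:

```ts
// Hypothetical compact timeline entry for metered connections.
interface CompactEntry {
  i: string; // object ID
  a: string; // author (attributedTo)
  t: number; // published, as a Unix timestamp
  c: string; // content, possibly truncated server-side
}

// The server maps full ActivityStreams objects down to the compact
// form before sending, shrinking the payload the client has to pay for.
function toCompact(obj: {
  id: string;
  attributedTo: string;
  published: string;
  content: string;
}): CompactEntry {
  return {
    i: obj.id,
    a: obj.attributedTo,
    t: Math.floor(Date.parse(obj.published) / 1000),
    c: obj.content,
  };
}
```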
As far as I'm aware, the only way to avoid running these on a server-hosted client is to bake them into the C2S server itself, which severely bloats the scope of the C2S server and reduces client flexibility, since clients now have to work under the constraints imposed on them by the C2S server, creating essentially an "ActivityPub-flavored Mastodon API" situation.