For propagating updates:
On the surface, the maximum number of polls would be one request every two seconds per user. Boosting should not change this: a boost is still a social interaction, so it is already part of the calculation.
A very naive estimate: my browser fires 20-30 requests per second when accessing the website of the Washington Post. At one poll every two seconds per followed user, that rate supports up to 60 followed users, even with very inefficient formats and without any optimization.
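As a back-of-the-envelope sketch of that estimate (the function name is illustrative, not from any implementation): the supported number of followed users is simply the sustainable request rate times the poll interval.

```rust
// Naive capacity bound: if the client sustains `requests_per_second`
// and each followed user needs at most one poll every
// `poll_interval_seconds`, the product is the supported user count.
fn max_followed_users(requests_per_second: u32, poll_interval_seconds: u32) -> u32 {
    requests_per_second * poll_interval_seconds
}

fn main() {
    // 30 req/s observed, 1 poll per user per 2 s => 60 users.
    println!("{}", max_followed_users(30, 2));
}
```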
I am not sufficiently well-versed in the interactions between Mastodon instances to estimate how much they can reduce the number of requests by pooling per instance.
The worst case would be following one user per instance, so nothing could be optimized at the instance level.
For the stored data:
In the implementation I wrote, adding a trust connection to a known ID requires 3 bytes in memory (a u16 index to the ID plus an i8 trust value) and 14 bytes in non-optimized plaintext storage on disk.
Adding a trust connection to an unknown ID requires roughly 100 bytes in memory (depending on the representation of the reference to the ID) and 120 bytes in unoptimized plaintext storage.
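The 3-byte figure for a known ID can be sketched directly in Rust (the struct and field names are illustrative, not taken from the actual implementation):

```rust
// A trust connection to a known ID: a u16 index into the ID table
// plus an i8 trust value. Packed, this is exactly 3 bytes.
#[repr(C, packed)]
struct TrustEdge {
    target: u16, // index into the table of known IDs
    trust: i8,   // trust value, e.g. -100..=100
}

fn main() {
    println!("{}", std::mem::size_of::<TrustEdge>());
}
```

Without `packed`, alignment padding would round this up to 4 bytes, so the packed layout is what makes the 3-byte figure exact.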
This is not optimized yet; more than 60% of the project is still to do, so this is just a prototype.
I tested my current implementation with 16k IDs and 200k trust relationships from real Freenet Web of Trust data (3.2 MiB on disk; using integers instead of URIs to reference IDs reduces the size from 300 MiB to something usable). This currently requires 120 MiB in memory (most of that due to delayed garbage collection), and adding all trust relationships one by one without score recomputation requires 60 minutes of CPU time.
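The URI-to-integer trick mentioned above is plain string interning. A minimal sketch, with illustrative names and placeholder IDs rather than anything from the real implementation:

```rust
use std::collections::HashMap;

// Intern each ID string once; everywhere else, reference it by a
// small u16 index instead of repeating the full URI. This is what
// shrinks the stored trust relationships from URI-sized to 3 bytes.
struct Interner {
    ids: Vec<String>,            // index -> ID string
    index: HashMap<String, u16>, // ID string -> index
}

impl Interner {
    fn new() -> Self {
        Interner { ids: Vec::new(), index: HashMap::new() }
    }

    fn intern(&mut self, id: &str) -> u16 {
        if let Some(&i) = self.index.get(id) {
            return i; // already known: reuse the existing index
        }
        let i = self.ids.len() as u16;
        self.ids.push(id.to_string());
        self.index.insert(id.to_string(), i);
        i
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("id:alice");
    let b = interner.intern("id:bob");
    // Interning the same ID again yields the same index.
    println!("{} {} {}", a, b, interner.intern("id:alice"));
}
```

Note that u16 caps the table at 65536 distinct IDs, which is fine for the 16k-ID test data but would need widening for a larger WoT.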
The full score computation is not yet optimized either: calculating all scores for all IDs a very well-connected user knows, up to 9 steps away, currently takes a few seconds.
Keep in mind, though, that this is still missing essential optimizations, so it gets slower as the WoT grows. Missing are:
- incremental score recomputation, as in the existing WoT code in Freenet (it gets recomputation on adding a trust value down to the tens-of-milliseconds range, but is quite complex), and
- bounded polling and pruning stale IDs from the scalability calculation (pruning stale IDs reduces the number of IDs to track, which speeds up everything).
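The pruning step in particular is simple to sketch. This is a hypothetical illustration under the assumption that each ID carries a last-seen timestamp, not the planned implementation:

```rust
use std::collections::HashMap;

// Drop IDs not seen for longer than `max_age`, so polling and score
// computation only ever touch the remaining live IDs.
fn prune_stale(last_seen: &mut HashMap<String, u64>, now: u64, max_age: u64) {
    last_seen.retain(|_, seen| now.saturating_sub(*seen) <= max_age);
}

fn main() {
    let mut last_seen: HashMap<String, u64> = HashMap::new();
    last_seen.insert("alice".into(), 100); // seen recently
    last_seen.insert("bob".into(), 10);    // stale
    prune_stale(&mut last_seen, 110, 30);
    println!("{}", last_seen.len()); // only the live ID remains
}
```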