Maybe propagating blocks can be made clearer by directly calling the operation “report”.
Edit: Also needed to install wisp, will update again if I still need help.
Pulled and made sure I have those installed, but getting the same error. wispwot bootstrap error · GitHub
I noticed install: is commented out in Makefile.am. The readme also says to look at INSTALL for more info but I didn’t see one.
I’m not too familiar with SourceHut, but tickets seem like they may be helpful. Does it do pull requests as well? GitHub/GitLab are also options for making it easier to work together on this code.
Created a REST server using Python/Flask and am looking forward to connecting it to wispwot and/or implementing some trust calcs.
$ curl http://192.168.49.198:20202/
[
  {
    "name": "wot-server",
    "description": "Server to support moderation via web-of-trust. Visit github.com/weex/wot-server for more info."
  }
]
$ curl http://192.168.49.198:20202/status
{
  "uptime": "29",
  "free": "30308.875 Mb"
}
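In case it's useful for comparing notes, here is a minimal Flask sketch of two endpoints shaped like the curl output above. It is illustrative only, not the actual wot-server code: the port and the uptime value (seconds since the process started) are assumptions.

```python
# Minimal sketch of endpoints shaped like the curl output above.
# Illustrative only, not the actual wot-server implementation; the
# uptime reported here is just seconds since the process started.
import time
from flask import Flask, jsonify

app = Flask(__name__)
START = time.time()

@app.route("/")
def index():
    return jsonify([{
        "name": "wot-server",
        "description": "Server to support moderation via web-of-trust.",
    }])

@app.route("/status")
def status():
    return jsonify({"uptime": str(int(time.time() - START))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=20202)
```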
Got through most of xor’s paper and decided to implement something in Python using the networkx graph library. I haven’t been able to run wispwot yet, and this is probably not the same algorithm, but the scores it generates seem plausible for an initial full run.
real 0m1.429s
user 0m1.328s
sys 0m0.095s
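For comparison, this is roughly the shape of the full run I have in mind with networkx: rank identities by hop distance from the own identity, map rank to a capacity, and accumulate capacity-weighted trust values. The capacity table and the "trust" edge attribute (range -100..100) are assumptions for illustration; this isn't claimed to match wispwot's algorithm.

```python
# Rough sketch of a full scoring run over a trust graph with networkx.
# Rank = hop distance from the own identity (negative-trust edges are not
# followed), capacity decays with rank, and scores accumulate
# capacity-weighted trust values. The capacity table and the 'trust'
# edge attribute (range -100..100) are assumptions for illustration.
import networkx as nx

CAPACITIES = [100, 40, 16, 6, 2]  # rank 0..4; anything deeper gets 1

def capacity(rank):
    return CAPACITIES[rank] if rank < len(CAPACITIES) else 1

def full_run(graph, own_identity):
    # Build a view of the graph containing only non-negative trust edges
    # and compute each identity's rank (hop distance) from own_identity.
    positive = nx.DiGraph()
    positive.add_nodes_from(graph.nodes())
    positive.add_edges_from(
        (u, v) for u, v, d in graph.edges(data=True) if d["trust"] >= 0
    )
    ranks = nx.single_source_shortest_path_length(positive, own_identity)

    # Every reachable truster contributes capacity * trust / 100 to each trustee.
    scores = {}
    for truster, rank in ranks.items():
        cap = capacity(rank)
        for _, trustee, data in graph.out_edges(truster, data=True):
            scores[trustee] = scores.get(trustee, 0) + cap * data["trust"] / 100
    return scores
```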
Could you try simply running wispwot from the source directory? autoreconf -i; ./configure; make; ./run-wispwot.w --test
Very nice! One thing I wasn’t sure of while reading: Do you also use a node as part of a path if the trust is negative? I see that you’re cutting ranks for those with value < 0, but it looked like you’re still allowing networkx to use them as intermediate nodes.
It seems I am! I guess I need to leave those negative-value edges out of the graph before running shortest path.
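For what it's worth, networkx can do that filtering without copying the graph, e.g. with an edge-filtered view. The "trust" edge attribute name here is an assumption; adjust to whatever your graph uses.

```python
# View of the trust graph without negative-trust edges, so shortest-path
# ranking never routes through a distrusted identity.
# Assumes edges carry a 'trust' attribute; adjust to your schema.
import networkx as nx

def nonnegative_view(graph):
    return nx.subgraph_view(
        graph, filter_edge=lambda u, v: graph[u][v].get("trust", 0) >= 0
    )

# e.g. ranks = nx.single_source_shortest_path_length(nonnegative_view(G), own_id)
```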
Working now on the incremental update with the improvement from the paper, and wispwot has now been running for a while. Is there a way to run it for just one OwnIdentity? Looking forward to comparing results either way.
That command worked to run the test suite, which took a couple of hours, and it looks like all the tests passed except for an import-CSV one. What I’d like to do is run a test that generates scores based on that CSV and outputs them to another CSV.
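In case it helps with that, here is a bare-bones CSV-in/CSV-out harness in Python. The column names (truster, trustee, value) are my guess at a dump layout for illustration, not wispwot's actual format.

```python
# Bare-bones CSV-in, CSV-out scoring harness. The column names
# (truster, trustee, value) are assumptions, not wispwot's format.
import csv
import networkx as nx

def load_trust_graph(path):
    g = nx.DiGraph()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            g.add_edge(row["truster"], row["trustee"], trust=int(row["value"]))
    return g

def write_scores(scores, path):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["identity", "score"])
        for identity, score in sorted(scores.items()):
            writer.writerow([identity, score])

# Example (full_run as in the earlier sketch):
# write_scores(full_run(load_trust_graph("trust.csv"), "me"), "scores.csv")
```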
For my Python implementation, I realized it needed unit tests, so I started adding some and found a couple of bugs. I pushed an interim version that passes on the smallest trust network I could generate. The next step there will be to finish the incremental calculation.
I’d love to see each implementation generate the same scores for the big file as well as for a small network that has all of the kinds of links one would expect.
Edit: Pushed a version that covers all the basic cases of an incremental trust update. python test.py runs the tests.
I did not catch up with this thread, but FYI via this post of Alexander Cobleigh (boosted by @wakest) I was made aware of this interesting Ink & Switch project:
There will be a talk about the subject by Karissa McKelvey at the StrangeLoop conference:
Hello all!
I was notified on fedi by @aschrijver of this thread and figured I would drop by to quickly link & mention my Master’s Thesis research into using transitive trust for implementing content moderation in chat systems.
Fundamentally, what this work proposes is implementing a subjective content moderation system, which is built on top of explicit (weighted) trust assignments. To make the weights actually work, I propose using human-significant labels (words) instead of the numbers they encode. To make the system more adaptable, a notion of trust areas is introduced, which lets you have separate trust pools using the same identity (e.g. one for your moderation network, another for sourcing your music recommendations). There are also some nuances introduced:
- anyone that you trust directly (even with a low trust weight!) is trusted, and therefore they are included in your trusted set of e.g. moderators
- the rankings that are calculated are not “social scores”; they are used to split the entire networked cohort into a trusted set and a (sufficiently) untrusted set (by splitting the rankings using a clustering algorithm, ckmeans; see the sketch after this list). You can still get at the entire chain of trust if you want to.
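[Editorial aside from the thread] To make that splitting step concrete, here is a rough Python stand-in for it. TrustNet itself (Node.js) computes rankings with Appleseed and splits them with ckmeans; this sketch just cuts the sorted 1-D rankings at the largest gap and always keeps directly-trusted peers, and the label-to-weight mapping is an example rather than the thesis's exact values.

```python
# Illustration of the "split the cohort into trusted / untrusted" step.
# TrustNet (Node.js) ranks with Appleseed and splits with ckmeans; this
# stand-in cuts the sorted rankings at the largest gap instead, and the
# label-to-weight mapping below is an example, not the thesis's values.
TRUST_LABELS = {"none": 0.0, "low": 0.25, "high": 0.75, "absolute": 1.0}

def trusted_set(rankings, direct_trust):
    """rankings: {peer: score}; direct_trust: {peer: weight} for your own assignments."""
    ranked = sorted(rankings.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) < 2:
        trusted = {peer for peer, _ in ranked}
    else:
        # Cut where the gap between consecutive scores is largest.
        gaps = [(ranked[i][1] - ranked[i + 1][1], i) for i in range(len(ranked) - 1)]
        _, cut = max(gaps)
        trusted = {peer for peer, _ in ranked[: cut + 1]}
    # Anyone you trust directly is included, even with a low weight.
    trusted |= {peer for peer, weight in direct_trust.items() if weight > 0}
    return trusted
```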
TrustNet lacks negative trust, as that would be modeled by actions taken by the actors who are trusted. If Carole trusts Alice, and Alice blocks Bob, then it is the trust from Carole to Alice that is being modelled by TrustNet, and Alice’s action is transitively applied by Carole as a result of that trust. There are, however, explicit, non-transitive distrust statements you can issue to make sure someone you distrust will never be part of your set of trusted participants.
More info
- A shorter article summarizing my work: alexander cobleigh - cblgh.org
- The TrustNet Node.js implementation & repository: GitHub - cblgh/trustnet (a flexible and distributed system for deriving, and interacting with, computational trust)
Hope this is interesting food for thought for you all!
Welcome and thank you @cblgh for sharing your thesis and with an implementation as well!
We’re still a ways from a WoT-based moderation strategy connecting to something like Mastodon but when that happens I’d love for it to support your strategy as well as the Freenet one.
I don’t know if we should create a mailing list or have a hangout or a hackathon, but I’d love to accelerate it one way or another. Open to ideas as far as how to draw attention and development energy to this.
Thank you for sharing!
One thing I’m missing while reading the description (not the whole thesis, sorry :-/ ) is what I can do if someone abuses network-hides. Basically if someone abuses his or her strong position in the network to censor.
If you want to test your algorithm against a real-life censorship-attack, you can use the under-attack part of the anonymized freenet wot dataset on figshare: The Freenet social trust graph extracted from the Web of Trust
It’s kind of a tricky question that gets into the social aspect of communities, rather than purely technical considerations. In TrustNet, you can always demote (meaning: remove your trust for them and/or signal them as distrusted) such a person with the end result that their actions will cease to have an effect on your view—and that of others trusting you, if you are the main link connecting them to the now-untrusted person.
Going a bit deeper into the topic: if you have a moderation system that uses TrustNet and which surfaces moderation messages such as x hid y (for <reason>), then that acts as a natural mechanism to check powers. That is, if you notice that one of the people you are delegating moderation to starts to hide lots of people you like to chat with, well, then you will probably have second thoughts about the hider.
Ultimately, if someone is abusing their strong position in a network, it is a social concern that needs to be handled with social measures. Calling someone out for their power abuse is one often-used facet of democratic society. As noted above, it would be important to somehow surface the moderation action that is being taken and influencing your view; without that feedback the power misuse might go unnoticed.
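[Editorial aside from the thread] A small hedged sketch of the "cease to have an effect" mechanics (my illustration, not TrustNet's API): hide actions only apply when the hider is in the viewer's current trusted set, so demoting someone immediately stops their hides from shaping your view, and returning the applied actions gives a client something to surface as "x hid y (for <reason>)".

```python
# Sketch (not TrustNet's API): hides only apply when the hider is in the
# viewer's current trusted set, and the applied actions are returned so a
# client can surface "x hid y (for <reason>)" to the user. Demoting a
# hider removes them from the trusted set, so their hides stop applying.
def apply_moderation(posts, hide_actions, trusted):
    """posts: {post_id: author}; hide_actions: [(hider, hidden_author, reason)]."""
    applied = [a for a in hide_actions if a[0] in trusted]
    hidden_authors = {hidden for _hider, hidden, _reason in applied}
    visible = {pid: author for pid, author in posts.items()
               if author not in hidden_authors}
    return visible, applied
```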
Adding a couple more resources that came up today:
A bit of a side-track from the topic, but I just wanted to mention this neat small-tech web-of-trust project called “Interverse” that I bumped into via this Libre Solutions Network toot:
Good article here, on why the GPG web of trust failed. tl;dr spam attack
I’m going to work on some algorithms for a follower-based web of trust in Nostr, but the same could be applied to the Fediverse. The only problem is that the algorithm is projected to take 69 hours to run. Here’s my start on it; the code can be run with node index.js