phwwww… the breakdown in "trust" is a problem, the #openweb is trust based, so this is a breakdown of the #openweb itself, not a good path to be on, ideas please?
This is a path, but the Fediverse is people to people, thus built on trust relationships; here there is an understandable pushback on tech fixes to replace this human trust building, whereas in #nostr and #bluesky this tech-fix path is more central to the projects.
The "debate" is fluff/spiky; we build (tech) bridges to cross this social diversity. A good outcome.
I 100% agree that we need to build trust. I've written a white paper here on one possible way we could tackle the trust problem.
Flagging, reporting, human moderation, and law enforcement can and should be used to address issues such as abuse, bullying, or harassment. Unfortunately, history has shown that these tools do not scale and inevitably lead to disagreements about what mods should and should not allow. I believe these tools are ill-equipped to deal with the mass spread of misinformation.
I agree with @melvincarvalho that we need a Web of Reputation. This is how our ancestors dealt with trust for countless generations. "My word is my bond." People used to be disincentivized to lie because they knew it would hurt their reputation, which would result in real consequences like being denied jobs, becoming social outcasts, or even being exiled. But now communities are so large, and so much communication on the internet is anonymous, that people can spread lies with little to no repercussions.
I believe that if individuals are better informed about how much the community trusts something, then they can make better decisions about what to trust.
@hamishcampbell I also agree that tech fixes are not a replacement for human trust building. This proposal is indeed a "tech fix"; however, it is not intended to replace human trust building, but rather to better inform humans so that they can trust each other.
Briefly reading through nostr's page on their "Trust Rank" system, @melvincarvalho, it seems you all have similar ideas.
I can also contribute one more piece of "prior art" from another context that might be useful: the Trust Establishment data model open-sourced at DIF by a collaboration between TBD/Block and Indicio, which work, respectively, on Web5 tooling (including the personal data stores used by BlueSky) and DIDComm (another P2P architecture where all actors mutually authenticate with DPKI).
I think the trust/trustworthiness distinction is crucial here, because the software industry and engineers often use the word "trust" to mean something closer to trustworthiness (or connection-worthiness, inclusion-in-dataset-worthiness, trust-in-my-own-judgment, trust-in-signals-aggregated-about, etc). A good tactic might be using the word "trust" for subjective, interpersonal, collective, moderation, policy, etc. decisions, and "trustworthiness" for quantifiable inputs TO those collective-or-individual human decisions. I.e., the system giving me a 0.72 rating on an actor would be trustworthiness, and me setting my thresholds to "hide any actor under 0.75" would be a trust action.
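To make that distinction concrete, here's a minimal sketch (the `Actor` class, the score values, and the `hide_below` parameter are all invented for illustration, not from any existing spec): the float rating is a trustworthiness signal produced elsewhere, and the threshold I apply to it is my own trust decision.

```python
# Illustrative only: "trustworthiness" is an input signal computed elsewhere;
# the threshold I choose to act on it is my "trust" decision.
from dataclasses import dataclass

@dataclass
class Actor:
    uri: str
    trustworthiness: float  # aggregated signal in 0.0 .. 1.0, produced by some system

def visible_actors(actors: list[Actor], hide_below: float = 0.75) -> list[Actor]:
    """My trust action: decide which trustworthiness signals I act on."""
    return [a for a in actors if a.trustworthiness >= hide_below]

actors = [
    Actor("https://example.social/@alice", 0.91),
    Actor("https://example.social/@mallory", 0.72),
]
print([a.uri for a in visible_actors(actors)])  # mallory (0.72) is hidden by my 0.75 threshold
```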
I also think that, variously explicit across all these prior arts, there is a range of "serveriness" -- architectures at the p2p end of the spectrum tend to keep most trust decisions out of the hands of "servers" or aggregators, while ActivityPub is at the other end, using servers to cluster/aggregate both trust and trustworthiness decisions.
Another useful heuristic would be the ol' ABCs of social systems rubric - Actors, Behaviors, and Content. All three require different metrics, different authorities, and different liabilities! For instance, actors judging actors (on a purely p2p basis) can be aggregated up into a useful graph-crawling-based web approach, but content is usually jurisdictional and relative, in most cases best handled by authorities (civil society in particular), while behavior (incl. spam policies and inauthentic behaviors, sold accounts, etc) requires a relatively long and complete history for a given actor to assess. Part of the elephant in the room is that while content standards have historically been instance-level and local in the hometown/masto tradition, "behavior" just flat-out requires commercial scale to do well -- if today's fedi instances and p2p networks get flooded by inauthentic accounts, spambots, sleeper accounts (which exist to trick behavior detection that relies too heavily on duration/age of account), etc, we'll quickly be turned into the "free" tier of a freemium model, where commercial mega-services are refreshingly free of inauthentic behaviors and actors.
In any case, I think our only hope is finding ways to cooperate across architectures and pooling resources. Part of this would be p2p architectures at least passively being able to consume and parse trustworthiness signals and "trust reports" (scores) produced by central authorities, in an opt-in/elective way at least. Here the Trust Establishment data model above might be illustrative. It nominally relies on being able to refer to all actors by a URI (a DID in all the examples, but an AP actor or public key or any other URI that translates to a public key would also work), and on VCs for the signing mechanics, but that's not the most opinionated tooling requirement.
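As a rough illustration of that shape (this is NOT the actual DIF Trust Establishment schema, just a sketch of the idea): a trust report is a document keyed by actor URIs and signed by its issuer, so any architecture that can resolve the URIs and verify the signature can consume it on an opt-in basis.

```python
# Sketch only: a signed "trust report" keyed by actor URI. Real deployments would
# use VCs and public-key signatures; the HMAC below is just a stand-in so the
# example stays self-contained. Issuer, subjects, and field names are hypothetical.
import hashlib, hmac, json

trust_report = {
    "issuer": "https://trust.example.org",  # hypothetical authority
    "subjects": {
        "did:key:z6MkExampleActor": {"spam": 0.05, "authentic_behavior": 0.88},
        "https://example.social/users/alice": {"spam": 0.01, "authentic_behavior": 0.97},
    },
}

secret = b"demo-signing-key"
payload = json.dumps(trust_report, sort_keys=True).encode()
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(report_bytes: bytes, sig: str, key: bytes) -> bool:
    expected = hmac.new(key, report_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(verify(payload, signature, secret))  # True: a consumer can check before ingesting
```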
@hamishcampbell Sorry, I'm not understanding what you mean by saying "implemented 'unspoken'".
The ABCs is an interesting idea that I need to consider more. One thought I have is that content can also overlap with behavior. For example, what is a Mastodon thread? Well, it's user-generated content, but it is also behavior.
The long-term/short-term distinction, I think, is incredibly valuable. Part of the issue is that identities are too ephemeral online. Any time someone wants to do something shady, they can simply make another account. These proposed trust systems have the potential to greatly mitigate these problems by incentivizing people to make long-term accounts that establish trust over time. If they have shorter-term accounts, they have fewer privileges because they have less trustworthiness. This also lowers the effectiveness of spam and bot accounts. I suppose some of these techniques have already been used (like considering how long an account has been active, when they last logged in, etc.), but I think it could be much more effective if there is a long-term history of other users who have "vouched" for them by publicly and accountably declaring their trust in their content.
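One hypothetical way to combine those two signals (account longevity and accountable vouches) into a single trustworthiness score; the weights, the saturation points, and the function name are invented for illustration and aren't taken from any existing Fediverse implementation:

```python
# Hypothetical sketch: blend account age with publicly accountable "vouches".
from datetime import datetime, timezone

def trustworthiness(created_at: datetime,
                    vouches: list[float],          # each voucher's own trustworthiness score
                    max_age_years: float = 3.0) -> float:
    age_years = (datetime.now(timezone.utc) - created_at).days / 365.25
    age_score = min(age_years / max_age_years, 1.0)   # saturates so age alone can't dominate forever
    vouch_score = min(sum(vouches) / 5.0, 1.0)        # a handful of well-trusted vouchers is enough
    return 0.4 * age_score + 0.6 * vouch_score        # a float, never a hard yes/no

acct_created = datetime(2022, 3, 1, tzinfo=timezone.utc)
print(round(trustworthiness(acct_created, vouches=[0.9, 0.8, 0.7]), 2))
```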
@bumblefudge Do trust reports have to be centrally produced? Could they not also be decentralized? I'm not aware of a technical limitation that would make it impossible.
DPKI = Decentralized Public Key Infrastructure. I.e., any key-based system that involves self-issued/end-user-generated keys, which includes nostr, bluesky, secure scuttlebutt, ipns/libp2p, and lots of other P2P systems, as well as DID systems (a subset of DPKI systems that use a common export format called a DID).
DID = Decentralized Identifier, the aforementioned export format for key material that allows interop and thus verifiable portability of signed data. It's a young W3C standard not used much in production systems, except debatably by bluesky or, even more debatably, by cryptocurrency wallets.
VC = Verifiable Credentials, another young W3C standard that strives to make atomic blobs of information both verifiable (in the sense of signed) and portable (including RDF data via JSON-LD, or native JSON data, depending on your tooling/native context). Ironically, the biggest production deployment of VCs outside of niche fields like supply chain is… LinkedIn employer verifications for Azure customers. Which, for as corporate as that sounds, is… actually a game-changing social-web technology, an anti-fraud primitive, and a small win for data portability.
My intention wasn't to bring those adjacent technologies into scope for your design process, though, and I definitely don't recommend spending too much time down those rabbit holes; they go very, very far! I just thought that if you were already passingly familiar with those fields of decentralized design, it would help contextualize the trust list spec, which I hope is more directly relevant.
Oh, and as for "behavior" in Camille François's taxonomy, I think a Mastodon thread qua data is content in the static sense (at rest, it is content on a hard disk; when displayed, read, or migrated between servers, it's content in transit). Who posted or boosted it and, more importantly, how they did so (on mobile, at 3pm, while walking around the city, in the middle of a doomscrolling session) is what that WG report defines as "behavior," and in the classic commercial social web context, it's all the telemetry and account surveillance metadata, and the inferences (in the creepy adtech psychological sense of "behavioral advertising") that can be drawn from it. I dare say Mastodon hasn't really concerned itself with behavior too much, because the human-scale moderation model catches most behavioral problems; it's the rapid-scaling platforms and the less-moderated platforms that have to worry about detecting bad-faith accounts, false accounts, sold/stolen accounts, etc.
And I totally agree that DPKI/self-authenticated systems incur a much higher risk of negative network effects brought on by ephemeral accounts, sold/taken-over accounts, etc. But it's always a cat-and-mouse game, adversarial in the machine-learning sense: if you hardcode a system to weigh accounts over 1 year higher, you've just raised the black-market price for a 1-yr-old account, which will be absorbed quickly if there's already a healthy market for those! Incentives are such that social systems just get socially engineered if you make it worth some clickfarmer's while to do it. That's why trust ratings should always be floats and never integers, as the ML piles up on both sides of the arms race…
Agreed that it will always be a cat-and-mouse game. Whatever social network is created, it is inherently valuable to exploit, and if you increase the difficulty of exploiting it, you have simply raised the black-market price.
Still, I'm hopeful that some sort of trust history ledger will make it so cumbersome to fake that the majority of bad actors will pick lower-hanging fruit and the human mods will have a more manageable workload.
Had a few thoughts while reading over this post. I'll preface this by saying it's not really in opposition to the attempt at a tech fix, but I think I have a different philosophical take on some of the issues you raised.
This is definitely a problem with monolithic social networks like Facebook and Twitter since they need to effectively scale globally. In a federated network, the cap on moderation scaling can serve as a cap on a node's membership. Once a node has so many members that it can't be effectively moderated any more, it's time for a new node.
Additionally, disagreements with moderation decisions shouldn't matter (in my opinion) in a federated network, since when they arise, the dissenters should be able to easily move to a new node or start their own, moderated in the way they choose.
Kind of goes back to my previous point: the problem of size is, as I see it, solved by federation because moderation scales with membership by virtue of nodes capping membership to a size that they can effectively moderate.
My current thinking around this is that the problem here is that the tech is not actually matching the real-world social web. That is, our online social group can far exceed Dunbar's number, and although that has pragmatic benefits (especially for content producers), my opinion is that this is the root of the whole trust problem. If we're willing to build our network around people we know on a personal level, it seems like the trust problem solves itself, since you exclude the really determined bad actors (criminals, malicious state actors, etc.) because you don't know them personally.
And finally, to address the topic of a trust ranking system itself, I think a fundamental problem such a system would need to solve is personal identification. This is the basis for the bot, mob, and hacked-account problems identified in the white paper: it's difficult to build a system of personal trust when the identification of the person is slippery. This is what I believe is solved by personally knowing the people you're networking with prior to networking with them, since solving that problem with tech then becomes unnecessary, while acknowledging that this requirement severely restricts the size of one's social graph (my assumption being that this is an acceptable tradeoff, which is probably the main point where many would disagree, especially those who rely on distributing content to a wide audience).
We have a lot of non-"native" energy pushing this core #4opens foundation down; they mostly do not mean to do "harm", but they are, and we need ideas and metaphors to bridge this mess-making.
I 100% agree that by limiting nodes to a manageable size, many, if not most, moderation problems are solved. The sheer moderation load will be less, but in addition, individuals will self-moderate because there are more tangible repercussions if they behave badly. Just like small towns in the real world.
However, I think you're also right that there is a tradeoff, just as there is a tradeoff between small towns and large cities. Large cities empower people from very diverse backgrounds to collaborate and create amazing, wonderful things that they couldn't possibly have created separately. And indeed, this is what makes the Internet so powerful: it's one gigantic city. But the internet's greatest strength is also its greatest weakness. With size comes misinformation, mass abuse, etc.
The truth is, I don't think we can pick only one approach (large city or small town). And in fact, it's my understanding that the Fediverse hasn't picked only one approach. The Fediverse is essentially many small towns (nodes) that are connected to each other. But the vast connection of nodes is effectively a large city.
So while I agree that limiting the size of individual nodes largely solves the trust/moderation problem at the local node level, I disagree that it solves the problem at the inter-node level. Each node still interacts with an unfathomable amount of content from other nodes and still has to make moderation decisions. (Even not making a decision is in itself making a decision.)
I'm not well-versed in all the moderation tools available to node owners, but I doubt that the tools available are up to this task. Node moderators could potentially remove specific content, but that's not scalable. They could block specific nodes. That does seem much more scalable, but there's a huge tradeoff. By severing the connection between two "small towns" you are, by nature, limiting the spread of certain information. This could be good, e.g. limiting hateful/slanderous content, but it could also have downsides, e.g. limiting innovation, positive viral content, and what makes the internet special.
I don't mean to say that these are bad approaches. I just mean to say that each one has tradeoffs, and these decisions should not be taken lightly. It's an extremely difficult task to be a moderator for a node, and I don't envy it. My impression is that most of these tools are like blunt hammers rather than precise scalpels.
It seems to me that in order for the Fediverse to grow and thrive for generations to come, we need to empower moderators of nodes to make the best, most well-informed decisions possible, as transparently as possible.[1] Furthermore, we need to empower individuals within nodes to feel heard, to build trust amongst each other, and to have agency themselves over what relationships and content they choose to engage with.
As an aside, I think we are all seeing a trend where the internet is transitioning out of monolithic social networks and into smaller communities. I think this is a fantastic trend, and I hope that it continues. However, I think it is not without its downsides. Unfortunately, it's led to another trend that I think we're all seeing: tribalism. As the internet fractures into these smaller communities, there is also fighting between the communities. In other words, the trust problem is not only at the individual level; there is a massive trust problem at the community level as well. In some ways, smaller communities and nodes exacerbate this problem. Individuals flock to communities of other similar individuals (which isn't inherently bad), but then those communities begin to silo themselves off from other communities, creating thought bubbles, and they learn to hate and distrust other communities. I think this is one of the reasons why people get radicalized. They find a community that claims to accept them, yet teaches them that the other communities are evil. And because those communities are siloed off from neighboring communities, the individual is not empowered to investigate those claims themselves and make up their own mind.
Sorry for the long post. My point is, I think what makes the Internet and the Fediverse so beautiful is that it is one gigantic community made up of many smaller communities. Both approaches have benefits and weaknesses, and I think it's vital that we use a mixture of both.
Thanks for the reply. You bring up some additional good topics for discussion. Mainly I want to address this one, since I agree I wasn't really addressing it in my previous reply, and it's one that needs addressing:
There is no formal mechanism of inter-node governance, and I'd say that's a feature (a means of formal distributed inter-node governance would probably wind up looking a lot like a DAO from the blockchain world). But you're right to point out that what I'll label a "promiscuous" node (in the sense that it federates everything sent to it) is potentially in great danger of federating illegal content.
The main way the fediverse deals with that right now is to share a blocklist of known bad nodes to defederate from for various reasons. In my opinion, this solution doesn't scale, since hypothetically bad actors can automate the creation of nodes if they're determined to spam the network with bad content.
The main example of successful federation we have, email, solved this problem with spam filters, and that's sort of what the shared blocklist is doing for the fediverse right now, but it's my opinion that's not a particularly robust solution for amateur self-hosted instances against larger-scale abuse. The solution, in my opinion, is to instead invert the permission list and only federate with known good nodes. This will result in poorer discovery initially, as each node only federates with a handful of peers, but significantly safer federation. Discovery should also improve over time as replies to unfederated accounts reveal new potential peers that the admin can decide to federate with. This is how I've been running my personal node, and it is some extra work (solvable with better ergonomics around managing the allow list), but I can rest much easier on the moderation front.
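For what it's worth, the inversion itself is tiny; a minimal sketch of the idea (function and variable names are hypothetical, and this is not Mastodon's actual federation code) looks something like this:

```python
# Sketch: accept incoming federated activities only from admin-approved peers,
# instead of rejecting activities from known-bad peers.
ALLOWED_PEERS = {"friends.example", "town.example"}  # the admin-curated allow list

def accept_activity(activity: dict) -> bool:
    """Federate only with known good nodes; everything else is dropped by default."""
    actor_uri = activity.get("actor", "")
    domain = actor_uri.split("/")[2] if "://" in actor_uri else ""
    return domain in ALLOWED_PEERS

print(accept_activity({"actor": "https://friends.example/users/bob"}))   # True
print(accept_activity({"actor": "https://spamfarm.example/users/x9"}))   # False
```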
That gets to your next point:
This is an excellent point as well. I actually see this overall as a positive, because I like keeping my social feed curated, but as mentioned, discovery in my current setup is nearly non-existent, and that's not great, since I'd like to expand the scope of content in my feed.
Well, that's only half true in practice for me. I actually have accounts on two nodes. One's the curated one I mentioned, and the other's an account on a public node that does federate widely, and I discover accounts and nodes there that I then pull into my curated node's list.
To your point about user agency, I think the previous paragraph surfaces two areas for improvement:
It'd be interesting if you had some way of discovering content independently of your node's moderation decisions. But I see those as effectively directly opposing interests (as long as a user is limited to a single account on a single node). Which gets to point 2…
It'd also be cool if it were as easy to have multiple accounts across multiple nodes as it is to have a single account on a single node. My thinking here is to put an additional application layer on top of what we currently have that acts as a sort of personal "master account" that only you as the user control and that feeds content from across your accounts into a unified interface. I guess the golden age of Tweetdeck might be an apt comparison. Anyway, I think this would fit the constraints of both admins making autonomous peering decisions and users accessing content independently of those decisions.
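A minimal sketch of what that layer could look like, assuming Mastodon-compatible nodes (GET /api/v1/timelines/home is a real Mastodon endpoint, but the hosts, tokens, and merge policy below are placeholders, and pagination/error handling are omitted):

```python
# Sketch of a client-side "master account": pull home timelines from several
# accounts and merge them into one unified feed.
import requests

ACCOUNTS = [
    {"host": "https://curated.example", "token": "TOKEN_A"},  # hypothetical
    {"host": "https://public.example",  "token": "TOKEN_B"},  # hypothetical
]

def unified_home_feed(limit: int = 20) -> list[dict]:
    posts = []
    for acct in ACCOUNTS:
        resp = requests.get(
            f"{acct['host']}/api/v1/timelines/home",
            headers={"Authorization": f"Bearer {acct['token']}"},
            params={"limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        posts.extend(resp.json())
    # Newest first, regardless of which account a post arrived through.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)
```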
To be clear, what I'm proposing in the white paper is not a form of "governance". If anything, it's more analogous to a rating system like the MPAA movie rating system, in that each individual sees the rating and makes up their own mind. However, unlike that system, this would be objective and verifiable. You could "check the math" and see exactly how the rating was calculated. (This would also be vital, because different nodes would have different inputs and would therefore calculate different ratings.)
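To illustrate what "check the math" could mean in practice (the schema and the averaging formula here are invented for the example, not the white paper's actual algorithm): a node publishes the inputs it used alongside the rating, so anyone can recompute it and confirm it matches.

```python
# Sketch: a published rating plus the inputs used to derive it, so the
# calculation can be independently re-run.
published = {
    "subject": "https://example.social/users/alice",
    "vouches": [
        {"from": "https://town.example/users/bob",   "weight": 0.9},
        {"from": "https://town.example/users/carol", "weight": 0.7},
    ],
    "claimed_rating": 0.8,
}

def recompute(record: dict) -> float:
    weights = [v["weight"] for v in record["vouches"]]
    return round(sum(weights) / len(weights), 2)  # this node's formula; other nodes may weigh differently

print(recompute(published) == published["claimed_rating"])  # True: the math checks out
```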
However, what I'm proposing is not incompatible with governance, and I think that governance could be done in a DAO. But there are many other details that would need to be figured out.
Pushing defederation from #meta is not wrong in sentiment, the #dotcons are vile and cons. But it is wrong in a practical sense: the #Fediverse and #ActivityPub are #openweb, based on the #4opens, and the data is in the commons - you do not have technical tools for stopping the #dotcons, as the data in the end is in the open, unencrypted, in the database, in #RSS, and in open flows.
They are fighting #closedweb on an #openweb platform - it makes no sense at all, and this incoherence is everywhere.
I can't stop the oligarch systems, but I can keep them out of my timeline and off my instance. I expect that all their claims about good intentions will eventually turn out to be lies. Extending any benefit of the doubt to those people is just horribly naive.
This is true, though pushing #blocking as the solution is like putting your head in the sand. Our projects are #4opens, thus anyone, including the #dotcons, can be a part of the openweb; in this sense it's a good thing they are moving back to this space.
Feel free to block them, but pushing this path as a solution would be both naive and self-defeating. We need to do better and build a healthy culture and a diversity of tools; it's always a fight, and hiding in a cave wins no wars.