We need to build "trust" in this space and the Fediverse.

A lot of energy is currently lost in people #BLOCKING each other's attempts to help.

And yes, sometimes this blocking is needed, but a lot of the time it is not helping.

TRUST can help mediate this for a better (or, to put it bluntly, ANY) outcome. This is urgent.

Ideas please.

2 Likes

Web of Reputation and Trust using different algorithms on top of a social graph.

We are building this into nostr. Perhaps there's room for collaboration.
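
To make "algorithms on top of a social graph" concrete, here is a minimal sketch of one such algorithm: damped trust propagation outward from the reader along follow/endorse edges. The graph shape, damping factor, and scores are illustrative assumptions, not nostr's actual implementation.

```python
# Minimal sketch of a subjective web-of-trust score: walk outward from
# "me" along follow/endorse edges, damping trust at each hop.
# All names and weights here are illustrative assumptions.

def trust_scores(graph, me, damping=0.5, max_hops=3):
    """graph: dict mapping actor -> set of actors they endorse."""
    scores = {me: 1.0}
    frontier = {me}
    for _ in range(max_hops):
        next_frontier = set()
        for actor in frontier:
            for endorsed in graph.get(actor, ()):
                contribution = scores[actor] * damping
                if contribution > scores.get(endorsed, 0.0):
                    scores[endorsed] = contribution
                    next_frontier.add(endorsed)
        frontier = next_frontier
    return scores

graph = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave", "eve"},
}
print(trust_scores(graph, "alice"))
# e.g. {'alice': 1.0, 'bob': 0.5, 'carol': 0.5, 'dave': 0.25, 'eve': 0.25} (key order may vary)
```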

1 Like

@hamishcampbell Can you be more specific?

If anyone is experiencing abuse, please flag or report and let the mod team handle it.

Thanks.

1 Like

Phwwww… the breakdown in "trust" is a problem. The #openweb is trust based, so this is a breakdown of the #openweb itself. Not a good path to be on. Ideas please?

This is a path, but the Fediverse is people to people and thus built on trust relationships. Here there is an understandable pushback on tech fixes to replace this human trust building, whereas in #nostr and #bluesky this tech-fix path is more central to the projects.

The "debate" is fluff/spiky; we build (tech) bridges to cross this social diversity. A good outcome :slight_smile:

I 100% agree that we need to build trust. I've written a white paper here on one possible way that we could tackle the trust problem.

Flagging, reporting, human moderation, and law enforcement can and should be used to address issues such as abuse, bullying, or harassment. But unfortunately history has proven that these tools are not scalable, and they inevitably lead to disagreements about what should or should not be allowed by mods. I believe that these tools are ill-equipped to deal with the mass spread of misinformation.

I agree with @melvincarvalho that we need a Web of Reputation. This is how our ancestors dealt with trust for countless generations. "My word is my bond." People used to be disincentivized to lie because they knew that it would hurt their reputation, which would result in real consequences like being denied jobs, becoming social outcasts, or even being exiled. But now communities are so large, and so much communication on the internet is anonymous, that people can spread lies with little to no repercussions.

I believe that if individuals are better informed about how much the community trusts something, then they can make better decisions about what to trust in.

@hamishcampbell I also agree that tech fixes are not a replacement for human trust building. This proposal is indeed a "tech fix"; however, it is not intended to replace human trust building, but rather to better inform humans so that they can trust each other.

Briefly reading through nostr's page on their "Trust Rank" system, @melvincarvalho, it seems you all have similar ideas. :smiley:

1 Like

Nice white paper. We are building on an overlapping path here, where most of what you talk about is implemented "unspoken" in the project outline.

I am glad to see this convergence!

I can also contribute one more piece of ā€œprior artā€ from another context that might be useful: the Trust Establishment data model opensourced at DIF by a collaboration between TBD/Block and Indicio, which work, respectively, on Web5 tooling (including the personal data stores used by BlueSky) and DIDComm (another P2P architecture where all actors mutually authenticate with DPKI).

I think the trust/trustworthiness distinction is crucial here, because the software industry and engineers often use the word "trust" to mean something closer to trustworthiness (or connection-worthiness, inclusion-in-dataset-worthiness, trust-in-my-own-judgment, trust-in-signals-aggregated-about, etc). A good tactic might be using the word "trust" for subjective, interpersonal, collective, moderation, policy, etc decisions, and "trustworthiness" for quantifiable inputs TO those collective-or-individual human decisions. I.e., the system giving me a 0.72 rating on an actor would be trustworthiness, and me setting my thresholds to "hide any actor under 0.75" would be a trust action.
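
To make the split concrete, a tiny sketch (the names and numbers are mine, not any project's API): the trustworthiness float is a quantifiable input, and the trust part is the human-set policy applied on top of it.

```python
from dataclasses import dataclass

# Toy illustration of the trust vs. trustworthiness split described above.
# The trustworthiness number comes from some aggregator; the trust part is
# my own subjective policy applied to that input.

@dataclass
class TrustPolicy:
    hide_below: float = 0.75  # my threshold: a trust decision

    def decide(self, trustworthiness: float) -> str:
        # trustworthiness (e.g. 0.72) is an input TO the decision, not the decision
        return "show" if trustworthiness >= self.hide_below else "hide"

policy = TrustPolicy(hide_below=0.75)
print(policy.decide(0.72))  # -> "hide"
```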

I also think that variously explicit across all these prior arts is a range of "serveriness": architectures at the p2p end of the spectrum tend to keep most trust decisions out of the hands of "servers" or aggregators, while ActivityPub is at the other end, using servers to cluster/aggregate both trust and trustworthiness decisions.

Another useful heuristic would be the ol' ABCs of social systems rubric: Actors, Behaviors, and Content. All three require different metrics, different authorities, and different liabilities! For instance, actors judging actors (a purely p2p basis) can be aggregated up into a useful graph-crawling, web-of-trust approach, but content is usually jurisdictional and relative, in most cases work best done by authorities (civil society in particular), while behavior (incl. spam policies and inauthentic behaviors, sold accounts, etc) requires a relatively long and complete history for a given actor to assess. Part of the elephant in the room is that while content standards have historically been instance-level and local in the hometown/masto tradition, "behavior" just flat-out requires commercial scale to do well: if today's fedi instances and p2p networks get flooded by inauthentic accounts, spambots, sleeper-accounts (that exist to trick behavior-detection which relies too heavily on duration/age of account), etc, we'll quickly be turned into the "free" tier of a freemium model, where commercial mega-services are refreshingly free of inauthentic behaviors and actors :sob:

In any case, I think our only hope is finding ways to cooperate across architectures and pooling resources. Part of this would be p2p architectures at least passively being able to consume and parse trustworthiness signals and "trust reports" (scores) produced by central authorities, in an opt-in/elective way at least. Here the Trust Establishment data model above might be illustrative. It nominally relies on being able to refer to all actors by a URI (a DID in all the examples, but an AP actor or public key or any other URI that translates to a public key would also work), and on VCs for the signing mechanics, but that's not the most opinionated tooling requirement.
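
For illustration only, a rough sketch of what passively consuming such a "trust report" could look like: a signed document mapping actor URIs (DIDs, AP actors, raw keys) to scores, blended into locally held scores at a weight the consumer chooses. The field names and blending rule are assumptions of mine, not the actual DIF Trust Establishment schema, and the signature check is stubbed out.

```python
# Loose sketch of an opt-in consumer of a published "trust report".
# Field names are illustrative, not the DIF Trust Establishment schema.

def verify_signature(report, issuer):
    # Placeholder: in practice this would resolve the issuer's key
    # (via a DID document, an AP actor's public key, etc.) and verify a VC-style proof.
    return True

def ingest_trust_report(report, local_scores, weight=0.3):
    """Optionally blend an external report into locally held scores."""
    if not verify_signature(report, report["issuer"]):
        return local_scores
    for actor_uri, score in report["entries"].items():
        prior = local_scores.get(actor_uri, 0.5)  # neutral prior for unknown actors
        local_scores[actor_uri] = (1 - weight) * prior + weight * score
    return local_scores

report = {
    "issuer": "https://trust.example/org",
    "entries": {
        "https://social.example/users/alice": 0.9,
        "did:example:123": 0.2,
    },
}
print(ingest_trust_report(report, {}))
```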

Wow I have a lot to ponder!

@hamishcampbell Sorry, I'm not understanding what you mean by "implemented 'unspoken'".

The ABCs is an interesting idea that I need to consider more. One thought I have is that content can also have some overlap with behavior. For example, what is a Mastodon thread? Well, it's user-generated content, but it is also behavior.

The long-term/short-term distinction I think is incredibly valuable. Part of the issue is that identities are too ephemeral online. Any time someone wants to do something shady, they can simply make another account. These proposed trust systems have the potential to greatly mitigate these problems by incentivizing people to make long-term accounts that establish trust over time. If they have shorter-term accounts they have fewer privileges because they have less trustworthiness. This also lowers the effectiveness of spam and bot accounts. I suppose some of these techniques have already been used (like considering how long an account has been active, when they last logged in, etc.) but I think it could be much more effective if there is a long-term history of other users who have "vouched" for them by publicly, accountably, declaring their trust in their content.
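
To illustrate the idea (this is not a formula from the white paper), here is a rough sketch of how account age and vouches from other users could be combined into a single trustworthiness score; the weights and curve are purely illustrative.

```python
import math, time

# Rough sketch of "long-term trust": trustworthiness grows with account age
# and with accumulated vouches, with diminishing returns on both.
# The formula and weights are illustrative assumptions.

def account_trustworthiness(created_at, vouches, now=None):
    """vouches: list of weights (0..1) from accounts that vouched for this one."""
    now = now or time.time()
    age_days = max((now - created_at) / 86400, 0.0)
    age_factor = 1 - math.exp(-age_days / 180)    # saturates after roughly six months
    vouch_factor = 1 - math.exp(-sum(vouches))    # diminishing returns on vouches
    return 0.5 * age_factor + 0.5 * vouch_factor  # a float in [0, 1)

# A week-old account with no vouches vs. a two-year-old, well-vouched one:
week_old = account_trustworthiness(time.time() - 7 * 86400, [])
old_hand = account_trustworthiness(time.time() - 730 * 86400, [0.9, 0.8, 0.7])
print(round(week_old, 2), round(old_hand, 2))  # roughly 0.02 vs 0.95
```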

@bumblefudge Do trust reports have to be centrally produced? Could they not also be decentralized? I'm not aware of a technical limitation that would make it impossible.

@bumblefudge Sorry, what do DID, AP and VC mean?

Oh, hey, sorry for the acronyms.

AP = ActivityPub :smiley:

DPKI = Decentralized Public Key Infrastructure. I.e., any key-based system that involves self-issued/end-user-generated keys, which includes nostr, bluesky, secure scuttlebutt, ipns/libp2p, and lots of other P2P systems, as well as DID systems (a subset of DPKI systems that use a common export format called a DID).

DID = Decentralized Identifier, the aforementioned export format for key material that allows interop and thus verifiable portability of signed data. It's a young W3C standard not used too much in production systems, except debatably by bluesky or even more debatably by cryptocurrency wallets.

VC = Verifiable Credentials, another young W3C standard that strives to make atomic blobs of information both verifiable (in the sense of signed) and portable (including RDF data via JSON-LD, or native JSON data, depending on your tooling/native context). Ironically the biggest production deployment of VCs outside of niche fields like supply chain is… LinkedIn employer verifications for Azure customers. Which, for as corporate as that sounds, is… actually a game-changing social-web technology, an anti-fraud primitive, and a small win for data portability.

My intention wasn't to bring those adjacent technologies into scope for your design process, though, and I definitely don't recommend spending too much time down those rabbit holes; they go very, very far! I just thought that if you were already passingly familiar with those fields of decentralized design it would help contextualize the trust list spec, which I hope is more directly relevant.

Oh, and as for "behavior" in Camille François's taxonomy, I think a Mastodon thread qua data is content in the static sense (at rest, it is content on a hard disk; when displayed, read, or migrated between servers, it's content in transit). Who posted or boosted it and, more importantly, how they did so (on mobile, at 3pm, while walking around the city, in the middle of a doomscrolling session) is what that WG report defines as "behavior," and in the classic commercial social web context, it's all the telemetry and account-surveillance metadata, and the inferences (in the creepy adtech psychological sense of "behavioral advertising") that can be drawn from it. I dare say Mastodon hasn't really concerned itself with behavior too much, because the human-scale moderation model catches most behavioral problems; it's the rapid-scaling platforms and the less-moderated platforms that have to worry about detecting bad-faith accounts, false accounts, sold/stolen accounts, etc.

And I totally agree that DPKI/self-authenticated systems incur a much higher risk of negative network effects brought on by ephemeral accounts, sold/taken-over accounts, etc. But it's always a cat-and-mouse game, adversarial in the machine-learning sense: if you hardcode a system to weigh accounts over 1 year higher, you've just raised the black-market price for a 1-yr-old account, which will be absorbed quickly if there's already a healthy market for those! Incentives are such that social systems just get socially engineered if you make it worth some clickfarmer's while to do it. That's why trust ratings should always be floats and never integers, as the ML piles up on both sides of the arms race…
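
A tiny sketch of that last point, with purely illustrative numbers: a hardcoded integer cutoff creates a single bright line whose exact black-market price an attacker can calculate, whereas a continuous weight leaves no cliff to aim at.

```python
# Illustrative only: compare a hard "over 1 year = trusted" rule
# with a smooth, always-float weighting of account age.

def hard_cutoff_weight(age_days):
    return 1 if age_days >= 365 else 0      # gameable bright line

def continuous_weight(age_days):
    return age_days / (age_days + 365)      # smooth, no cliff to buy your way over

for age in (30, 364, 366, 1500):
    print(age, hard_cutoff_weight(age), round(continuous_weight(age), 2))
# 30   0 0.08
# 364  0 0.5
# 366  1 0.5   <- the hard rule flips value exactly here; the float barely moves
# 1500 1 0.8
```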

Thanks,
__juan

1 Like

Very helpful thank you.

Agreed that it will always be a cat-and-mouse game. Whatever social network is created, it is inherently valuable to exploit, and if you increase the difficulty of exploiting it, you have simply raised the black-market price.

Still, I'm hopeful that some sort of trust-history ledger will make it so cumbersome to fake that the majority of bad actors will pick lower-hanging fruit and the human mods will have a more manageable workload.

1 Like

Had a few thoughts while reading over this post. I'll preface this by saying it's not really in opposition to the attempt at a tech fix, but I think I have a different philosophical take on some of the issues you raised.

This is definitely a problem with monolithic social networks like Facebook and Twitter since they need to effectively scale globally. In a federated network, the cap on moderation scaling can serve as a cap on a node's membership. Once a node has so many members that it can't be effectively moderated any more, it's time for a new node.

Additionally, disagreements with moderation decisions shouldn't matter (in my opinion) in a federated network, since when they arise, the dissenters should be able to easily move to a new node or start their own, moderated in the way they choose.

Kind of goes back to my previous point: the problem of size is, as I see it, solved by federation because moderation scales with membership by virtue of nodes capping membership to a size that they can effectively moderate.

My current thinking around this is that the problem here is that the tech is not actually matching the real-world social web. That is, our online social group can far exceed Dunbar's number, and although that has pragmatic benefits (especially for content producers), my opinion is that this is the root of the whole trust problem. If we're willing to build our network around people we know on a personal level, it seems like the trust problem solves itself, since you exclude the really determined bad actors (criminals, malicious state actors, etc.) because you don't know them personally.

And finally, to address the topic of a trust ranking system itself, I think a fundamental problem such a system would need to solve is personal identification. This is the basis for the bot, mob, and hacked-account problems identified in the white paper: it's difficult to build a system of personal trust when the identification of the person is slippery. This is what I believe is solved by personally knowing the people you're networking with prior to networking with them, since solving that problem with tech becomes unnecessary, while acknowledging that this requirement severely restricts the size of one's social graph (and my assumption being that this is an acceptable tradeoff, which is probably the main point where many would disagree, especially those who rely on distributing content to a wide audience).

2 Likes

Good points @daniel

We have a lot of non-"native" energy pushing this core #4opens foundation down. They mostly do not mean to do "harm", but they are, and we need ideas and metaphors to bridge this mess-making.

CARROT

Fantastic points, @daniel.

I 100% agree that by limiting nodes to a manageable size, many, if not most, moderation problems are solved. The sheer moderation load will be lower, but in addition, individuals will self-moderate because there are more tangible repercussions if they behave badly. Just like small towns in the real world.

However, I think you're also right that there is a tradeoff, just as there is a tradeoff between small towns and large cities. Large cities empower people from very diverse backgrounds to collaborate and create amazing, wonderful things that they couldn't possibly have created separately. And indeed, this is what makes the Internet so powerful: it's one gigantic city. But the internet's greatest strength is also its greatest weakness. With size come misinformation, mass abuse, etc.

The truth is, I don't think that we can pick only one approach (large city or small town). And in fact, it's my understanding that the Fediverse hasn't picked only one approach. The Fediverse is essentially many small towns (nodes) that are connected to each other. But the vast connection of nodes is effectively a large city.

So while I agree that limiting the size of individual nodes largely solves the trust/moderation problem at the local node level, I disagree that it solves the problem at the inter-node level. Each node still interacts with an unfathomable amount of content from other nodes and still has to make moderation decisions. (Even not making a decision is in itself making a decision.)

I'm not well versed in all the moderation tools available for node owners, but I doubt that the tools available are up to this task. Node moderators could potentially remove specific content, but that's not scalable. They could block specific nodes. That does seem much more scalable, but there's a huge tradeoff. By severing the connection between two "small towns" you are, by nature, limiting the spread of certain information. This could be good, e.g. limiting hateful/slanderous content, but it could also have downsides, e.g. limiting innovation, positive viral content, and what makes the internet special.

I don't mean to say that these are bad approaches. I just mean to say that each one has tradeoffs and these decisions should not be taken lightly. It's an extremely difficult task to be a moderator for a node and I don't envy it. My impression is that most of these tools are like blunt hammers rather than precise scalpels.

It seems to me that in order for the Fediverse to grow and thrive for generations to come, we need to empower moderators of nodes to make the best, most well-informed decisions possible, as transparently as possible.[1] Furthermore, we need to empower individuals within nodes to feel heard, to build trust amongst each other, and to have agency themselves over what relationships and content they choose to engage with.

As an aside, I think we are all seeing a trend where the internet is transitioning out of monolithic social networks and into smaller communities. I think this is a fantastic trend and I hope that it continues. However, it is not without its downsides. Unfortunately, it's led to another trend that I think we're all seeing: tribalism. As the internet fractures into these smaller communities, there is also fighting between the communities. In other words, the trust problem is not only at the individual level; there is a massive trust problem at the community level as well. In some ways, smaller communities and nodes exacerbate this problem. Individuals flock to communities of other similar individuals (which isn't inherently bad), but then those communities begin to silo themselves off from other communities, creating thought bubbles, and they learn to hate and distrust other communities. I think this is one of the reasons why people get radicalized. They find a community that claims to accept them, yet teaches them that the other communities are evil. And because those communities are siloed off from neighboring communities, the individual is not empowered to investigate those claims themselves and make up their own mind.

Sorry for the long post. My point is, I think what makes the Internet and the Fediverse so beautiful is that it is one gigantic community made up of many smaller communities. Both approaches have benefits and weaknesses, and I think it's vital that we use a mixture of both.


  1. If node mods are not empowered in this way, then it is a matter of time until they make drastic mistakes or simply miss things. But these mistakes can have huge legal/financial ramifications. For example, nodes might be violating CSAM laws if they don't properly moderate that content. We have to empower node mods and owners to overcome these challenges, or else node owners will simply decide that it's too risky to run a node at all. The Fediverse doesn't exist without nodes. ↩︎

1 Like

Thanks for the reply. You bring up some additional good topics for discussion. Mainly I want to address this one, since I agree I wasn't really addressing it in my previous reply and it's one that needs addressing:

There is no formal mechanism of inter-node governance, and I'd say that's a feature (a means of formal distributed inter-node governance would probably wind up looking a lot like a DAO from the blockchain world). But you're right to point out that what I'll label a "promiscuous" node (in the sense that it federates everything sent to it) is potentially in great danger of federating illegal content.

The main way the fediverse deals with that right now is to share a blocklist of known bad nodes to defederate for various reasons. In my opinion this solution doesn't scale, since hypothetically bad actors can automate the creation of nodes if they're determined to spam the network with bad content.

The main example of successful federation we have, email, solved this problem with spam filters, and that's sort of what the shared blocklist is doing for the fediverse right now, but in my opinion that's not a particularly robust solution for amateur self-hosted instances against larger-scale abuse. The solution, in my opinion, is to instead invert the permission list and only federate with known good nodes. This will result in poorer discovery initially, as each node only federates with a handful of peers, but significantly safer federation. Discovery should also improve over time, as replies to unfederated accounts reveal new potential peers that the admin can decide to federate with. This is how I've been running my personal node, and it is some extra work (solvable with better ergonomics around managing the allow list), but I can rest much easier on the moderation front.
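
For what it's worth, here is a rough sketch of the shape of that allow-list check (not any real server's implementation): inbound activities are dropped unless the sender's domain is on the allow list, and unknown domains are recorded as discovery candidates for the admin to review later.

```python
from urllib.parse import urlparse

# Rough sketch of allow-list ("only federate with known good nodes") filtering
# for inbound activities. Domains and field names are illustrative.

ALLOWED_DOMAINS = {"friendly.example", "trusted.example"}
discovery_candidates = set()

def handle_inbound(activity):
    """Return True if the activity is accepted into the local node."""
    actor_domain = urlparse(activity["actor"]).hostname
    if actor_domain in ALLOWED_DOMAINS:
        return True
    # Unknown domain: drop the activity, but record it so the admin can
    # review it later as a potential new peer (the "discovery" path above).
    discovery_candidates.add(actor_domain)
    return False

print(handle_inbound({"actor": "https://friendly.example/users/ana", "type": "Create"}))  # True
print(handle_inbound({"actor": "https://random.example/users/bot", "type": "Create"}))    # False
print(discovery_candidates)  # {'random.example'}
```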

That gets to your next point:

This is an excellent point as well. I actually see this overall as a positive because I like keeping my social feed curated, but as mentioned, discovery in my current setup is nearly non-existent, and that's not great since I'd like to expand the scope of content in my feed.

Well, that's only half true in practice for me. I actually have accounts on two nodes. One's the curated one I mentioned, and the other's an account on a public node that does federate widely; I discover accounts and nodes there that I then pull into my curated node's list.

To your point about user agency, I think the previous paragraph extracts two areas for improvement:

  1. It'd be interesting if you had some way of discovering content independently of your node's moderation decisions. But I see those as effectively directly opposing interests (as long as a user is limited to a single account on a single node). Which gets to point 2…
  2. It'd also be cool if it were as easy to have multiple accounts across multiple nodes as it is to have a single account on a single node. My thinking here is to put an additional application layer on top of what we currently have that acts as a sort of personal "master account" that only you as the user control and that feeds content from across your accounts into a unified interface (a rough sketch below). I guess the golden age of Tweetdeck might be an apt comparison. Anyway, I think this would fit the constraints of both admins making autonomous peering decisions and users accessing content independently of those decisions.
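
A rough sketch of that "master account" layer, purely client-side; the node URLs and fetchers below are stand-ins for real per-node API calls, not any specific server's API.

```python
# Rough sketch of a client-side "master account" that merges timelines
# from accounts on several nodes into one feed. Fetchers are fakes.

class MasterAccount:
    def __init__(self):
        self.accounts = []  # list of (node_url, fetch_timeline callable)

    def add_account(self, node_url, fetch_timeline):
        self.accounts.append((node_url, fetch_timeline))

    def unified_feed(self):
        posts = []
        for node_url, fetch_timeline in self.accounts:
            for post in fetch_timeline():
                post["via_node"] = node_url
                posts.append(post)
        # ISO 8601 timestamps in the same timezone sort correctly as strings
        return sorted(posts, key=lambda p: p["published"], reverse=True)

curated = lambda: [{"published": "2024-01-02T10:00:00Z", "content": "from my curated node"}]
public = lambda: [{"published": "2024-01-02T11:00:00Z", "content": "from the public node"}]

me = MasterAccount()
me.add_account("https://curated.example", curated)
me.add_account("https://public.example", public)
for post in me.unified_feed():
    print(post["via_node"], post["content"])
```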

Interesting. Thank you.

To be clear, what I'm proposing in the white paper is not a form of "governance". If anything, it's more analogous to a rating system like the MPAA movie rating system, in that each individual sees the rating and makes up their own mind. However, unlike that system, this would be objective and verifiable. You could "check the math" and see exactly how the rating was calculated. (This would also be vital because different nodes would have different inputs and therefore calculate different ratings.)
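
As a toy illustration of what "checking the math" could mean (the aggregation rule and fields are mine, not the white paper's): if the rating is a pure function of published inputs, any node holding the same inputs can recompute both the rating and a fingerprint of exactly what went into it.

```python
import hashlib, json

# Toy "check the math" rating: a deterministic function of published inputs,
# so anyone with the same inputs gets the same score and the same digest.
# The aggregation rule (a plain average) is illustrative only.

def compute_rating(ratings):
    """ratings: list of {"rater": uri, "value": float in [0, 1]}."""
    canonical = json.dumps(sorted(ratings, key=lambda r: r["rater"]), sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    score = sum(r["value"] for r in ratings) / len(ratings)
    return round(score, 3), digest

inputs = [
    {"rater": "https://a.example/users/ana", "value": 0.9},
    {"rater": "https://b.example/users/bo", "value": 0.6},
]
score, digest = compute_rating(inputs)
print(score)        # 0.75 -- any node with the same inputs recomputes the same number
print(digest[:16])  # fingerprint of exactly which inputs were used
```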

However, what I'm proposing is not incompatible with governance, and I think that governance could be done in a DAO. But there are many other details that would need to be figured out.

Good that people are talking about the political side of the development of the openweb.
Lemmy ask you anything – The Fediverse Report: this is trust building in action :slight_smile:

Pushing defederation from #meta is not wrong in sentiment; the #dotcons are vile and cons. But it is wrong in a practical sense: the #Fediverse and #ActivityPub are #openweb, based on the #4opens, and the data is in the commons. You do not have technical tools for stopping the #dotcons, as the data in the end is in the open: unencrypted, in the database, in #RSS, and in open flows.

They are fighting #closedweb on an #openweb platform, which makes no sense at all, and this incoherence is everywhere.

I can't stop the oligarch systems, but I can keep them out of my timeline and off my instance. I expect that all their claims about good intentions will eventually turn out to be lies. Extending any benefit of the doubt to those people is just horribly naive.

This is true, though pushing #blocking as the solution is like putting your head in the sand. Our projects are #4opens, thus anyone, including the #dotcons, can be a part of the openweb; in this sense it's a good thing they are moving back to this space.

Feel free to block them, but pushing this path as a solution would be both naive and self-defeating. We need to do better and build a healthy culture and a diversity of tools. It's always a fight, and hiding in a cave wins no wars.