Intervention after 2021 storming of the United States Capitol - WIP

In remembrance of Stanley Greene

Open Wound

After the storming of the Capitol, I try to describe immediate interventions for the redaktor architecture.
A summary of three brainstorming-meeting logs …

In the last four years, Social Media Monopolies caused wounds. Open wounds.

  • How can we optimize truth in a Social Network?
  • How can we distribute governance and create a guarantor for internal diversity in the fediverse?

In German there is probably the nice, hard-to-translate word “binnenplurale Gruppenstruktur” [roughly: a group structure that is plural within itself] …

We want to explore small steps here which we can use immediately and, over time, replace with better-suited technical solutions like Spritely.


Glossary

Mint
Mints create digital assets. Access to an asset’s mint lets you create more digital assets of that kind. You can then store new assets in a payment or a purse.

Fungible
A fungible asset is one where all units of the asset are interchangeable. For example, if you have 100 one dollar bills and need to pay someone five dollars, it does not matter which five one dollar bills you use.

Non-fungible
A non-fungible asset is one where each instance of the asset has unique individual properties and is not interchangeable with another instance. For example, if your asset is show tickets, it matters to the buyer what the date and time of the show is, which row the seat is in, and where in the row the seat is (and likely other factors as well). You can’t just give them any ticket in your supply, as they are not interchangeable (and may have different prices).

citizen
Any user with the right to elect the following administrations:

instance assembly
[local moderators]
The group of elected moderators per instance, 11 or 21 depending on the size.

senate
[admins]
The group consisting of the owner plus 3 or 4 elected admins.
An admin has the same rights as a
local moderator, plus the additional right to execute block decisions.


Trying to describe a governance structure similar to that of the Free and Hanseatic City of Hamburg.

For governance assumptions such as the user/moderator ratio, we assume figures from Hamburg, Germany, where

0.018% Bezirksversammlung [directly elected municipal district assembly]
0.007% Bürgerschaft [city parliament, which also acts as the parliament of a German federal state]
0.003% other/higher interest
=
0.028%

The ratios from Mehrmandatswahlkreise [multi-member constituencies] are applied as well, based on instance size.
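As a rough sketch, the figures above can be turned into seat targets for an instance of a given size. The rounding rule (round to nearest, minimum one seat) is our own assumption; applied to a 40,000-user instance it lands close to the token counts used in the contracts described later in this document.

```python
# Minimal sketch: apply the Hamburg-inspired ratios to an instance of a
# given size. The rounding rule (round, minimum 1 seat) is a working
# assumption, not part of the original figures.
RATIOS = {
    "instance_assembly": 0.00018,  # ~ Bezirksversammlung share
    "fedi_plenum":       0.00007,  # ~ Buergerschaft share
    "other":             0.00003,  # other/higher interest
}

def seat_targets(instance_users: int) -> dict:
    return {role: max(1, round(instance_users * ratio))
            for role, ratio in RATIOS.items()}

print(seat_targets(40_000))
# {'instance_assembly': 7, 'fedi_plenum': 3, 'other': 1}
```

For a 40,000-user instance this yields 7 assembly seats and 3 plenum seats, matching the non-trusted contract sketch below.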

We want to make sure that two elected administrations exist:

  • instance assembly: for governance per instance
  • fediverse plenum: for important decisions amongst instances

We describe how AI can help, while being aware that automated systems bring in harmful biases.
See https://algorithmwatch.org or
Introduction - News Values | AP

The use of automated systems MUST

  • have proven privacy
  • never take decisions about content

Instead it SHOULD

  • help moderators at a minimal level, e.g. by sorting comments by assumed toxicity
  • help users unobtrusively (“think before you post”)

Any final decision about involved AI ratings MUST

  • be made by a human being

With regard to the Perspective API, one of two steps would be taken:
either train the model properly, or at least replace any terms expressing gender or provenance,
like “Black man”, “Polish woman”, “disabled homosexual man”, “Arab”,
with “human”

Take special care with well-known tricks like script spoofing, typosquatting, slight misspellings or strategically inserted punctuation …
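A minimal sketch of such preprocessing, assuming a maintained term list (the list and function names below are illustrative, not part of any real Perspective API workflow): Unicode NFKC normalisation folds many look-alike characters, and a small regex drops punctuation strategically inserted inside words.

```python
import re
import unicodedata

# Hypothetical preprocessing before sending text to a toxicity model:
# identity terms (an illustrative, incomplete list) are replaced by
# "human", and common evasion tricks are normalised away first.
IDENTITY_TERMS = ["black man", "polish woman", "arab"]  # assumption: maintained list

def normalise(text: str) -> str:
    # NFKC folds many spoofed/look-alike characters to their plain form
    text = unicodedata.normalize("NFKC", text).lower()
    # drop strategically inserted punctuation inside words, e.g. "a.r.a.b"
    return re.sub(r"(?<=\w)[.\-_*](?=\w)", "", text)

def neutralise_identity_terms(text: str) -> str:
    clean = normalise(text)
    for term in IDENTITY_TERMS:
        clean = clean.replace(term, "human")
    return clean

print(neutralise_identity_terms("A.r.a.b"))  # human
```

This only illustrates the idea; a production version would need word-boundary handling and a far more careful, community-maintained term list.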


We want to work with you on

  • reacting to urgent incidents in the fediverse
  • reducing harassment overall
  • improving truthfulness and factual accuracy while encouraging debate and securing press freedom

Please note: no money needs to be involved in the token system; anyone could adopt this model for free.

For redaktor we describe a pay as much as you want model …


TODO :wink:
Regarding voting and elections, some problems unfortunately exist, e.g.

  • To be fair, an instance needs to prevent the creation of multiple accounts per human:
    for the instance assembly this applies per instance, but for the fedi plenum across all instances …

“TCP/IP was for computers; we now need TCP/ID for people.”
The United Transnational Republics


We now try to describe a convenient onboarding experience which is at least guided by humans, to reach a high engagement rate.

Let’s compare total/active user ratios among

popular (large) Mastodon instances: 4-7%
friend.camp: 59%

When a redaktor instance is created, the following happens:

A decision needs to be made whether the instance is a local community, and a minimum of 3 topics needs to be defined.

A moderator who has diverse cultural awareness as well as knowledge of recent news helps with onboarding.

Based on e.g. recent news and the locale or topics, the moderator might give good advice like
“this is a knowledge network based on facts” if an instance is e.g. joining with

2021-01-05 #war #maga #tunnels

or

2021-01-06 {
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Place",
  "name": "Capitol Hill",
  "latitude": 38.88681633464774,
  "longitude": -77.00049424264478,
  "radius": 10,
  "units": "miles"
}

Downsides: We need to investigate the role of press freedom in the real world and check what special needs
apply for e.g. journalists, lawyers etc.
Benefit: The knowledge of moderators can be shared to generate warnings and awareness.

The user chooses a wallet or gets the Agoric Wallet; after everything is installed, the redaktor app contacts the wallet.
The owner of the new instance enables redaktor in their wallet.

The app then begins the onboarding process, just as with every user, involving two important steps:

1) Introduction on how to verify facts

https://educheckmap.factcheckingday.com/dist/index.html#/projects
https://twitter.com/quiztime

  • fact check mechanisms in the UI
  • local fact check tips
  • encouraging formats like weekly quizzes for verification or important fact checks

2) Message that trolls are present but outnumbered

  • A link to what is considered trolling in the community or platform with minimal TOS
  • A warning that users will sometimes see trolling, but that non-trolls outnumber the trolls
  • Instructions for reporting mechanisms in the UI

One of the following contracts is offered to the instance owner:

This contract mints the following non-fungible tokens:

  • 40,000 ‘citizen’ tokens
  • 7 ‘instance assembly’ tokens
  • 3 ‘fedi plenum’ tokens
  • 4 ‘admin’ tokens

and creates a selling contract instance to sell the tokens in exchange for some sort of money in the range of $1 - $1,000

[pay as much as you want for onboarding/development]


or, if the instance owner is a news organisation or NGO [which is to be defined], the following contract for a trusted instance, supporting e.g.

https://www.ap.org/about/news-values-and-principles/downloads/ap-news-values-and-principles.pdf :


This contract mints the following non-fungible tokens:

  • 80,000 ‘citizen’ tokens
  • 14 ‘instance assembly’ tokens
  • 5 ‘fedi plenum’ tokens
  • 5 ‘admin’ tokens

and creates a selling contract instance to sell the tokens in exchange for some sort of money in the range of $100 - $1,000 [pay as much as you want for onboarding/development; includes support for setting up an Uberspace or equivalent server]


The non-fungible tokens carry just an id for the instance plus an ordered integer.
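A minimal sketch of this id scheme (all names are illustrative; in a real deployment this would live inside the minting contract, e.g. on Agoric):

```python
from dataclasses import dataclass
from itertools import count

# Sketch of the token id scheme described above: each non-fungible token
# is just an instance id plus an ordered integer.
@dataclass(frozen=True)
class TokenId:
    instance: str
    serial: int

def make_minter(instance_id: str):
    counter = count(1)  # ordered integers, starting at 1
    def mint_id(role: str):
        return role, TokenId(instance_id, next(counter))
    return mint_id

mint = make_minter("social.example")
print(mint("citizen"))
# ('citizen', TokenId(instance='social.example', serial=1))
```

Because the dataclass is frozen, token ids are immutable and hashable, so they can serve as keys when tracking ownership.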

redaktor, as a 7-person foundation currently in funding, acts as a mint solely to assure that no instance grows larger than 40,000 users, with a minimum of 11 moderators of whom 4 get admin rights.
The redaktor foundation will consist of the redaktor board, which is responsible only for the technical development of the software. Apart from the above mint, the foundation MUST NOT take any content moderation actions.
The foundation also MUST NOT make any decisions regarding content, except on the foundation’s own instance.


The onboarding of users includes the two onboarding steps described above and could also be monitored by a moderator.
This would prevent official Daesh channels from joining e.g. mastodon.social (if only by researching the account description) …

The first posts of every user (t.b.d.) could now be selected randomly and receive a toxic index from the Perspective API, with the enhancements described above:

0 = < 5%
1 = 5% - 20%
2 = 20% - 60%
3 = > 60% or SEVERE_TOXICITY
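Sketched as code, with the exact boundary handling as our own assumption (the table above leaves the edges open):

```python
# Sketch: map a Perspective-style toxicity probability (0..1) plus an
# optional SEVERE_TOXICITY flag to the toxic index defined above.
# Boundary handling at 5%, 20% and 60% is an assumption.
def toxic_index(toxicity: float, severe: bool = False) -> int:
    if severe or toxicity > 0.60:
        return 3
    if toxicity >= 0.20:
        return 2
    if toxicity >= 0.05:
        return 1
    return 0

assert toxic_index(0.03) == 0              # reaches the network immediately
assert toxic_index(0.10) == 1              # usable for sorting by moderators
assert toxic_index(0.40) == 2              # triggers "think before you post"
assert toxic_index(0.10, severe=True) == 3 # SEVERE_TOXICITY always maps to 3
```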

A toxic index of 0 reaches the network immediately

Any toxic index > 0 can be used for sorting by moderators

Toxic index 2 triggers a “Think before you post pattern”

Preliminary Flagging Before Posting – Prosocial Design Network

TODO: research well in terms of privacy

Toxic index 3 increases the visual obtrusiveness based on the percentage …

For either

Toxic index 2 with any flag like e.g. IDENTITY_ATTACK, INSULT, SEXUALLY_EXPLICIT

OR

Toxic index 3

the following applies:

  • if this occurs in a randomly selected post by a new user, the post is sent to a moderator for review before publishing
  • the UI can visually encourage voting/reporting or trigger information for the comment’s receiver (see gamified factchecks and reporting)

Furthermore we could apply the

Headline Rating Interstitial – Prosocial Design Network

and invoke it with a ‘factual index’ for randomly selected first posts of every user.
After a certain number of posts is reached, the measured level of fake news could trigger a warning.

An anonymous trust index per instance could be derived from these kinds of votes.

Let us establish a vocabulary for Terms of Service and Code of Conduct.

Then moderators can easily perform checks for various TOS …
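One possible shape for such a shared vocabulary, sketched in Python (all term ids and rule sets below are illustrative assumptions, not an existing standard):

```python
# Sketch of a shared, machine-readable TOS/CoC vocabulary. With agreed
# term ids, a moderator tool can check a report against any instance's
# declared rules. Term names and definitions below are illustrative.
VOCABULARY = {
    "harassment": "Targeted, repeated unwanted contact",
    "hate_speech": "Attacks based on protected characteristics",
    "disinformation": "Knowingly false factual claims",
}

# Each instance declares which vocabulary terms its TOS covers.
INSTANCE_TOS = {
    "social.example": {"harassment", "hate_speech"},
}

def violates(instance: str, reported_terms: set) -> set:
    """Return which reported terms this instance's TOS actually covers."""
    return reported_terms & INSTANCE_TOS.get(instance, set())

print(violates("social.example", {"hate_speech", "disinformation"}))
# {'hate_speech'}
```

The point of the shared vocabulary is exactly this intersection check: a report phrased in common terms can be evaluated against any instance’s TOS without moderators re-reading free-form rule text.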

Last but not least:

gamified factchecks and reporting

Let our UIs actively encourage the checking of facts and teach users about consequences, for example

  • it is best never to respond to anything the bully said
  • by not responding, you are actually deterring these “cyber attacks” from happening

The same goes for nice and satisfying block feedback …

AP’s News Values and Principles say, “Staffers MUST notify supervisory editors as soon as possible of errors or potential errors, whether in their work or that of a colleague.”


see also:

and via Arnold