Mastodon

It seems my criticising the “elephant in the room” directly is unwelcome.
But please allow me at least to share an article.
Hoping for a productive debate!

The number of “active users” is just some math …

I don’t agree with a lot of this article, and I’ve contributed to a lot of fediverse projects now. Going to break it down point by point…


The problem

This section starts off by outlining an alleged history about GNU Social, that it was a “pile of shit trying to imitate Twitter and mostly filled with people banned by Twitter for GamerGate”. This is not really true, and I know this is not really true, because I have been using the fediverse since the 2000s.

While it is true that some GamerGate users discovered GNU Social and decided to start their own instances, GNU Social was primarily used by Indymedia contributors (indy.im) and FOSS developers (identi.ca). The original author of GNU Social went on to start Pump.io.

It then builds on this false allegation (that fedi was originally “alt-right people thrown off Twitter”) and the author questions why “tumblr/twitter leftists” adopted Mastodon (answer: good marketing). The author speculates that the reason is alleged safety features and that these leftist twitter/tumblr users were also being banned by twitter/tumblr. That is not how I remember the Mastodon launch, basically Mastodon capitalized on stupid decisions that those silos made with clever marketing.

Fedi is the least safe place around

The author argues that because the fediverse has a low bar to entry, it is less safe. To some extent, it is true that a low bar to entry means a low bar for net-abuse, just like e-mail.

There are tons of far-right instances “littering the place” and this is “the worst kept secret amongst Mastodon admins.” But it is not a problem that far-right instances exist, and Mastodon has never advertised itself or the fediverse at large as being actually “free from nazis,” this was however a misconception adopted by some users who did not understand the nature of federated networks.

What Mastodon and other software actually promise is that you can curate your own experience to be nazi-free, and that is basically true: you can remove nazis from your timelines/inboxes/whatever.

To illustrate alleged problems with the fediverse, the author then uses Pawoo as an example.

Pawoo is a Japanese instance originally operated by Pixiv, but it was sold in 2019 to Russel Co., a Japanese holding company which specializes in payment products and adult entertainment products. The author incorrectly states that Pawoo is operated by Pixiv.

The author alleges that Japanese nationalist content is rampant on Pawoo, but does not provide any actual evidence to back this claim up – guess you will have to take her word for it. She also alleges that the instance staff do not care about this alleged Japanese nationalist content (which, again, is asserted without any presented evidence).

The author then observes that it is ironic that the largest instance is operated by a commercial entity (this observation of course being entirely unrelated to safety). They then say that Pawoo allows lolicon; this claim is factually true (lolicon is not illegal in Japan and has not been legally tested in the US).

The author then alleges that Pawoo “sits there, under the radar” because “they don’t speak English,” and “many instances blocked them a long time ago.” These statements are, of course, not related to safety.

They then allege that there are “many instances” which “spew things that are far worse” and link to their list of blocked instances on their Mastodon instance (half of which have factually inaccurate citations for their block reasons likely cargo culted from other instances).

Mastodon and Safety (the actual facts)

These days, both Mastodon and Pleroma are reasonably safe. The main safety hazard is followers-only mode, which is problematic because followers-only replies get sent to your audience instead of the parent author’s audience. This can be fixed by using audience and collection expansion, which is what my project does for anything addressed to a collection.
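To make "audience and collection expansion" concrete, here is a minimal sketch (in Python, with hypothetical names – this is not Jejune's actual code) of the idea: when an activity is addressed to a collection such as the parent author's followers, the delivering server resolves that collection into concrete actors instead of only delivering to the replier's own followers.

```python
# Hypothetical sketch of audience/collection expansion for a
# followers-only reply. When a reply is addressed to a collection
# (e.g. the parent author's followers), the server resolves that
# collection into its member actors instead of delivering only to
# the replier's own followers.

def expand_audience(activity, resolve_collection):
    """Return the set of recipient actor IDs for `activity`,
    expanding any addressed collections into their members.

    `resolve_collection` maps a collection IRI to its member IRIs,
    or returns None if the IRI is a plain actor.  A real server
    would fetch the collection and apply authorization checks."""
    recipients = set()
    for field in ("to", "cc", "audience"):
        for addressee in activity.get(field, []):
            members = resolve_collection(addressee)
            if members is not None:        # it was a collection
                recipients.update(members)
            else:                          # a plain actor
                recipients.add(addressee)
    return recipients

# Toy data: alice's followers collection contains bob and carol.
collections = {
    "https://a.example/users/alice/followers": {
        "https://b.example/users/bob",
        "https://c.example/users/carol",
    },
}

reply = {
    "type": "Create",
    "to": ["https://a.example/users/alice",
           "https://a.example/users/alice/followers"],
}

recipients = expand_audience(reply, collections.get)
```

With expansion, the followers-only reply reaches alice *and* alice's followers, rather than only the replier's own audience.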

A secondary safety hazard, which is arguably also a violation of the AP spec ("6.9. Servers SHOULD NOT deliver Block Activities to their object") is the federation of Block activities instead of their side effects. By federating the Block activity, you notify the person stalking you that they need to jump accounts. If side effects are federated instead (Reject { Follow } activities), then the stalker will not know about the Block while still having effectively the same semantics. Pleroma can be configured to do either and federates Block activities by default because that is what the Mastodon-centric side of the fediverse expects.
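The difference between the two strategies is easiest to see in the wire payloads. Below is an illustrative comparison (not any server's actual code; field values are invented) of what each approach delivers to the blocked actor's server:

```python
# Illustrative comparison of the two federation strategies for a
# block.  Federating the Block activity itself tells the blocked
# actor they were blocked; federating only the side effect (a
# Reject of their Follow) tears down delivery without saying why.

def block_activity(blocker, blocked):
    # What Mastodon-style servers deliver: the Block itself,
    # despite AP 6.9 ("Servers SHOULD NOT deliver Block Activities
    # to their object").
    return {"type": "Block", "actor": blocker, "object": blocked}

def side_effect_activity(blocker, blocked, follow_id):
    # What a side-effect-only server would deliver instead:
    # reject the existing Follow.  The stalker only sees their
    # follow relationship end, not an explicit Block.
    return {
        "type": "Reject",
        "actor": blocker,
        "object": {"type": "Follow", "id": follow_id,
                   "actor": blocked, "object": blocker},
    }

noisy = block_activity("https://x.example/u/alice",
                       "https://y.example/u/mallory")
quiet = side_effect_activity("https://x.example/u/alice",
                             "https://y.example/u/mallory",
                             "https://y.example/follows/1")
```

Both payloads have the same effective semantics on the receiving side (delivery stops), but only the first one announces the block to its target.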

Both of these issues should be improved upon, but it is not nearly as bad a situation as it was a few years ago when AP first went live in the fediverse.

The faux-woke crowd is making fedi less safe

This section begins with the author asserting that they will be flamed and called a racist. It continues to describe the author’s alleged ethnic background, and then links to an instance admin describing how they intend to defederate another set of instance admins, one of whom alleges that there are very few users on their instance and thus it could not possibly be a source of the alleged problem.

It alleges that the fediverse has a “race-based trolling problem.” It then asserts that these people are “using white guilt to troll the ever-loving fuck out of people.” Unfortunately, this assertion is not backed by the instance admin post that they were talking about, which actually asserted that some marginalized people were harassing Black people off the fediverse. I do not know the veracity of those claims, and I only mention them as they are unrelated to the author’s thesis.

They then assert that this alleged problem is causing minorities to be shed from the fediverse. I don’t know if this is true or not, but the link they use as a reference does not describe the same thesis.

It then cites a specific example featuring a Jewish non-binary person, but does not actually provide any details. It discusses how people were reluctant to block the involved instances (if only this were more true in general – the world would be a better place if people used appropriate tools rather than sledgehammers to solve all problems), asserts that minorities can also be racist (I don’t think anyone disputed that) and asserts that skin color does not imbue trolling rights. They then say that you should have your admin ban trolls and if they refuse that you should block trolls yourself. In other words, the entire paragraph does not contain useful information, or really a single coherent thesis at all.

It then asserts that these incidents cheapen actual racial justice (this is true), and asserts that these alleged participants are not actually interested in racial justice, but of course without knowing who the author is talking about, I can’t make a determination of whether or not that is likely to be true. It then asserts that this is a cancer driving people back to Twitter (the decline of MAUs on Mastodon instances does speak to this being a possibility).

The cancel crowd

The author continues to wax poetic about the previous topic, and alleges that the problematic behavior (“canceldon”) originates from the Twitter and Tumblr silos. This part is likely true.

The author then alleges that cancel culture is making fedi less safe for everyone (I’ve yet to see a discourse that wasn’t solved with thread muting), and are no better than the “crowd that used to be on Fedi.”

The question is which crowd? The FOSS developers and Indymedia people, or the Twitter expatriates? If Mastodon users are also Twitter expatriates, wouldn’t that make them the same crowd? The author does not clarify, and her thesis excludes the FOSS developers and Indymedia people.

The people who write the software are fucking dickheads

The author complains about Gargron’s attitude, and alleges that “his reputation precedes him.” The author then alleges that Gargron once asked if he could deadname a person. Gargron asserts this allegation to be false. At any rate, no public interactions exist that can prove the assertion to be true. The author then alleges that the individual described Gargron as a “shitty liberal,” but again, this is hearsay, and hearsay is rightly not admissible as evidence.

The author then complains about Pleroma’s relationship with Alex Gleason. It is indeed true that Pleroma has a working relationship with Alex Gleason. Whether that is good or bad, is of course a subjective opinion.

The author asserts that Mastodon and Pleroma are the only relevant fediverse software (PixelFed has more MAUs than Pleroma). The author then asserts that ActivityPub is a horrible protocol and that Mastodon has butchered it so much. I cover that issue already above, needless to say, Mastodon is a mostly compliant AP server.

The author further asserts that it is impossible to make an interoperable implementation without man-months of work (I implemented the core of Jejune and had it federating within 2 weeks of coding, and that was mostly in my spare time). The author then asserts to reimplement Mastodon from the ground up would be a nightmare (Jejune’s Mastodon API emulation package was written in a day) and then asserts that a Mastodon fork would be up against a huge pile of technical debt (a problem with rails applications in general).

The author then asserts that you could fork Pleroma, but then you’d have to know Elixir, which is a “language few people know.” (Elixir is very easy to learn.)

The author then refers to the Florence debacle and then asserts they wanted to fix the problem but then gave up, and then asserts the community is mean to devs (after calling all the devs assholes, the irony certainly isn’t lost!).

“Mastodon” never developed a culture of its own

The author asserts that Mastodon never developed a culture (despite Mastodon users having their own memes), and that as a result, Mastodon is not compelling. They then assert that fediverse users come and go in waves (this is true).

It then suggests that Mastodon and Pleroma stop trying to clone Twitter, and then asserts fedi will never develop a distinct culture before it’s “too late.”

The tools to protect users are rotten fruit in an opaque bag

The author correctly asserts that Pleroma’s MRF is better than Mastodon’s filtering tools in terms of capability (I invented the MRF so I may be biased here), but then complains that Mastodon users “cancel Pleroma users on sight” (there is indeed a tribalism problem).
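For readers unfamiliar with MRF: Pleroma's real MRF is a set of Elixir behaviours, but the shape of it can be sketched in a few lines of Python (all names here are invented for illustration). Each policy in the pipeline can pass an incoming activity through, rewrite it, or reject it outright, which is strictly more capable than Mastodon's keyword/domain filters:

```python
# Illustrative sketch of an MRF-style policy pipeline.  Each policy
# receives an activity and returns it (possibly rewritten), or None
# to drop it before it touches any timeline.

def force_cw_for(host, cw_text):
    # Rewrite policy: slap a content warning (AP "summary") on
    # everything from a given instance.
    def policy(activity):
        if activity.get("actor", "").startswith(f"https://{host}/"):
            return dict(activity, summary=cw_text)
        return activity
    return policy

def reject_from(host):
    # Reject policy: drop everything from a given instance.
    def policy(activity):
        if activity.get("actor", "").startswith(f"https://{host}/"):
            return None
        return activity
    return policy

def run_pipeline(activity, policies):
    for policy in policies:
        activity = policy(activity)
        if activity is None:   # a policy rejected it
            return None
    return activity

pipeline = [force_cw_for("loud.example", "content from loud.example"),
            reject_from("bad.example")]

kept = run_pipeline({"actor": "https://loud.example/u/x",
                     "content": "hi"}, pipeline)
dropped = run_pipeline({"actor": "https://bad.example/u/y",
                        "content": "hi"}, pipeline)
```

The point is composability: rewriting, tagging, rate-limiting and rejection are all just policies in one pipeline, rather than separate hardcoded features.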

The author then goes on to discuss the lack of security hardening in the install instructions, and asserts that Mastodon cannot be configured to support allowlists (it can, in secure mode). They then discuss dzuk’s “blockchain” project, which has been dead for 2 years.

They then assert that the lack of shared blocklist support is a problem (I agree, even e-mail has DNSBLs for this, but we did not pursue that with Pleroma for political reasons).
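To illustrate what "even e-mail has DNSBLs for this" means: a shared blocklist is just a lookup that many servers consult instead of each curating its own list. The sketch below shows it as a plain table so it runs standalone; in a DNSBL-style deployment the table would be a DNS zone queried as `<instance-host>.<zone>` (zone name and reason codes below are invented):

```python
# Hedged sketch of a shared-blocklist check, modeled on e-mail
# DNSBLs.  A real DNSBL would publish this as a DNS zone, where an
# A-record answer for "<instance-host>.<zone>" means "listed" and
# NXDOMAIN means "not listed"; here it is a local table for clarity.

SHARED_BLOCKLIST = {
    # hostname -> machine-readable reason code (values invented)
    "spam.example":  "spam",
    "abuse.example": "harassment",
}

def dnsbl_query_name(host: str, zone: str = "bl.example.org") -> str:
    """The DNS name a DNSBL-style client would resolve for `host`."""
    return f"{host}.{zone}"

def check_instance(host: str):
    """Return the listing reason for `host`, or None if unlisted."""
    return SHARED_BLOCKLIST.get(host)
```

The appeal of the model is that block reasons become shared, auditable data rather than per-instance folklore cargo-culted from admin to admin.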

They then rightly observe that Mastodon CWs are poorly designed and that a system based on abstract tagging would be better.

Cancelling people for using different software is uncool

The author complains about the Mastodon v. Pleroma infighting. Nothing useful is discussed in this section, but the author alleges that Pleroma is run by “dickheads who care more about contributions and donations” despite the fact that it is not possible to donate money to Pleroma as an end-user, so I don’t know where they are going with this.

The author then continues on by complaining about other software being written by “bad people” citing the JavaScript creator’s support of Prop 8 as an example (the fact that he is the JavaScript creator being already problematic is lost on the author however).

The author then asserts that the Pleroma userbase is somehow more toxic than the Mastodon userbase, claiming the alleged (non-existent) Twitter GamerGate Nazi (sounds like a good name for a punk rock band) userbase of GNU social as the reason. (Incidentally, I would not want the author as my defense attorney if this is their idea of a defense.)

The author concludes this section by observing that this is “whataboutism” and then says that plenty of Pleroma users are “not shitheads” (despite asserting Pleroma having a more toxic userbase just a few paragraphs ago). In other words, this section is really the same FUD that is discussed about Pleroma on a regular basis, but then finished off with “but some BIPOC people use it, so it’s not really so bad!”

The mindset is toxic

The author begins this section by stating that political conversation amongst alleged radical leftists is tiring. They then complain yet again about the CW system and demand everyone CW their political posts (despite the CW system being badly designed – an actual effective way to fix that would be to boycott the broken CW system, but I digress).

Fucking with Wesley Crusher wasn’t really that cool

The author asserts that the fediverse ran a B-list celebrity off the fediverse (and surprise! Wil isn’t the only one!), and that this was totally not cool (I mean, I’m not disagreeing).

Outside of “harassment isn’t cool”, which is a take that is so obvious it should be subzero, there isn’t anything of note in this section.

Can it be fixed?

The author asserts that they don’t know if fedi can be fixed (and already declared earlier that they themselves aren’t bothering to try to fix it).

They then assert that the fediverse will continue to crumble and that the writing is on the wall, but that they would like to not see it crumble (despite declaring earlier that they themselves, again, aren’t bothering to try to fix it).

Finally they appeal to emotion and ask the reader to not force them to use Twitter. The author of course, could simply not use Twitter if they don’t want to use Twitter.

Nothing will ever be a panacea

The author admits that social networking is broken and that having an “online safe space” is a pipe dream. (Incidentally, having true “offline safe spaces” is also a difficult task to accomplish.)

They then assert that it “doesn’t have to be this bad” and that they will continue to use the fediverse anyway.


Needless to say this article is filled with logical fallacies and less than fully accurate portrayals of information. I don’t think it is useful as a case study in anything other than whining. I view the time I have just spent reading and analyzing this blog post to have been a waste. Perhaps the person who wants to charge $0.02 for all interactions has a point.


Finally found some time to respond in detail.
I, too, do not agree with the article’s words about the protocols, and I agree with kaniini’s words re: GNU Social
– and about

GNU Social was primarily used by Indymedia contributors (indy.im)

Are you aware of the Indymedia ActivityPub Reboot?
Happy to connect to the people!

pawoo

I wanted to attach some evidence in toots on https://mnm.social, but it is one of many
instances which is no more. I can’t find it, sorry.

Mastodon and Safety
This can be fixed by using audience and collection expansion, which is what my project does for anything addressed to a collection.

Yes!

A secondary safety hazard, which is arguably also a violation of the AP spec

Yes!

But, sorry, I think the situation there is bad.
I reported official Daesh channels (users called “… Agency”) on mastodon.social and it took
a year until they were deleted.
I am on mastodon.social for research.
Example:
Eugen publicly tooted “What is wrong with you”, and the next minute he blocked and banned me.
But I got no notice at all.
Before that I was harassed by alt-right users.
Different people complained to Eugen and the mods about my shadowban.
My lawyers (public broadcaster ZDF) can prove that my posted content complied with the TOS if you want.
Content moderation is totally opaque to me.

This is why my last two weeks in the EU Parliament, at the German Bundestag and at all relevant EU digital events (it was the flagship week) were spent full-time demanding an EU-funded fediverse of trust.
MdB Anke Domscheit-Berg described it in front of our parliament.

The faux-woke crowd is making fedi less safe

I can understand the author. How should you provide sources here?
For me as a journalist, the most important thing was always to protect informants and minorities.
Let’s not expose them.

The cancel crowd

Well, what worries me: if an instance has 50% problematic users and 50% who are okay, should we really block it?
This is what I want to improve in talks today at the EU Open Source Summit.

The people who write the software are fucking dickheads

While I would not use this wording, see above.


“Mastodon” never developed a culture of its own

agreed.
My general advice is to do exactly the opposite of hypercapitalist surveillance monopolies.
[compare my Talk at ActivityPub Conf 2020]
It might be “too late” …
But we the users can change this. E.g. by new content formats or weekly hashtags.
I describe this in my next post here.

The tools to protect

Maybe I have too little knowledge here.
But I agree with every single word about Content Warnings in the article.
The CW system ruins Mastodon for me; as a reportage photojournalist I simply can’t use it.

Perhaps the person who wants to charge $0.02 for all interactions has a point.

These are multiple people.
Chris Lemmer Webber describes the stamp idea in the Spritely paper “Towards a Network of Consent”,
a user posted the idea here recently,
I described it before too,
the German NGO Digitalcourage had the idea and ran a successful email server,
and multiple users demand it in the fediverse …

The behavior the author complains about occurred publicly, and so it is possible to link to example posts. If somebody is yelling into a crowd, this is not speech that needs to be protected under condition of anonymity.

I think instance blocking (and the consideration of instances as distinct units) has been problematic.

A real life analogy – at least in America – upper class white families frequently advise their children not to talk to “kids who are from bad neighborhoods.” The conversation at large surrounding instance blocking is a similar argument and also a guilt-by-association argument, so I see instance blocking as a digital equivalent to this classist/racist stance.

My goal with creating MRF was to explore more equitable alternatives.

To be clear, I don’t think the Mastodon CW system is well-designed. I have called for content filtering to be done based on hashtags (with the recipient choosing her own preferences for what should be filtered) for years. I continue to believe that is the right design for this, and I intend for Jejune to work that way.
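A minimal sketch (not Jejune's actual implementation; names and the preference levels are invented) of what recipient-chosen, tag-based filtering could look like: posts carry abstract tags, each reader keeps her own preferences, and the fold/hide decision is made per reader rather than by the author guessing at a CW string:

```python
# Sketch of tag-based content filtering with recipient-chosen
# preferences.  Posts carry abstract tags; each reader maps tags
# to "show", "fold" (collapse behind a click-through) or "hide".
# The strictest preference matching any of the post's tags wins.

SEVERITY = {"show": 0, "fold": 1, "hide": 2}

def filter_decision(post_tags, reader_prefs):
    """Return "show", "fold" or "hide" for this reader and post."""
    decision = "show"
    for tag in post_tags:
        pref = reader_prefs.get(tag, "show")
        if SEVERITY[pref] > SEVERITY[decision]:
            decision = pref
    return decision

# One reader's preferences; another reader could choose differently
# for the same posts.
prefs = {"politics": "fold", "gore": "hide"}

d1 = filter_decision({"politics", "cats"}, prefs)  # folded
d2 = filter_decision({"gore"}, prefs)              # hidden
d3 = filter_decision({"cats"}, prefs)              # shown as-is
```

This also sidesteps the free-text CW problem: the author only tags, and every reader's client applies her own policy, so nobody is forced to see (or forced to pre-warn about) anything in particular.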

finally a reaction from Mastodon :wink:

I don’t think this article is very helpful, as @kaniini said, it makes a lot of claims that are not backed up by any verifiable evidence (or even, in many cases, unverifiable examples).

I agree that Content Warnings are suboptimal, but one of the claimed issues is that they require knowing the language of the CW, but that issue would be exactly the same with tag-based filtering, unless the tags are pre-defined and widely translated. Tagging has its own issues, as shown by the exceptionally low usage of hashtags in Mastodon, for instance.

I agree, followers-only replies are a mess for a few reasons, one of which is that the audience of replies switches back and forth between the participants’ followers. However, addressing the original followers may be counter-intuitive and risky too, as the person replying might not know who the followers of the person they reply to are. This is something we tried to tackle in Mastodon, and there are a few open PRs for that, but we still haven’t found a good enough solution to these design issues.

Different users have different expectations regarding blocks. I think federating Block activities can make sense, but we (Mastodon) do a bad job at explaining the consequences. I tried addressing that in https://github.com/tootsuite/mastodon/pull/11562 but as you can see the PR has stalled.

Please do not misrepresent what happened again. You replied to someone’s cute photo of shark plushies dining with a “;)” and a photo of a shark’s severed head with organs spewing out, mentioning both the original author and Gargron. You did not use a CW, you did not mark the photo as sensitive. It was disturbing and uncalled for regardless of your intent. The fact that this photo was from your earlier photojournalistic work on climate refugees (which you did not mention at the time) is irrelevant here.

Gargron personally blocked you, and the mastodon.social moderation team independently limited your account because of multiple reports they had received for your account (I’m not in the mastodon.social moderation team and I’m not privy to the details).

It is true that it would have been best for you to be notified of the moderation team’s decision, though, and I do not know why it did not happen.

You can use it. But as you made it apparent in your replies following-up the blahaj incident, you purposefully did not use a CW to force people to see your photo. As such, I doubt you’d use other ways of tagging your content appropriately for people to be able to filter it out.


In fairness, if somebody @ me with that, I would block them too. It is nice to have more understanding of the background here.

Obviously somebody sharing a cute photo of their shark plushies is not wanting to see a real shark which has been disemboweled. At the very least it would kill the vibe, you know?

Hm, yeah interesting.
There will be upcoming studies about Content Moderation in mastodon.social – looking forward.

One of many examples is the toot where someone says about a black woman who wrote another article about mastodon:
“his attitude sucks”, “to hold his dick”

Of course I directly reported the sexist comment with offensive wording.
Nobody from Mastodon reacted.
So, I just take your comment with a pinch of salt.

Everyone: Please also help the US research of Purdue University with

Please note the definition of “gore” in the Associated Press, the Oxford Dictionary and other authoritative sources.
lol