Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or the person's account clearly identifies them as black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - this was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions since they don't see anything of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist when the opportunity presents itself, as anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy at the time. The trolls coordinate, cooperate, and feed off each other.

What has emerged on the fediverse, in my view, is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers using their own domain names and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by type of software - for example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.
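
For illustration only, here is a minimal sketch, in Python, of the three primitives just described - domain blocks, account blocks, and word filters. The names and structure are invented for this post; this is not Mastodon's or Pleroma's actual code.

```python
import re

# Invented data for the sketch; real instances manage these via admin UIs.
BLOCKED_DOMAINS = {"trollhole.example", "hate.example"}
BLOCKED_ACCOUNTS = {"troll@fringe.example"}
FILTERED_WORDS = [re.compile(r"\bslur\b", re.IGNORECASE)]  # placeholder pattern

def should_drop(author: str, text: str) -> bool:
    """Return True if an incoming post would be rejected outright."""
    domain = author.split("@")[-1]
    if domain in BLOCKED_DOMAINS:      # instance-level domain block
        return True
    if author in BLOCKED_ACCOUNTS:     # account-level block
        return True
    # Pleroma/Akkoma-style word filter; Mastodon lacks this for admins
    return any(p.search(text) for p in FILTERED_WORDS)

print(should_drop("troll@hate.example", "hello"))  # True: domain is blocked
```

Note that every branch is all-or-nothing - that is what makes these blunt instruments.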

To some extent, the concentration of trolls works in instance administrators' favor. We can block a few dozen/hundred domains and solve 98% of the problem. There have been some solutions implemented, such as block lists of "problematic" instances that people can use; however, many times those block lists become polluted with the politics of the maintainers, or at least that is the perception among some administrators. Other administrators come into this with the view that people should be free to connect with whomever they like on the fediverse, and delegate to the user the responsibility for deciding whom to block.

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself as a much more civilized social media experience, providing all manner of control to the user and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

Tanawts

@jerry I am frequently reminded how fortunate I am that I hitched my horse to this specific corner of the fediverse

Wendy Nather

@jerry I greatly appreciate all your efforts at vile pest control, Jerry. I don’t believe anyone has the right to harass others, and blocking early and often at the most effective level is the only way to protect those who are most likely to be harassed. Keep up the good work. ✊🏽

Doug

@jerry
I agree with everything you observe, the cycle is both predictable and all too frequent.

What concerns me the most - and I will pick on Mastodon here as the predominant platform - is that the devs do not sufficiently consider safety as a priority, nor, seemingly, as a factor in their design decisions. It feels like it would take a fork to properly implement safety mechanisms to counter the apparent race to "help build engagement".

Michael Stanclift

@doug @jerry I'm going to stand up for the devs here and say that they absolutely do factor in these things, just not always in the ways that are most apparent. There are a number of features that don't get added (at least as quickly as folks demand) specifically because of their impact on user privacy, safety, security, etc. (Quote toots, for example.)

There's a triad of engagement, safety, and accessibility that has to be factored into everything. Then how those features are maintained going forward.

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@vmstan @doug Additionally, I am not sure what additional safety mechanisms are missing, to be honest. Perhaps making block lists more frictionless? Allowing admins to block certain words? (Which, btw, would cause its own set of backlash for filtering out legitimate uses of some words)...

Renaud Chaput

@jerry word-based filtering has many, many issues, as do server blocklists. Before building tools that reinforce this, we want those tools to not be invisible to users and to provide some auditing. Not doing so, in our experience, creates very bad experiences for users.
Add to that the fact that being a federated network makes most of these things much more difficult to implement properly.
@vmstan @doug

Renaud Chaput

@jerry and this is also why we introduced the severed relationship mechanism, as well as the (still needing improvements) filtered notification system. Now that we have those, which allow more auditing and decision visibility, we will be able to add more tools, like blocklist syncing.
@vmstan @doug

adamrice

@jerry @vmstan @doug Something that might help would be allowing individuals to subscribe to curated block lists, not just admins. Not sure how disruptive that would be to the fediverse.
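
Something close to this is already scriptable against a single account, for what it's worth. A rough sketch using Mastodon's per-user domain block endpoint (the list URL and CSV layout here are assumptions, not a real service):

```python
import csv
import io

import requests

INSTANCE = "https://mastodon.example"
TOKEN = "user-access-token"  # needs the write:blocks scope
LIST_URL = "https://example.org/curated-blocklist.csv"  # hypothetical list

# Fetch the curated list and replay each domain through the user-level API.
resp = requests.get(LIST_URL, timeout=10)
for row in csv.DictReader(io.StringIO(resp.text)):
    domain = row.get("#domain") or row.get("domain")  # tolerate either header
    if domain:
        requests.post(
            f"{INSTANCE}/api/v1/domain_blocks",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"domain": domain},
            timeout=10,
        )
```

A built-in subscription would mainly add what a one-shot script can't: automatic updates, and unblocking when a list entry is removed.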

Wilbur the Red 🇵🇸

@jerry Took me 3 cycles to realise what was happening (especially because of the hidden replies and blocked people). Thanks for sharing.

J4YC33 ❌

@jerry I think the problem here isn't technical, it's social. In fact, I would posit that the problem *is* that everything is technically working as designed and human beings are... well... human beings.

The tools for moderation exist, and can be used to good effect (as you said), but all technical solutions have work-arounds. I mean hell, I remember when stateful firewalls were the next big thing that would secure us forever (spoiler: they did not). The issue becomes the social expectation that no one is responsible for anything, but everyone else is.

We're seeing this also in greater discourse: Whose responsibility is it to resolve economic issues? The individuals struggling to survive? The organizations struggling against each other in elephant fights? The governments with their own failing economies?
The answer starts to coalesce around "Everyone... everyone has to do their part or this just keeps getting worse." (Consider this same concept extrapolated to climate change, global wars, etc.)

I think that's where we are with the fediverse. We're seeing interactions of cultures and subcultures that are just going to happen wherever we get human beings together.

I think we're all a little bit responsible, and we all need to remember the human.

I hope that was relevant and helpful.

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

NB: I am far, far from perfect, both as a person and as a moderator/administrator. I love this place we've built, and it breaks my heart to see what people go through here.

Sophie Schmieg

@jerry I have to say, somewhat tangentially related, I am very grateful for your moderation work. I know the vile shit that is out there on the internets, and I have no reason to believe that it wouldn't be on Mastodon either. The fact that I rarely see it means that you, directly or indirectly, have already filtered it out.

Larry Garfield

@jerry 100%.

One interesting idea I've seen floated recently is a "known-good" list(s), so a new instance can federate *only* with those on some known good list(s). Then someone joining a server can see if their server is part of the "X-approved list" and decide to join or not.

Obviously not a complete solution, but are we maybe at the size where it's a part of the picture? Make new instances prove they're good, rather than wait for them to prove they're bad?

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@Crell it's antithetical to what the fediverse is intended to be, but it is a reasonable solution to this problem

Larry Garfield

@jerry Sadly, I think the preponderance of evidence suggests that a "wild west libertarian self-organizing environment" (the dream of the early-90s Internet) will devolve into a Nazi troll farm 100% of the time with absolute certainty.

It's a wonderful idea, but doomed.

The barrier to the accept-list could be low (eg, do they have a halfway decent TOS/CoC), but I don't think we have an alternative.

cf: peakd.com/community/@crell/why

Arquinsiel Teknogrot

@Crell @jerry I think the idea that an otherwise terrible person had like 20 years ago holds up pretty well: paying for initial access results in you having an investment in a service that encourages you to follow the rules to protect that investment. You can see this with how Something Awful has turned into a stable and mature forum with varied subforums and at least one thread for anything you can think of.

Of course the downside to that is that if the person setting the rules is terrible then the culture will be terrible and require a coup to fix, but... that seems to be a universal part of the human condition.

Larry Garfield

@teknogrot @jerry "The culture of an organization is defined as the worst behavior its leadership is willing to tolerate."

No amount of federation will change that dynamic.

Arquinsiel Teknogrot

@Crell I think it does change it, but not for the better. As @jerry pointed out, the nature of the fediverse can hide the behaviour from some people resulting in a de-facto tolerance of behaviour worse than the leadership (in this case again @jerry) would actually accept, while denying them the tools to do something about it.

Federation may actually not be a good idea at all for social media.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry Jerry, firstly, thank you for the thoughtful, nuanced take. As a person who does somewhat high profile activism, I appreciate that your efforts have resulted in me experiencing very little harassment here.

The problem with having a list of "approved instances" is that it makes personal/tiny instances untenable.

This really reminds me of issues with email hosting and spam control - I run a personal email server and I have problems with providers assuming everyone is a spammer unless they have a history of sending non-spam.

How to establish that history if you can't send, though? If you're a business, you can pay protection money to certain companies that will bootstrap your reputation, but I can't afford that.

APIs for publishing opinions on other instances could help, if consumed "web of trust" style - you'd have two values, how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice.
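
As a toy version of that two-value idea (all names and numbers invented, and only one hop of propagation):

```python
# Our own direct opinions: (trust in instance, trust in its judgement), -1..1
my_view = {
    "friendly.example": (0.9, 0.8),
    "sketchy.example": (-0.7, -0.5),
}

# Opinions each instance publishes about third parties, -1..1
published = {
    "friendly.example": {"newcomer.example": 0.6},
    "sketchy.example": {"newcomer.example": 0.9},
}

def inferred_trust(target: str) -> float:
    """One-hop propagation: peers' opinions weighted by our meta-trust."""
    score, weight = 0.0, 0.0
    for peer, (_, judgement) in my_view.items():
        if judgement <= 0:  # ignore peers whose judgement we distrust
            continue
        opinion = published.get(peer, {}).get(target)
        if opinion is not None:
            score += judgement * opinion
            weight += judgement
    return score / weight if weight else 0.0

print(inferred_trust("newcomer.example"))  # 0.6: only friendly.example counts
```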

Ryan Castellucci :nonbinary_flag:

@Crell @jerry Meanwhile, yesterday someone went out of their way on the birdsite to tag me in a post calling me an assortment of slurs.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry speaking of the birdsite, before the API got locked down, I spent a fair amount of effort building network analysis tools to proactively identify and block bigots. Turns out assholes like to follow each other.

It was deeply satisfying when news about me came out and a bunch of people who had never interacted with me and weren't on any shared blocklists were complaining about being blocked by me.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry I also had an IFF (identify friend or foe) script that would pull following/follower data, compare it against my own block, mute, following, and follower lists, and compute a score.
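
A toy reconstruction of that kind of IFF scoring (the weights and normalisation are invented for illustration, not the actual tool's):

```python
def iff_score(their_contacts: set[str],
              my_blocked: set[str], my_muted: set[str],
              my_following: set[str], my_followers: set[str]) -> float:
    """Positive = likely friend, negative = likely foe."""
    score = 0.0
    score -= 2.0 * len(their_contacts & my_blocked)    # strong bad signal
    score -= 1.0 * len(their_contacts & my_muted)
    score += 1.5 * len(their_contacts & my_following)  # good signal
    score += 1.0 * len(their_contacts & my_followers)
    return score / max(len(their_contacts), 1)         # normalise by size

contacts = {"a@x.tld", "b@y.tld", "c@z.tld"}
print(iff_score(contacts, {"a@x.tld"}, set(), {"b@y.tld", "c@z.tld"}, set()))
# ~0.33: one blocked contact, two followed ones
```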

Rickd6

@ryanc @jerry @Crell perhaps there’s a way to make this available to members so they can implement it when they sign up?

Ryan Castellucci :nonbinary_flag:

@Rickd6 @jerry @Crell It's not clear to me that it would work here. Part of the issue is it sounds like the trolls often spin up new disposable instances for trolling purposes and wouldn't have useful data.

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@ryanc @Crell I used to work with someone who had this saying "the operation was a success, unfortunately the patient died". I feel like it's that sort of situation - we could indeed solve the problem by killing the patient.

Larry Garfield

@ryanc @jerry The spam analogy is very apt, I think, given that the Fediverse is often analogized to email.

And the wild-west-anyone-runs-anything approach is largely a failure there, too. I also used to run a personal mail server. It only worked if I proxied every message through my ISP's mail server.

A similar network-of-trust seems the only option here, give or take details.

Larry Garfield

@ryanc @jerry In the abstract sense, we're dealing with the scaling problems of tit-for-tat experiment dynamics. Reputation-building approaches to social behavior only work when the number of actors is small enough that repeated interactions can build reputation. The Internet is vastly too big for that, just like society at large.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry there are several PhD-thesis-level problems to solve here

Ryan Castellucci :nonbinary_flag:

@Crell @jerry My big concern with the web of trust model is that it's complicated, and has lots of nontrivial decisions to make. An effective tool would probably have to distill the decision to trust/neutral/distrust and have a standard scoring algorithm, and notify admins of conflicting data.
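
A minimal sketch of that distillation step, with invented thresholds:

```python
def distill(score: float) -> str:
    """Collapse a raw trust score into trust/neutral/distrust."""
    if score >= 0.3:
        return "trust"
    if score <= -0.3:
        return "distrust"
    return "neutral"

def conflicts(opinions: dict[str, float]) -> bool:
    """True when peers disagree sharply enough that admins should be notified."""
    labels = {distill(s) for s in opinions.values()}
    return "trust" in labels and "distrust" in labels

peer_opinions = {"a.example": 0.8, "b.example": -0.6}
print(conflicts(peer_opinions))  # True: flag this for human review
```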

Ryan Castellucci :nonbinary_flag:

@Crell @jerry I do think keyword/regex filters as a quarantine/alert-admin thing would be helpful, but as mentioned upthread, part of the problem is people unknowingly joining instances that don't protect their users from harassment and not understanding why that's a problem. The guides saying "instance doesn't matter much" don't help.
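
A sketch of that quarantine-rather-than-block idea; the patterns and the review queue are placeholders:

```python
import re

QUARANTINE_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"\bslur\b",)]
review_queue: list[dict] = []  # stand-in for a real moderation queue

def deliver(post: dict) -> None:
    print("delivered:", post["text"])

def ingest(post: dict) -> None:
    """Deliver clean posts; divert suspicious ones to the mod queue."""
    if any(p.search(post["text"]) for p in QUARANTINE_PATTERNS):
        review_queue.append(post)  # alert an admin, don't deliver yet
    else:
        deliver(post)

ingest({"text": "a post containing a slur"})  # quarantined for review
ingest({"text": "a normal post"})             # delivered
```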

Larry Garfield replied to Ryan Castellucci :nonbinary_flag:

@ryanc @jerry Yeah, the onboarding experience is definitely still a sore point. Like, I'd like to get my brother or the NFP I work with onto Mastodon, but I don't know what server to send them to. Mine isn't appropriate for them, mastodon.social isn't a good answer, and the alternative is... *citation needed*

Ryan Castellucci :nonbinary_flag: replied to Larry

@Crell @jerry Yeah, I've absolutely no idea what "general but friendly to members of frequently harassed groups" instances exist. This instance is really nice, as I've always been a hacker first and foremost. Yes, I'm queer on several dimensions and open about it, but most of the time I don't want to focus on that.

Jonathan Yu

@jerry @Crell In the early days of IRC (I wasn't there for it), my understanding was that EFnet was meant to be similar - allow any server to join - and hence their name, Eris Free Net. But they've since changed their policy given the risks, and I think that's one of the few reasonable approaches. Increasing the friction for everyone sucks, but it disproportionately hurts trolls, so I guess it may be worthwhile?

Imogen

@jawnsy

EFNet stands for Eris Free Network. Eris is/was eris.berkeley.edu, which got booted from the network for controversial behavior.

EFNet was very cliquish and was known as a good old boy’s network.

The one thing the developers got right was G-lines (global blocking).

I hope you enjoyed this short history lesson.

jz.tusk

@jerry @Crell

I really appreciate your top post - it clarified a lot for me.

I'm a total noob to the Fediverse, so I don't know what core tenet goes against using allow lists as opposed to deny lists. Is there an easy answer you can give me?

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@jztusk @Crell I think this reply is a very good example of why that would be a problem: mk.aleteoryx.me/notes/9wexilu5

Basically, the fediverse is premised on the idea of many people running their own personal instance, and in adopting an allow-list model, we effectively make it difficult or impossible for these individual instances to participate.

Jaz (IFTAS)

@jerry @jztusk @Crell

Why not both? Some servers can run open federation, some can run allowlist-only, some can run in quarantine-first mode, and over time I'm sure we'll see shared lists, reputation signals, and trusted upstream servers to help manage the onboarding/allowing.

"Disallow all, but allow all servers already allowed by x, y and z" is one way to approach.

Almost none of the asks I've seen are either/or propositions, they are generally admin options to enable or not.
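
A minimal sketch of that "allowed by x, y and z" bootstrap; the upstream names are hypothetical, and whether to take the intersection or the union of their lists is itself one of those admin options:

```python
# Allowlists published by trusted upstream servers (hypothetical names).
UPSTREAM_ALLOWLISTS = {
    "x.example": {"a.social", "b.social", "c.social"},
    "y.example": {"a.social", "b.social"},
    "z.example": {"a.social", "b.social", "d.social"},
}

def bootstrap_allowlist(require_all: bool = True) -> set[str]:
    """Build a starting allowlist from the upstreams' published lists."""
    lists = list(UPSTREAM_ALLOWLISTS.values())
    if require_all:               # strictest: allowed by x AND y AND z
        return set.intersection(*lists)
    return set.union(*lists)      # looser: allowed by any of them

print(bootstrap_allowlist())       # {'a.social', 'b.social'}
print(bootstrap_allowlist(False))  # adds c.social and d.social
```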

Raccoon at TechHub :mastodon:

@jaz @jerry @jztusk @Crell
@jsit was talking about this the other day, and I keep feeling like I shot this idea down too soon...

social.coop/@jsit/112876102135

...but maybe that would be a good plan for some of these new and small instances, especially the ones that are trying to be safe spaces for minority groups. Get some momentum going, get some connections with other servers, get some contact with other server staffs, maybe eventually open it up.

Yeah, I think a federated whitelist would be a good idea.

Still, I'm looking at how many of these groups making block lists purport to be going after bigotry and harassment or whatever, but then you see them blocking a bunch of queer instances or black instances or something, and I wonder who might actually be trusted with this sort of thing. I can even imagine TechHub and Infosec showing up because someone with list access doesn't like the "techbros" or whatever...

Jay

@Raccoon @jaz @jerry @jztusk @Crell The refrain of “allowlists/blocklists are bad because it means you won’t hear from me” misses the point: This is why they are GOOD.

People don’t have a “right” to talk to your instance, this is a privilege that should be EARNED. And the protection of vulnerable people on social media is more important than my ability to make sure they can see my dumb posts.

This is not antithetical to the Fediverse. Choosing which instances to federate with is central to it!

Raccoon at TechHub :mastodon:

@jaz @jerry @jztusk @Crell @jsit
It also occurs to me that this can't be run by the instances using it, because they won't be able to see new instances to whitelist, which means you're going to need a few large servers to be the "Canary in the coal mine" for these instances. I feel like Tech Hub, with our somewhat squeamish block policies, could be a really useful server here, and I'd be happy to help maintain such a list.

What I think we need is some framework for how this list is put together and maintained, without too much overhead. We would need to account for the fact that such a list needs to be absolutely huge, and that while it should prioritize safety, there is an ethical obligation to get as many servers on it as possible.

As I told Jsit, it might be useful for someone to make this list now, just so we can see what it looks like.

Larry Garfield

@jaz @jerry @jztusk I think we collectively need to come to terms with the fact that most people have no interest, desire, skill, or inclination to run their own anything, and will happily make it someone else's problem in return for money, ad data, or just not GAF.

Less than 0.1% of people will run their own mail/Fedi/Identity/app/file server.

We need to stop making fetch happen. It's not going to happen.

Jaz (IFTAS)

@Crell @jerry @jztusk (Back of napkin caveat)

Currently seeing one server per 500 accounts. Writ large that's 10M AP servers for 5B accounts.

Several hundred million not-WP.com WordPress sites might offer an example. Value-add hosting options abound for WordPress, so it's partially self-hosted insofar as it's somewhat self-managed by tens of millions.

Mix of managed, shared, and self-hosting for AP services coupled with a rich plugin framework is how I imagine adoption could be supported.

maybenot

@jaz @jerry @jztusk @Crell

so, quarantining new instances (increased scrutiny / aggressive automatic moderation / keyword flagging) after which they would be put on either block- or allow-lists?

this would (theoretically) not preclude small instance participation, and would increase cost/difficulty for bad actors.

different instances could have their own metrics/rules for this, as you say, giving a variety of perspectives on the new one's behavior

maybenot

@jaz @jerry @jztusk @Crell

then the decisions of known/old/large ones can be weighted by everyone to make their own decisions

this leads logically to some kind of web-of-trust / instance-reputation-scoring system down the line. this can be both good and bad, but if surfaced somehow, it could serve as a sorely needed instance-selection aid for new users.

ie: example.net aggressively blocks this type of behaviour, but is blocked by others more than average on this other metric

jz.tusk

@jaz @jerry @Crell

I *so* want to come up with an "algebra of trust", where we can say "I trust entity X, but only to post", and "I trust entity Y, and anybody they trust too", and let the computer figure out each individual "do I trust A to do B?".

I haven't formalized it at all, and there's a risk of being computationally expensive, and someone would have to do the specialization for Mastodon, but I'd love to see it.
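
To gesture at what a formalisation could look like, here is a deliberately tiny sketch: invented entities and scopes, and only a single delegation hop - cycles and deeper recursion are exactly where the computational expense would come in:

```python
GRANTS = {"X": {"post"}, "Y": {"post", "boost"}}  # whom I trust, and for what
DELEGATED = {"Y"}                                  # whose trust decisions I inherit
THEIR_GRANTS = {"Y": {"Z": {"post"}}}              # what Y says it trusts others to do

def trusts(entity: str, action: str) -> bool:
    """Do I trust `entity` to do `action`, directly or via one delegation hop?"""
    if action in GRANTS.get(entity, set()):
        return True
    return any(action in THEIR_GRANTS.get(d, {}).get(entity, set())
               for d in DELEGATED)

print(trusts("X", "post"))   # True: direct, scoped grant
print(trusts("X", "boost"))  # False: X is only trusted to post
print(trusts("Z", "post"))   # True: inherited from Y's decisions
```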

Larry Garfield

@jztusk @jaz @jerry You're trying to solve a human problem with math. That is doomed to failure, no matter what the crypto-bros keep selling.

All human systems require human maintenance and decision making. Assisted, maybe, but never, ever think you can replace human decision making about human activities.

Jaz (IFTAS)

@jerry @Crell

>it's antithetical to what the fediverse is intended to be

I believe many see the Fediverse as a way to build an alternative that is both safe and open.

"No Nazis" is not 100% open, it is opinionated and less-than-fully-open. Safe beats open in the case of nazis, right?

Many of these issues come down to which to prioritise: safe, or open?

My preference is safety first, open second.

Both are good, but sometimes we need to give a little bit on the open to preserve safety.

feld

@jerry @Crell closed federation systems tend to shrink over time and become their own thing, at least from what I've noticed over the years.

Petulant Crow Person (Powered by UltraSPARC)

@Crell@phpc.social @jerry@infosec.exchange this is problematic for anyone like me, who hosts a personal instance. it would be an obscene increase in the barrier-to-entry

Deviant Ollam

@jerry This is all super well put and I thank you for sharing. 👍

(I also thank you and so many other admins for all the hard work being done to keep the Fediverse running smoothly!)

JamesWNeal

@jerry Because I've not seen the problem and really don't remember what was presented to me when I first joined this instance, this may be an ignorant Q, but is there not a way for an instance to throw up a FAQ at the start of signup that specifies what it does and does not do to moderate/block/ban? A "fair warning, you're on your own here" alert?

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@JamesWNeal the ability to do that without getting into the code is somewhat limited. Mastodon makes you agree to the rules of the server you're signing up to, so that would be the best place to put such a thing, IMO

JamesWNeal

@jerry OK, and is there a way to track that the prospective new user at least scrolls to the bottom of the user agreement, or can they just click through? I'm guessing that even if there is, that's once again up to the administrator?

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@JamesWNeal the new account owner has to check a box acknowledging acceptance before proceeding, so the existence of an active account indicates they did acknowledge acceptance of the rules. Administrators can’t force people to read them, though.

JamesWNeal

@jerry Can administrators automatically send a "welcome" message to new users, something that not only reiterates the rules, but expands sufficiently to make certain the new user understands what they've just signed up for?

LionelB

@jerry

Thank you. That was a very helpful insight.

Two particular takeaways: first, to encourage users who are targeted to switch to 'safe' instances.

Second, to look at safety enhancements which don't run counter to the distributed model.

Training, advice, resources, quarantine.

Andrzej

@jerry@infosec.exchange

> however many times those block lists become polluted with the politics of the maintainers...
This wouldn't be such a problem imo, but from what I've seen they tend to get polluted more by the interpersonal beefing of people with, shall we say, 'big personalities'

Veda Dalsette

@jerry Blocking assholes is as much a daily chore for us as deleting political/commercial "give me money" emails. It's something we have to accept, because, like disease, they never go away entirely.

Don Holloway

@jerry i am grateful for the good work that you do.

Violet Rose

@jerry
"A major factor in your experience on the fediverse has to do with the instance you sign up to."

I completely agree. I rarely encounter harassing posts, and I know a significant factor is being with a small, actively-moderated instance.

Because the fediverse is distributed among such a huge number of instances, moderation has to be a shared responsibility. Mastodon developers could do more to beef up moderation tools, but admins have a huge responsibility for what shows up in their space. Even with the most diligent moderation, users will have to take some action.

As a member of a marginalized group, the formula that has worked for me (so far) is choosing a great instance, communicating with the admin (who is very responsive), and accepting follow requests if I think the person jibes with my vibe, and blocking/reporting obvious trolls.

Crossing my fingers this keeps working for me.

Jaz (IFTAS)

@jerry Reply controls and the ability to block direct mentions from unknowns - with safe defaults - are dev work that needs effort, but if prioritised at the platform level they would be hugely helpful in letting people set up new accounts that are not wide open to abuse.

Most platforms do not allow blanket access to your inbox for direct comms and notifications. Most platforms offer some level of control over who can reply.

These are fundamental and need funding and love to make it happen here.

Skjeggtroll

@jerry

"Is it the instance administrator's fault for not blocking troll instances/troll accounts?"

Wasn't that how it worked on Usenet? Usenet providers were expected to accept and handle abuse complaints against their users, and if they failed, the community of News admins could declare a UDP (Usenet Death Penalty, i.e. defederation) against that provider.

Relationships between users and providers were more substantial and less easily replaced, so providers could put more force behind

Skjeggtroll

@jerry

it when demanding that their users behaved, and providers could also usually directly tie a Usenet post to a specific, identifiable person -- a particular student or employee, or a particular customer.

Even so, the system eventually failed.

Still, I'm not sure a federated, decentralised system can work at all without, at some level, the idea that "if you don't make your users play nice, we're not going to play with you at all."

starkraving666

@jerry thank you for explaining this - this sheds light on so much of what I didn't understand (I think because I'm on a well-moderated instance).

Raccoon at TechHub :mastodon:

@jerry
I think what's most frustrating to me about it is, even as the head of moderation for one of the larger instances, I really don't know what I can do about it. Like, I can keep telling people that yes this exists, and try to coordinate some response to it, but I can't do anything about things I don't see or that are happening on servers we've already blocked.

Currently, I'm trying to make our blocklist, which is supposed to cover the worst actors, into a public resource that these server admins can merge into their own block lists and move on... But I don't know how much that's actually going to help, and I don't know what else I can do.

Edit: also, encouraging people to report, and trying not to send people to poorly moderated instances where reports are ignored, are a key part of this process.

Katrina Katrinka :donor:

@jerry
"A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls."
[...]
"I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem."

I think you've accurately diagnosed the problem and proposed a solution. We need to promote and fund instances that "keep their gardens weeded" so that new people joining Mastodon can have a good experience from the beginning.

I was lucky enough to be recommended your instance when I first joined and have had a great Mastodon experience.

Phil M0OFX

@jerry It's absolutely a problem, and it really does need dealing with. I think as a start, it would be nice if the instance lists gave an idea of what the defederation policies of the instances look like. Do they defer entirely to users? Block the worst offenders?
I think improving the introductions would be good too - being clear that this is an issue and how to deal with it (pick a better instance). But right now the official page doesn't really talk about this, and that's where the eyes are.

Carlos Solís
As a self-hosted single-tenant user, this worries me a little - being banned from everywhere by default just because I want some autonomy over my data doesn't look like a good prospect. What kind of proof will I be expected to give to be accepted, to which entities, and how often? Because having to send a full background check document to every single instance I follow will take a while.
AAKL

@jerry Well-said. Social media is a reflection of society. There will be trolls and sociopaths wherever they smell an opportunity - or a perceived weakness. But Mastodon does have some tools to counter offensive people. These tools could stand to be improved, but comparing Mastodon to commercial platforms, some of which have become havens for scammers, data abusers, prevaricators, and those unhinged from reality, is not a fair comparison for people to make.

Bill Hooker

@jerry Thanks for laying that all out. I've been thinking along similar lines and your post is a great clarifier and sanity check for me.

I'll add just one thing to the comments above: people of colour *are* speaking up about this and being clear about what they need. Some solutions are technical (reply controls) and some are social (wypipo quit sealioning, go find and banhammer the nazis, etc). Following #blackmastodon is a way to listen in and find ways to help.

leberschnitzel :ha:

@jerry IMHO the big pro of the fediverse is the freedom that everyone can say what they want, positive or negative. I'd much rather mute or block people and instances I don't want to see than not have the possibility to see them, or have others not see me. But I also know that I'm in a privileged position. This for sure will be different for poc, women, non-cis, and easier-to-identify non-hetero people who regularly get harassed.

TomKrajci 🇺🇦 🏳️‍🌈 🏳️‍⚧️

@jerry

"At every step, things are working as designed."

Note what Cindy Gallop says: "Those of us who are at risk every single day—women, Black people, people of color, LGBTQ, the disabled—design safe spaces, and safe experiences."

How do we improve the Mastodon/Fediverse design process? (How do we influence things to get moving in that direction faster?)

fastcompany.com/90539655/dear-

Binsk​ ♾️​:donor:

@jerry I wonder if a dual-tier blocklist that propagates might be a solution.

For the "trusted" tier, an admin can subscribe to a list where blocks occur automatically when forwarded from a trusted instance. It's still an instance block, but it'd block-and-notify, which the other admin could monitor from time to time.

For the "other" tier, blocks would be advertised to an admin via feed, but not automatically implemented. In that way, the instance would require human intervention to evaluate and/or block. If an instance gives good recommendations for blocks, this might be data for adding their instance to the trusted list.

Vetting a trusted list would be per-instance, but the effort would be far lower. If you trust instance X, you set it and forget it. For new instances, interactions between admins as well as mod posture would govern whether you'd choose to trust their instance blocks or just get notifications. Even "other" instance block notifications could lower effort by bringing the instance to others' attention.
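
A compact sketch of that dual-tier flow, with placeholder names and print statements standing in for real admin notifications:

```python
TRUSTED_PEERS = {"bigfriendly.example"}  # tier 1: blocks apply automatically

local_blocks: set[str] = set()
review_feed: list[tuple[str, str]] = []  # tier 2: (recommending peer, domain)

def receive_block_recommendation(peer: str, domain: str) -> None:
    """Apply trusted peers' blocks immediately; queue the rest for review."""
    if peer in TRUSTED_PEERS:
        local_blocks.add(domain)  # block-and-notify
        print(f"blocked {domain} on {peer}'s recommendation; admin notified")
    else:
        review_feed.append((peer, domain))  # human evaluates later

receive_block_recommendation("bigfriendly.example", "trollhole.example")
receive_block_recommendation("unknown.example", "maybebad.example")
print(review_feed)  # [('unknown.example', 'maybebad.example')]
```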

FediThing 🏳️‍🌈

@jerry

This is really insightful, thank you for this. I hope it helps more people understand that there is a problem, and gets them thinking about solutions.

FediThing 🏳️‍🌈

@jerry

Hey @gotosocial this might be a useful reference while you're designing your platform? (Apologies if this is patronising, I'm sure you've already considered a lot of this.)

Andrew Leahey

@jerry

Do you happen to maintain a list of instances infosec.exchange has defederated from?

I know you mention it's only 98% effective, but for a smaller instance like us (esq.social) that might go a long way.

Howard Cohen for Harris/Walz

@jerry If there were a reputation system, like stack exchange, trolls would get voted down until their posts are easier to ignore or filter out. And "good people" (always subjective) would earn reputation points for people approving of what they post. We already have a "like" concept. Perhaps it isn't what the creators of the fediverse originally wanted, but a dislike button would be an essential part of reputation management.

Colin Oatley

@jerry @briankrebs This is enlightening. Thank you. A clear problem statement is a necessary step towards solving this.

skaphle

@jerry What can I do to help if I don't see anything like that happening? For example, because the admin on my instance already blocked the troll's instance?

Aurora 🏳️‍⚧️🏳️‍🌈

@jerry@infosec.exchange I really appreciate you taking the time to write up exactly what the issue is and why it's happening.

I've frequently been the "well meaning white person" on the sidelines wishing I could help, while also being on an instance that is extremely good at blocking bad actors.

As you said, this is fundamentally a situation between the instance, its federated neighbors, and the victim, but it still sucks that there is nothing that can be done to help.

Hopefully advocating for new moderation tools and spreading awareness of why this happens does more than nothing to help along this process!

Rora Borealis

@jerry The fediverse isn't the silver bullet or utopia some made it out to be.

It's pretty wild out there in the fediverse if the admin isn't properly hands-on. The server I joined I chose because the admin is protective and fairly proactive.

It makes a difference when admins are willing to do things like defederate from instances that have high levels of harassers.

We have to work together to create and protect our communities. It's hard work, and there are no easy answers. We white folks need to recognize the problem and ask how to help out. If we can't do that, we need to sit down, shut up, and let others handle it.

Kee Hinckley

@jerry Thanks Jerry.

I joined this instance because I have a lot of very old friends in the infosec community, and I trust their opinions. When I saw them coming here, I knew it would be a sane, well-managed, and persistent instance. I haven't been disappointed.

Once I get my cash flow issues straightened out, I'll be sending another donation. And I encourage everyone else on infosec.exchange to do the same. Hell, if you have a job that pays you well, and you're hosted elsewhere, please consider donating as well. Solid instances improve the entire network.

Tisha Tiger

@jerry Blocklists can help a lot, but they are always corrupted by some human ego.

You said it well 👏

CohenTheBlue

@jerry I've floated the idea of creating an anti-racist addendum to the Mastodon covenant

joinmastodon.org/covenant

or a similar agreement. No idea who would be the person to lead such an effort.

Mark T. Tomczak

@jerry The endgame is likely that we start seeing Fediverse servers spin up with an allow-list that auto-ignores / auto-blocks all other servers until and unless an admin explicitly admits them.

We saw a similar pattern with email when the spam problem began to tilt towards intractable (in that while you can spin up your own email server, good luck getting Google or Microsoft's servers to accept and transmit your messages if you have no history at all).

(What's that Gibson quote? "It started with an inverted killfile...")

Zvonimir Stanecic

@pixelate @jerry Is this just an American problem? Why don't we have problems with it?
