Ryan Castellucci :nonbinary_flag:

@Crell @jerry Jerry, firstly, thank you for the thoughtful, nuanced take. As a person who does somewhat high-profile activism, I appreciate that your efforts have resulted in me experiencing very little harassment here.

The problem with having a list of "approved instances" is that it makes personal/tiny instances untenable.

This really reminds me of issues with email hosting and spam control - I run a personal email server and I have problems with providers assuming everyone is a spammer unless they have a history of sending non-spam.

How to establish that history if you can't send, though? If you're a business, you can pay protection money to certain companies that will bootstrap your reputation, but I can't afford that.

APIs for publishing opinions on other instances could help, if consumed "web of trust" style - you'd have two values: how much you trust the instance itself, and how much you trust its trust decisions. These values might be negative. I'm not sure how well this would work in practice.
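A minimal sketch of that two-value idea, assuming each instance publishes its opinions somewhere they can be fetched; the names, example data, and weighting below are illustrative assumptions, not any existing ActivityPub API.

```python
from dataclasses import dataclass

@dataclass
class InstanceOpinion:
    trust: float       # how much we trust the instance itself (-1.0 .. 1.0)
    meta_trust: float  # how much we trust its trust decisions (-1.0 .. 1.0)

# Our own first-hand opinions, keyed by instance domain (example data).
local_opinions: dict[str, InstanceOpinion] = {
    "friendly.example": InstanceOpinion(trust=0.9, meta_trust=0.8),
    "troll-farm.example": InstanceOpinion(trust=-1.0, meta_trust=-1.0),
}

def effective_trust(target: str,
                    peer_opinions: dict[str, dict[str, float]]) -> float:
    """Combine our own opinion of `target` with opinions published by peer
    instances, weighting each peer by how much we trust its trust decisions."""
    score, weight = 0.0, 0.0
    if target in local_opinions:
        score += local_opinions[target].trust
        weight += 1.0
    for peer, opinions in peer_opinions.items():
        meta = local_opinions.get(peer, InstanceOpinion(0.0, 0.0)).meta_trust
        if meta > 0 and target in opinions:
            score += meta * opinions[target]
            weight += meta
    return score / weight if weight else 0.0
```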

Ryan Castellucci :nonbinary_flag:

@Crell @jerry Meanwhile, yesterday someone went out of their way on the birdsite to tag me in a post calling me an assortment of slurs.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry speaking of the birdsite, before the API got locked down, I spent a fair amount of effort building network analysis tools to proactively identify and block bigots. Turns out assholes like to follow each other.

It was deeply satisfying when news about me came out and a bunch of people who had never interacted with me and weren't on any shared blocklists were complaining about being blocked by me.
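As a toy illustration of the "assholes follow each other" heuristic (not the original tooling, whose data sources and thresholds aren't described here), one could flag accounts whose follow graph overlaps heavily with an existing block list:

```python
def flag_for_block(their_following: set[str], my_blocked: set[str],
                   min_overlap: int = 5, min_ratio: float = 0.2) -> bool:
    """Flag an account when many of the accounts it follows are ones
    I've already blocked. Thresholds are arbitrary examples."""
    overlap = len(their_following & my_blocked)
    ratio = overlap / max(len(their_following), 1)
    return overlap >= min_overlap and ratio >= min_ratio
```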

Ryan Castellucci :nonbinary_flag:

@Crell @jerry I also had an IFF (identify friend or foe) script that would pull following/follower data, compare it against my own block, mute, following, and follower lists, and compute a score.
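A rough sketch of what such an IFF score might look like, assuming the relevant lists have already been fetched as sets of account IDs; the weights are illustrative guesses, not the values from the original script.

```python
def iff_score(their_contacts: set[str],
              my_following: set[str], my_followers: set[str],
              my_muted: set[str], my_blocked: set[str]) -> float:
    """Positive = probably friend, negative = probably foe."""
    score = 0.0
    score += 2.0 * len(their_contacts & my_following)  # overlap with people I follow
    score += 1.0 * len(their_contacts & my_followers)  # overlap with my followers
    score -= 1.0 * len(their_contacts & my_muted)      # overlap with my mutes
    score -= 3.0 * len(their_contacts & my_blocked)    # heavy overlap with my blocks
    return score / max(len(their_contacts), 1)         # normalize by contact count
```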

Rickd6

@ryanc @jerry @Crell perhaps there’s a way to make this available to members so they can implement it when they sign up?

Ryan Castellucci :nonbinary_flag:

@Rickd6 @jerry @Crell It's not clear to me that it would work here. Part of the issue is that it sounds like the trolls often spin up new disposable instances for trolling purposes, so there wouldn't be any useful data about them.

Rickd6

@Crell @ryanc @jerry is it possible to ‘fight fire with fire’, in that when someone identifies that they are receiving harassment, a group of individuals - obviously a prearranged group - can be contacted who will respond overwhelmingly to the harassing individual to call them out? Sounds childish when said out loud and may make them dig in further, but …..

Larry Garfield replied to Rickd6

@Rickd6 @ryanc @jerry "My gang is bigger than your gang" is the approach used in a failed society.

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@ryanc @Crell I used to work with someone who had this saying "the operation was a success, unfortunately the patient died". I feel like it's that sort of situation - we could indeed solve the problem by killing the patient.

Larry Garfield

@ryanc @jerry The spam analogy is very apt, I think, given Fediverse is often analogized to email.

And the wild-west-anyone-runs-anything approach is largely a failure there, too. I also used to run a personal mail server. It only worked if I proxied every message through my ISP's mail server.

A similar network-of-trust seems the only option here, give or take details.

Larry Garfield

@ryanc @jerry In the abstract sense, we're dealing with the scaling problems of tit-for-tat dynamics. Reputation-building approaches to social behavior only work when the number of actors is small enough that repeated interactions can build reputation. The Internet is vastly too big for that, just like society at large.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry There are several PhD-thesis-level problems to solve here.

Ryan Castellucci :nonbinary_flag:

@Crell @jerry My big concern with the web of trust model is that it's complicated and involves a lot of nontrivial decisions. An effective tool would probably have to distill the decision down to trust/neutral/distrust, use a standard scoring algorithm, and notify admins of conflicting data.
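A sketch of that distillation step, under the assumption that each instance ends up with a numeric score as in the earlier web-of-trust example; the thresholds and the conflict rule are arbitrary placeholders.

```python
from collections import Counter
from enum import Enum

class Verdict(Enum):
    TRUST = 1
    NEUTRAL = 0
    DISTRUST = -1

def classify(score: float, hi: float = 0.5, lo: float = -0.5) -> Verdict:
    """Collapse a continuous trust score into trust/neutral/distrust."""
    if score >= hi:
        return Verdict.TRUST
    if score <= lo:
        return Verdict.DISTRUST
    return Verdict.NEUTRAL

def needs_admin_review(peer_verdicts: list[Verdict]) -> bool:
    """Flag conflicting data: some peers say trust, others say distrust."""
    counts = Counter(peer_verdicts)
    return counts[Verdict.TRUST] > 0 and counts[Verdict.DISTRUST] > 0
```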

Ryan Castellucci :nonbinary_flag:

@Crell @jerry I do think keyword/regex filters as a quarantine/alert-the-admin thing would be helpful, but as mentioned upthread, part of the problem is people unknowingly joining instances that don't protect their users from harassment and not understanding why that's a problem. The guides saying "instance doesn't matter much" don't help.
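A minimal sketch of such a keyword/regex quarantine, assuming incoming posts arrive as plain text and the instance has some hold queue an admin reviews; the pattern list is a placeholder, not a recommended set.

```python
import re

# Placeholder patterns - a real deployment would maintain this list carefully.
QUARANTINE_PATTERNS = [
    re.compile(r"\bexample-banned-term\b", re.IGNORECASE),
]

def should_quarantine(post_text: str) -> bool:
    """Hold the post for admin review instead of delivering it immediately."""
    return any(p.search(post_text) for p in QUARANTINE_PATTERNS)
```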

Larry Garfield replied to Ryan Castellucci :nonbinary_flag:

@ryanc @jerry Yeah, the onboarding experience is definitely still a sore point. Like, I'd like to get my brother or the NFP I work with onto Mastodon, but I don't know what server to send them to. Mine isn't appropriate for them, mastodon.social isn't a good answer, and the alternative is... *citation needed*

Ryan Castellucci :nonbinary_flag: replied to Larry

@Crell @jerry Yeah, I've absolutely no idea what "general but friendly to members of frequently harassed groups" instances exist. This instance is really nice, as I've always been a hacker first and foremost. Yes, I'm queer on several dimensions and open about it, but most of the time I don't want to focus on that.

Paul_IPv6

@ryanc @Crell @jerry

same same here. run small email server.

while i can sympathize with consumer email providers - blocking only specific bad senders doesn't scale nearly as well as blocking everyone and allowing only specific ones - we hit what you say. how do i prove i'm "clean" if i can't send to you?

i do think we need to have these discussions, possibly be willing to give up some cherished ideals, but most critically, we need to get those most at risk of abuse involved at every stage in discussion/design/test/deploy.
