Doug

@jerry
I agree with everything you observe; the cycle is both predictable and all too frequent.

What concerns me the most, and I will pick on Mastodon here as the predominant platform, is that the devs do not sufficiently consider safety as a priority, nor seemingly as a factor in their design decisions. It feels like it would take a fork to properly implement safety mechanisms to counter the apparent race to "help build engagement".

Michael Stanclift

@doug @jerry I'm going to stand up for the devs here and say that they absolutely do factor in these things, just not always in the ways that are most apparent. There are a number of features that don't get added (at least as quickly as folks demand) specifically because of their impact on user privacy, safety, security, etc. (Quote toots, for example.)

There's a triad of engagement, safety, and accessibility that has to be factored into everything. Then how those features are maintained going forward.

Jerry Bell :verified_paw: :donor: :verified_dragon: :rebelverified:​

@vmstan @doug Additionally, I am not sure what additional safety mechanisms are missing, to be honest. Perhaps making block lists more frictionless? Allowing admins to block certain words? (Which, btw, would cause its own set of backlash for filtering out legitimate use of some words)...

Renaud Chaput

@jerry word-based filtering has many, many issues, as do server blocklists. Before building tools that reinforce this, we want those tools to not be invisible to users and to provide some auditing. Not doing so, in our experience, creates very bad experiences for users.
Add to that the fact that being a federated network makes most of these things much more difficult to implement properly.
@vmstan @doug

Renaud Chaput

@jerry and this is also why we introduced the severed relationship mechanism, as well as the (still needing improvements) filtered notification system. Now that we have those, which allow more auditing and decision visibility, we will be able to add more tools, like blocklist syncing.
@vmstan @doug

adamrice

@jerry @vmstan @doug Something that might help would be allowing individuals to subscribe to curated block lists, not just admins. Not sure how disruptive that would be to the fediverse.

Doug

@jerry
I think there are a lot of marginalised people, users, mods, and admins, who would have a lot to say about additional safety features, and who would appreciate being consulted in design and testing before anything is released.
@vmstan

S Vermin Rose

@jerry As we all know, Trust and Safety is hard, and a challenge is that when it fails it hits some users far harder than others. The idea that it's unduly onerous on those users to block trolls is new to me - I'm not a domain expert. But I want to hear Black voices, so their problem is, to an extent, my problem. Could a mitigation be curated block lists? I have a foggy recollection of such a facility being available on a certain legacy microblogging platform.

Doug

@vmstan
I have the utmost respect for the hard work of the devs, but I read the public roadmap and see barely any feature that relates to safety or accessibility.

I don't doubt it is going to be an aspect of some of the work, but read the original post of the thread: where do we think anything is being actively worked on, or planned, that could alleviate the problems, for users or admins?

@jerry

Patrick Georgi

@doug Seeing Eugen's response to requests in that space over the years, I'm convinced it would take a fork, indeed. (and since it's his toy, he gets to choose what his take on Fediverse software development focuses on.)

Or moving to different software. For example, GoToSocial seems to be more interested in implementing safety features (with some done, some in the pipeline).

The observation that it takes a fork is usually where the story ends, though. I wonder how much of that is learned helplessness from similar campaigns over at Facebook, Tumblr, Twitter, Reddit et al where the only alternative to "complain and succeed in getting things on the roadmap" has been "complain and nothing happens".

The fediverse _does_ provide more options (such as forking), but they require somebody to take action.
