Brian Campbell

@TechConnectify It may be irritating, but it's what the Mastodon/Fediverse software that exists today can offer.

It would be possible to add automated filtering to Mastodon, but it's a big, thorny project. And Mastodon has just two full-time developers, as far as I know, plus a number of part-time volunteer contributors.

Meanwhile, it took Twitter quite a while to develop this kind of automated filtering, with over a thousand engineers and tons of capital.

Brian Campbell

@TechConnectify But let's think for a minute about what such a feature would look like in Mastodon, if it existed.

First of all, you'd want it to be more transparent than what exists on Twitter and other platforms; one of the big draws of Mastodon over those platforms is the lack of an opaque algorithm that tilts the scales. Of course, transparency has its problems too, sometimes making a system easier to game, but in general people have figured out how to game opaque algorithms as well.

Brian Campbell

@TechConnectify So for it to be transparent, both the algorithm and its training data should be open.

You also need to decide what signals you are going to include in your model. Here federation throws a wrench in things, as one server doesn't have immediate access to a lot of data on users from other servers.

Likes and boosts are fairly public, so you could probably use those as one bit of signal. Some sentiment analysis on the text could be a signal.

Brian Campbell

@TechConnectify Things like mutes and blocks are negative signals that could be used, but those are only visible within a given server, so your server's filter could only use mutes and blocks by other users on your own server as input. I suppose there might be enough data on larger servers for this to be reasonable.
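To make the signal discussion concrete, here is a minimal sketch of how those federation-visible inputs might be combined into a single score. Everything here is hypothetical: the field names, the weights, and the simple linear model are illustrative assumptions, not part of any real Mastodon API or any existing filter.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    likes: int          # favourites: publicly visible across the Fediverse
    boosts: int         # boosts: also publicly visible
    sentiment: float    # -1.0 (hostile) .. 1.0 (friendly), from text analysis
    local_mutes: int    # mutes of the author by users on *this* server only
    local_blocks: int   # blocks of the author by users on this server only

def harassment_score(s: PostSignals) -> float:
    """Higher score = more likely to be filtered. Weights are made up."""
    positive = 0.1 * s.likes + 0.2 * s.boosts + 2.0 * max(s.sentiment, 0.0)
    negative = (2.0 * max(-s.sentiment, 0.0)
                + 0.5 * s.local_mutes
                + 1.0 * s.local_blocks)
    return negative - positive

# A hostile post whose author several local users have blocked scores high;
# a well-received post with friendly sentiment scores low.
flagged = harassment_score(PostSignals(2, 0, -0.8, local_mutes=3, local_blocks=2))
ok = harassment_score(PostSignals(40, 10, 0.5, local_mutes=0, local_blocks=0))
```

In practice you'd want learned weights rather than hand-picked ones, but this shows the shape of the problem: the first three inputs federate, while the last two are only as rich as your own server's user base.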

Anyhow, it's an interesting problem, but I imagine it would take quite an effort to build something that would be useful.

Brian Campbell

@TechConnectify The next question is who would work on or fund such an effort. At some point, you need someone to actually hire folks to work on this, or someone with free time and an itch to scratch to sit down and do it. I feel like many of the folks impacted by this (larger accounts, or accounts more likely to receive harassment) aren't necessarily the ones with the time and skills to work on it.

Brian Campbell

@TechConnectify One thing that might inspire people to work on it is some kind of prize; the Netflix Prize was an early example of motivating teams to compete on automated recommendation algorithms, and it worked fairly well.

I wonder if it would be possible to do something similar as a way of encouraging development of filtering algorithms for Mastodon.
