dansup

@thisismissem @pixelfed I agree, I'm not sure this is the best approach in that case, but it could be useful in cases where spam domains are used.

Perhaps we could discuss this further on Matrix or Discord?

Emelia πŸ‘ΈπŸ»

@dansup @pixelfed

I'd probably go with building a feature specifically designed for mitigating spam or malicious URLs, over a more generalised system for something like that.

Especially because those rules / actions might need to be applied in some sort of adaptive way.

Still, being able to preview what the filter does and doesn't catch is important.

:PUA: Shlee fucked around and

@thisismissem @dansup @pixelfed It's not a great idea on its own... more of a defense-in-depth thing

but I'd imagine we could have a risk score with metadata

{score: 5, keywords:abusive:3, trigger: norelationship:2}
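A minimal sketch of what that scored metadata could look like as a structured type (the field names and shape here are purely illustrative, not an existing Mastodon or Pixelfed API):

```typescript
// Hypothetical shape for a scored assessment; each contributing signal
// carries its own weight so the total stays explainable.
interface RiskSignal {
  kind: "keyword" | "trigger";
  name: string;   // e.g. "abusive", "norelationship"
  weight: number; // contribution to the total score
}

interface RiskAssessment {
  score: number; // sum of all signal weights
  signals: RiskSignal[];
}

// The {score: 5, keywords:abusive:3, trigger: norelationship:2} sketch above:
const example: RiskAssessment = {
  score: 5,
  signals: [
    { kind: "keyword", name: "abusive", weight: 3 },
    { kind: "trigger", name: "norelationship", weight: 2 },
  ],
};
```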

:PUA: Shlee fucked around and

@thisismissem @dansup @pixelfed I could think of a bunch of risks

trigger:limitedinstance:2
trigger:newaccount:2

etc etc... any one risk on its own isn't enough to push it over the edge into being filtered.

edit: say 5 is the threshold (people can move it up/down as required per account); a toot with a slur from a friend on a non-limited instance would then stay below it and be fine... so mutuals can swear away
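Reading the two posts together, the evaluation itself could be as small as a comparison against that per-account threshold. A hedged sketch reusing the RiskAssessment shape from above (shouldFilter and the default threshold are hypothetical, not anything either project ships):

```typescript
// Hypothetical check: nothing is filtered on a single signal alone;
// only the accumulated score crossing the per-account threshold matters.
function shouldFilter(assessment: RiskAssessment, accountThreshold = 5): boolean {
  return assessment.score >= accountThreshold;
}

// A slur from a mutual on a non-limited instance contributes only
// keyword:abusive:3, so 3 < 5 and the toot passes. The same slur from a
// new account on a limited instance adds trigger:newaccount:2 and
// trigger:limitedinstance:2 for a total of 7 >= 5, so it gets filtered.
```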

Emelia πŸ‘ΈπŸ»

@shlee @dansup @pixelfed

The more data you want to feed into the decision-making process, the more expensive it becomes to execute, and the more likely you are to need to maintain some form of state.

e.g., rolling windows over types of activity: that's a whole bunch of state that needs to be stored temporarily.
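As a concrete illustration of that state (a sketch only; the class and where it would be stored are assumptions, not anything Mastodon ships), a rolling window is essentially a pruned list of timestamps per sender and activity type:

```typescript
// Minimal rolling-window counter: the timestamps it retains are exactly
// the temporary state that has to live somewhere (memory, Redis, ...).
class RollingWindow {
  private events: number[] = [];

  constructor(private windowMs: number) {}

  record(now: number = Date.now()): void {
    this.events.push(now);
    this.prune(now);
  }

  count(now: number = Date.now()): number {
    this.prune(now);
    return this.events.length;
  }

  private prune(now: number): void {
    const cutoff = now - this.windowMs;
    this.events = this.events.filter((t) => t >= cutoff);
  }
}

// e.g. "more than 10 mentions in 5 minutes" needs one such window per
// (sender, activity type) pair, kept up to date on every incoming event.
const mentionsFromSender = new RollingWindow(5 * 60 * 1000);
mentionsFromSender.record();
```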

Emelia πŸ‘ΈπŸ»

@shlee @dansup @pixelfed

That is to say: the more data to evaluate and the more state to consider/store, the slower the rules become to execute, potentially requiring asynchronous instead of synchronous processing.
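One way that split tends to look in practice (a sketch under assumptions; every name below is a hypothetical stand-in, not Mastodon's actual delivery pipeline): cheap, stateless checks run inline on the delivery path, and anything needing stored state or remote lookups is deferred to a queue:

```typescript
// Hypothetical split between synchronous and asynchronous rule evaluation.
type Toot = { id: string; text: string };

const HARD_REJECT_THRESHOLD = 10;
const scoringQueue: Toot[] = [];

// Stateless checks only (static keyword lists, sender flags): fast enough
// to run synchronously on delivery.
function cheapSynchronousScore(toot: Toot): number {
  return toot.text.includes("spam-domain.example") ? 10 : 0;
}

function handleIncomingToot(toot: Toot): void {
  if (cheapSynchronousScore(toot) >= HARD_REJECT_THRESHOLD) {
    console.log(`rejected ${toot.id} synchronously`);
    return;
  }
  // Rules needing rolling windows or remote lookups run off the hot path:
  // deliver now, score later, and filter retroactively if the score is high.
  scoringQueue.push(toot);
}
```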
