@thisismissem @dansup @pixelfed It's not a great idea on its own... more of a defense-in-depth thing
but I'd imagine we could have a risk score with metadata
{score: 5, keywords:abusive:3, trigger: norelationship:2}
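As a minimal sketch of that metadata shape (the field names and the derived total are assumptions from the example above, not a real API):

```python
# Hypothetical risk-score container; factor keys like
# "keywords:abusive" mirror the metadata example in the post.
from dataclasses import dataclass, field


@dataclass
class RiskScore:
    # Each contributing factor and its weight, e.g. "keywords:abusive" -> 3
    factors: dict[str, int] = field(default_factory=dict)

    @property
    def score(self) -> int:
        # Total score is simply the sum of the factor weights.
        return sum(self.factors.values())


risk = RiskScore({"keywords:abusive": 3, "trigger:norelationship": 2})
print(risk.score)  # 5
```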
The more data you want to feed the decision-making process, the more expensive it becomes to execute, and the more likely you are to need to maintain some form of state. E.g., rolling windows for types of activities: that's a whole bunch of state that needs to be stored temporarily. That is to say, the more data to evaluate and the more state to consider/store, the slower the rules become to execute, potentially requiring asynchronous instead of synchronous processing.

@thisismissem @dansup @pixelfed I'd say a lot of this can be done retroactively as well
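To illustrate the kind of temporary state a rolling window implies, here's a minimal per-actor counter (the window length and the idea of counting "activities" are assumptions; a real implementation would persist this outside process memory):

```python
# Minimal rolling-window activity counter. The deque of timestamps
# is exactly the temporary state the post says must be stored.
from collections import deque


class RollingWindow:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events: deque[float] = deque()

    def record(self, now: float) -> None:
        # Append the timestamp of one activity.
        self.events.append(now)

    def count(self, now: float) -> int:
        # Prune events that fell out of the window; this upkeep is
        # part of the execution cost described above.
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()
        return len(self.events)


w = RollingWindow(60.0)
for t in (0.0, 10.0, 50.0):
    w.record(t)
print(w.count(now=65.0))  # the event at t=0.0 has expired -> 2
```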
@thisismissem @dansup @pixelfed I could think of a bunch of risks
trigger:limitedinstance:2
trigger:newaccount:2
etc. etc.... having one risk factor isn't enough to push it over the edge into a filter.

edit: say 5 is the threshold (people can move it up/down as required per account); then receiving a toot with a slur from a friend on a non-limited instance would still be fine... so mutuals can swear away
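A hedged sketch of that thresholding logic: a per-account threshold, with keyword-based factors skipped for mutuals so a friend's swearing never trips the filter. All names here are illustrative assumptions, not any project's API:

```python
# Hypothetical filter decision: sum the risk factors and compare
# against a per-account threshold; mutuals skip "keywords:" factors.
def should_filter(factors: dict[str, int], threshold: int = 5,
                  is_mutual: bool = False) -> bool:
    if is_mutual:
        # A toot from a friend keeps only non-keyword triggers,
        # so mutuals can swear away.
        factors = {k: v for k, v in factors.items()
                   if not k.startswith("keywords:")}
    return sum(factors.values()) >= threshold


factors = {"keywords:abusive": 3, "trigger:norelationship": 2}
print(should_filter(factors))                  # True  (5 >= 5)
print(should_filter(factors, is_mutual=True))  # False (2 < 5)
```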