I'd probably go with building a feature specifically designed for mitigating spam or malicious URLs, rather than a more generalised rules system.
Especially because those rules / actions will likely need to adapt over time.
Still, being able to preview what the filter catches (and doesn't) is important.
@thisismissem @dansup @pixelfed It's not a great idea on its own.. more of a defense in depth thing
but I'd imagine we could have a risk score with metadata
{ "score": 5, "keywords:abusive": 3, "trigger:norelationship": 2 }
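As a rough sketch of what I mean (names, types, and weights here are purely illustrative, not anything that exists in Pixelfed or Mastodon today), the score could just be the sum of weighted signals, with the per-signal breakdown kept so you can preview why something was flagged:

```typescript
// Hypothetical sketch only: identifiers and weights are illustrative,
// not an existing Pixelfed/Mastodon API.

// Each detector contributes one named signal with a weight.
interface RiskSignal {
  source: string; // e.g. "keywords:abusive" or "trigger:norelationship"
  weight: number; // how much this signal adds to the overall score
}

interface RiskReport {
  score: number;                   // sum of all signal weights
  signals: Record<string, number>; // per-signal breakdown, for preview/audit
}

// Aggregate individual signals into the metadata blob described above.
function scoreReport(signals: RiskSignal[]): RiskReport {
  const breakdown: Record<string, number> = {};
  let score = 0;
  for (const s of signals) {
    breakdown[s.source] = (breakdown[s.source] ?? 0) + s.weight;
    score += s.weight;
  }
  return { score, signals: breakdown };
}

// Example: an abusive-keyword match (3) plus a "no prior relationship"
// trigger (2) yields the score of 5 from the example above.
const report = scoreReport([
  { source: "keywords:abusive", weight: 3 },
  { source: "trigger:norelationship", weight: 2 },
]);
console.log(JSON.stringify(report));
// => {"score":5,"signals":{"keywords:abusive":3,"trigger:norelationship":2}}
```

Keeping the breakdown alongside the total is what would let a moderation UI show *why* a post scored the way it did, which ties back to the preview idea above.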