Now to be fair, I don’t have a problem with algorithmic content filters if they’re transparent and therefore not black boxes.
But they can’t be any social network’s main form of moderation.
There is no replacement for reporting and active monitoring.
It strikes me that Bluesky’s entire business plan is about selling a “marketplace of algorithms”. They also know there’s pushback on this. However, they believe that people ultimately like algorithms despite what they say. And that might be true for some. But that can’t be applied to everything. Sometimes you need a human to curate things. Especially with moderation.

@atomicpoet Reminds me of Pandora. Not all that novel. Also, elaborate algorithms can be overkill for many users who just want to follow who they want to follow, not get bombarded with a bunch of stuff advertisers tell programmers the person "should" follow. Remember, Pandora's Box was considered a bad thing!

@atomicpoet You could easily have a marketplace of algorithms on Mastodon right now. Just build it into the client. There's been an explosion of Mastodon client apps. You could build a client app with configurable algorithms to mine your feeds and present a curated feed. Bluesky isn't doing anything here that couldn't be done right now on Mastodon.

Again, Bluesky’s approach seems to hinge on “let users define how the algorithm works”. Inevitably, I feel this is going to turn into a war of different factions trying to define what the algorithm does.

@atomicpoet Why not just work off likes and boosts and weigh them appropriately? Or evaluate your lists for keywords. There's a gazillion possibilities with that much user intervention.

@atomicpoet That may well be the outcome, but the underlying concept of user-applied, label-based filtering and prioritizing is not a terrible concept on its face. The devil will surely be in the details. But to their credit, they are envisioning some constraints on the firehose as it scales. There's been no discussion of such things here (that I've heard), and that's concerning. Especially since implementing any approach could be politically daunting.
@atomicpoet As much as I personally agree, I’ve been seeing a fair few people bemoan the lack of algorithmic feeds here. Maybe they’re actually much more popular than we realize, among people less similar to us?

@philip I have no doubt that some people like them. But they're not for me, and I think the black box variety is dangerous.

@atomicpoet They’re not for me either. But after having talked about this topic with more people lately, I’m convinced the majority of people actually want them, and we’re the odd ones out.

@atomicpoet Trust and Safety run by humans is the only real solution for 2023! I'd love to be proven wrong :-) but I'm 99% right. Perhaps in 2030 or so I'll be wrong!
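For what it's worth, the "work off likes and boosts and weigh them appropriately" idea from the thread above is straightforward to sketch client-side. Here's a minimal, hypothetical example: the `Post` type, the weights, and the keyword bonus are all illustrative assumptions, not any real Mastodon client's or Bluesky's actual scoring logic.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int = 0
    boosts: int = 0

def score(post, keywords=(), boost_weight=2.0, keyword_bonus=5.0):
    # Hypothetical weights: count a boost as worth two likes,
    # and bump posts that match the user's chosen keywords.
    s = post.likes + boost_weight * post.boosts
    if any(kw.lower() in post.text.lower() for kw in keywords):
        s += keyword_bonus
    return s

def curate(posts, keywords=(), limit=10):
    # Return the highest-scoring posts first; the user controls
    # the keywords and weights, so nothing here is a black box.
    ranked = sorted(posts, key=lambda p: score(p, keywords), reverse=True)
    return ranked[:limit]
```

The point of the sketch is that every knob (weights, keywords, limit) is user-visible and user-set, which is exactly the transparency the top-level post asks for.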
Here’s the other thing. We assume that Bluesky deems the likes of Neo-Nazis as “political hate-groups”.
But how do we know this?
They don’t exactly say who’s a “political hate-group”.
Is Antifa a “political hate-group”? What about BLM? What about Planned Parenthood?
We don’t know who Bluesky’s algorithm is hiding—but we’re taking it at its word that it knows best.