Chris Trottier

#Bluesky just added algorithmic content filtering. And here is the most obvious problem.

They give you the option to show violent hate groups.

Which means they know their platform could be used to spread hate but are offloading moderation to users and algorithms.

What could ever go wrong?

Well, it means that if people deliberately want to see violent political groups, Bluesky has put out the welcome mat for them.

And based on what happened on Twitter, we know how this ends.

37 comments
Phil Stevens

@atomicpoet It also lets the whole world know that Dorsey is OK with running a nazi bar. If they have the algorithmic ability to identify these groups, they have the ability to ban them.

Chris Trottier

@phil_stevens Jack Dorsey isn’t running Bluesky. He is sitting on the board of directors, though.

Phil Stevens

@atomicpoet He makes a good enough proxy and is the de facto figurehead.

Chris Trottier

There’s also the problem that political hate-groups are very good at getting around algorithmic content filters.

If they want to push their agenda, it’s easy for them to utilize dog whistles.

This is because algorithms can’t see what human moderators can.
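The point about filters missing what human moderators catch can be shown with a toy example. This is an editorial sketch, not anything from Bluesky: a naive keyword blocklist catches exact matches but misses a trivial character substitution, which is exactly the kind of evasion described above. The blocklist and terms are hypothetical.

```python
# Minimal sketch: a keyword-based filter is easy to evade.
# The blocklist and the "dog whistle" substitution are illustrative only.
import re

BLOCKLIST = {"badword"}

def is_flagged(post: str) -> bool:
    """Flag a post if any alphabetic token matches the blocklist exactly."""
    tokens = re.findall(r"[a-z]+", post.lower())
    return any(token in BLOCKLIST for token in tokens)

print(is_flagged("this contains badword"))   # True: exact match is caught
print(is_flagged("this contains b4dw0rd"))   # False: trivial substitution slips through
```

A human moderator reads "b4dw0rd" effortlessly; the exact-match filter tokenizes it into harmless fragments.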

Chris Trottier

Now to be fair, I don’t have a problem with algorithmic content filters if they’re transparent and therefore not black boxes.

But it can’t be any social network’s main form of moderation.

There is no replacement for reporting and active monitoring.

Chris Trottier

Here’s the other thing. We assume that Bluesky deems the likes of neo-Nazis “political hate-groups”.

But how do we know this?

They don’t exactly say who’s a “political hate-group”.

Is Antifa a “political hate-group”? What about BLM? What about Planned Parenthood?

We don’t know who Bluesky’s algorithm is hiding—but we’re taking it at its word that it knows best.

Chris Trottier

It strikes me that Bluesky’s entire business plan is about selling a “marketplace of algorithms”.

They also know there’s pushback on this.

However, they believe that people ultimately like algorithms despite what they say. And that might be true for some.

But that can’t be applied to everything. Sometimes you need a human to curate things.

Especially with moderation.

blueskyweb.xyz/blog/3-30-2023-

OccidentalonPurpose

@atomicpoet reminds me of Pandora. Not all that novel. Also, elaborate algorithms can be overkill for many users who just want to follow who they want to follow, not get bombarded with a bunch of stuff advertisers tell programmers the person "should" follow. Remember, Pandora's Box was considered a bad thing!

Mike Fraser :Jets: :flag:

@atomicpoet You could easily have a marketplace of algorithms on Mastodon right now. Just build it into the client. There's been an explosion of Mastodon client apps. You could build a client app with configurable algorithms to mine your feeds and present a curated feed. Bluesky isn't doing anything here that couldn't be done right now on Mastodon.
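The "build it into the client" idea could be sketched roughly like this. Everything below is hypothetical, not a real Mastodon client API: the app keeps a local registry of user-selectable feed algorithms and applies one on the device, with no server-side changes.

```python
# Sketch of a client-side "marketplace of algorithms": the client holds a
# registry of curation functions and applies the user's pick locally.
# Post fields ("time", "boosts") are illustrative stand-ins.
ALGORITHMS = {}

def algorithm(name):
    """Decorator registering a feed-curation function under a name."""
    def register(fn):
        ALGORITHMS[name] = fn
        return fn
    return register

@algorithm("chronological")
def chronological(posts):
    # Plain reverse-chronological home timeline.
    return sorted(posts, key=lambda p: p["time"], reverse=True)

@algorithm("quiet")
def quiet(posts):
    # Hide posts above an arbitrary engagement threshold.
    return [p for p in posts if p["boosts"] < 100]

feed = [{"time": 1, "boosts": 500}, {"time": 2, "boosts": 3}]
print(ALGORITHMS["quiet"](feed))  # [{'time': 2, 'boosts': 3}]
```

Because the curation runs entirely in the client, each user (or app) can ship, swap, or audit algorithms without any cooperation from the server.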

Chris Trottier

Again, Bluesky’s approach seems to hinge on “let users define how the algorithm works”.

Inevitably, I feel this is going to turn into a war of different factions trying to define what the algorithm does.

blueskyweb.xyz/blog/4-13-2023-

Mike Fraser :Jets: :flag:

@atomicpoet Why not just work off likes and boosts and weigh them appropriately? Or evaluate your lists for keywords. There are a gazillion possibilities with that much user intervention.
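The "weigh likes and boosts appropriately" idea reduces to a simple linear score with user-tunable weights. The weights and post fields below are made-up illustrations:

```python
# Sketch: rank a timeline by a weighted sum of likes and boosts.
# The default weights are arbitrary; a user could tune them per feed.
def rank(posts, like_weight=1.0, boost_weight=2.5):
    """Return posts sorted by weighted engagement, highest first."""
    score = lambda p: like_weight * p["likes"] + boost_weight * p["boosts"]
    return sorted(posts, key=score, reverse=True)

posts = [{"id": "a", "likes": 10, "boosts": 0},
         {"id": "b", "likes": 0, "boosts": 5}]
print([p["id"] for p in rank(posts)])  # ['b', 'a']
```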

Shoq

@atomicpoet That may well be the outcome, but the underlying concept of user-applied, label-based filtering and prioritizing is not a terrible concept on its face. The devil will surely be in the details. But to their credit, they are envisioning some constraints on the firehose as it scales. There is no discussion of such things here (that I've heard), and that's concerning. Especially since implementing any approach could be politically daunting.

藤井太洋, Taiyo Fujii

@atomicpoet I imagine somebody will claim to be an influencer to get spam tags removed.

Lee 🌏

@atomicpoet
This is how I see it.
I log into Mastodon and have the option to:
See oldest (not viewed) toots first.
See newest toots first.
See toots sorted by (I then have a list of options):
Most popular
Your favourites
Or choose another (algorithm, from your instance).
Would that not cover all scenarios and provide a choice?
How do you think that could be abused?
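That menu of options amounts to a dispatch table of sort functions. A rough sketch, with hypothetical post fields and a hypothetical "popularity" measure:

```python
# Sketch of a user-selectable sort menu for a timeline.
# "time" and "boosts" are illustrative fields, not a real API.
SORTERS = {
    "oldest": lambda posts: sorted(posts, key=lambda p: p["time"]),
    "newest": lambda posts: sorted(posts, key=lambda p: p["time"], reverse=True),
    "popular": lambda posts: sorted(posts, key=lambda p: p["boosts"], reverse=True),
}

def sort_feed(posts, mode="newest"):
    """Apply the user's chosen sort; unknown modes fall back to newest-first."""
    return SORTERS.get(mode, SORTERS["newest"])(posts)

posts = [{"time": 1, "boosts": 9}, {"time": 2, "boosts": 0}]
print(sort_feed(posts, "popular")[0]["time"])  # 1
```

An instance (or client) could expose extra entries in the table as the "choose another algorithm" option.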

Chris Trottier

@MrLee I don’t think I’m arguing against how Mastodon sorts its feeds.

Deadmule

@atomicpoet
If "I'm the algorithm" why do I need a marketplace of ...

Philip Mallegol-Hansen

@atomicpoet As much as I personally agree, I’ve been seeing a fair few people bemoan the lack of algorithmic feeds here.

Maybe they’re actually much more popular than we realize, among people less similar to us?

Chris Trottier

@philip I have no doubt that some people like them. But they're not for me, and I think the black box variety is dangerous.

Philip Mallegol-Hansen

@atomicpoet They’re not for me either. But after having talked about this topic with more people lately, I’m convinced the majority of people actually want them, and we’re the odd ones out.

roland

@atomicpoet Trust and Safety run by humans is the only real solution for 2023! Love to be proven wrong :-) but I'm 99% right. Perhaps in 2030 or so I'll be wrong!

Dani 🌻

@atomicpoet Wow, violent and bloody is an option.

Marlene CreepDish, VSOP

@atomicpoet Also political and hate groups shouldn’t be together. It’s reductive and confusing.

DELETED

@atomicpoet wait, what is it set to by default??

DELETED

@atomicpoet also, why even have a spam setting? Who opts in to seeing spam??

Oliphant

@atomicpoet

"May I interest you in some CSAM?"

-- BlueSky, apparently.

Andres Jalinton

@atomicpoet
@rysiek
Moderation teams are expensive. I appreciate every day the hard work many admins and moderators do for the :fediverse:. They are the truest ones.

Emelia 👸🏻

@atomicpoet I think for larger Mastodon or Fediverse services, it'd probably make sense to have some automatic flagging of content for moderators (rather than waiting for users to flag it)
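The automatic-flagging idea could be sketched as a simple pre-screen that queues suspicious posts for human review rather than waiting for user reports. The terms and heuristic below are invented placeholders; a real system would use a trained classifier, with a moderator always making the final call:

```python
# Sketch: auto-queue posts for moderator review instead of waiting on reports.
# SUSPECT_TERMS is a made-up placeholder for a real scoring model.
SUSPECT_TERMS = {"spamlink", "scam"}

def flag_for_review(posts):
    """Return the posts a human moderator should look at first."""
    return [p for p in posts
            if any(term in p.lower() for term in SUSPECT_TERMS)]

queue = flag_for_review(["hello world", "buy now at spamlink dot com"])
print(queue)  # ['buy now at spamlink dot com']
```

The key design point matches the thread: the algorithm only prioritizes the moderation queue; it never bans or hides anything on its own.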

Mike Masnick ✅

@atomicpoet I have a very, very, very different take on this... will write about it next week. I think this could be much smarter than you make it out to be.

Chris Trottier

@mmasnick I look forward to what you have to say but my experience is that motivated people find a way around content filters. Example: “accountants” on TikTok.

𝓐𝓷𝓭𝔂𝓣𝓲𝓮𝓭𝔂𝓮 𓀤

@atomicpoet @mmasnick Į ɖőń'τ ʈɧıŋĸ ċėṅşøɾbôτś ɔᾆṅ ʀɛᾄԀ ʈᏂ¡ƨ.

Steven L. Johnson

@atomicpoet It's an intriguing concept. I haven't used BlueSky; are they using their own ML models to categorize content into those categories? Or crowd-sourcing the labeling?

Golda

@atomicpoet Sometimes violent content can also be evidence of harms. I have posted violent content from sources.

slims :miyagi: 🧘‍♂️🎧

@atomicpoet Jesus, that's insane. The "violence" option is also one step away from "murder: show, hide".
