Jon S. von Tetzchner

Are social media platforms that utilize algorithmic feeds inherently anti-free-speech?

In normal discourse, we all have a voice, and we all have the choice of whether to listen. In the algorithmic space, formulas decide which voices to amplify and which to de-amplify.

Here on Vivaldi Social (Mastodon), I see posts from the people I follow, including posts they choose to boost. No formulas needed. This is, IMHO, the closest to free speech you can find, although there are some hate-speech rules, which is a good thing. Basically the same as in normal social discourse.

On Facebook I see posts Facebook thinks I want to see (and boy do they get it wrong), as well as a lot of AI-generated nonsense. There are some posts in there that interest me, but most do not.

Twitter has turned into an amplifier of whatever Elon wants amplified, mostly hate speech and misinformation. There are still interesting voices in there, but they are getting harder and harder to find.

What do you think?

#Facebook #Twitter #Mastodon #fediverse #SocialMedia

Anonymous poll

Agree: 164 votes (85.9%)
Disagree: 27 votes (14.1%)

191 people voted. Voting ended 11 November at 13:13.
16 comments
NeadReport

@jon Yes. Whoever controls the algorithm controls the message.

mustachio

@NeadReport @jon As machines start making meaning... whatever is input to train them controls the story. How do we deal with this as humans? 🤔 Shared human knowledge has the same problem: we develop trust via reputation, by knowing what sorts of things a person says and how valid they are within our own world view. So we are going to have to develop methods for characterizing a given LLM and the way it *thinks*.

mustachio

But then... do we just get turtles all the way down? How do you trust your trust model? I suppose you build it in a way where you can prove it matches your trust model? *validity*

Catweazle

@jon Every honest opinion is valid, but not fake news, hate speech, or xeno/homophobic propaganda, of which 🤬itter is the worst example since Musk converted it into a far-right personal blog.

Same with newspapers and other media: you always need to look at who pays for the ink, and you always need to check the content against other sources. Sadly, independent media are currently a minority amid all the manipulative political propaganda paid for by big corporations.

PioneerSketches

@jon I use both Facebook and Twitter, to a certain degree, but I only use groups and lists. That is the only way I can filter out the *trash* in the main feeds.

Random Tux User :fedora:

@jon
This may be overly pedantic, but I think free speech and the "right to an audience" are two different things. As long as the algorithm does not delete your posts, and they can still be seen if people look you up, you still have freedom of speech.

To me, the right to an audience is not something universal. Hate speech, for example, doesn't deserve an audience. As far as I'm concerned, say what you want, but that doesn't mean people have to hear it.

Random Tux User :fedora:

@jon
Also, platforms should be careful about suppressing speech too. Attention should be given to whether the speech harms people, and if it does, whether that is actually a bad thing. Calling out wrongdoing can harm people, but it's a good thing. Using slurs to hurt people isn't.

I agree with you that the Fedi does a decent job at this, since individuals have control over what kind of content they want to see. You can surround yourself with hate and spew it, but decent folks can simply not listen to you.

Alexandro Lacadena 📷

@jon I don't think algorithms are a bad thing in themselves; what some managers do with them is the real problem. On Twitter you could check just the posts from people you follow, but we tend to check the default feed instead. Of course, it turns out that the biggest platforms are now the most corrupted ones...

Michael Bishop ☕

@jon

I don't want an algorithm determining what I see and read. I just want a chronological feed.

Nalyd620

@jon One of the other reasons I left FB was the ad content. It was almost non-existent a couple of years ago; then, leading up to the election, we started seeing more ads than actual posts from friends and family. That was another contributing factor in why I left.

Kevin

@jon I agree, because the algorithms are optimized for user engagement and not for the user. Posts that generate a lot of attention (emotional, fake news, clickbait, …) will dominate the algorithmic feed so FB, Twitter, Instagram, … can show more ads. Also, by making the sorting very confusing, people keep doom-scrolling because they don't know when they're 'done'; with a simple timeline, people stop scrolling when they reach a post they've already seen.
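To make Kevin's contrast concrete, here is a minimal, hypothetical sketch in Python of the two feed styles he describes. Every name, field, and weight in it (Post, engagement_score, the 3x/5x multipliers) is invented for illustration; no platform publishes its real ranking code.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int = 0
    replies: int = 0
    shares: int = 0

def chronological_feed(posts: List[Post]) -> List[Post]:
    # Simple timeline: newest first, so readers know when they are "done".
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_score(p: Post) -> float:
    # Toy metric (weights made up): replies and shares count more than
    # likes, which is exactly what rewards emotional, fake, or clickbait
    # posts that provoke reactions.
    return p.likes + 3 * p.replies + 5 * p.shares

def engagement_feed(posts: List[Post]) -> List[Post]:
    # Engagement-optimized timeline: attention-grabbing posts dominate,
    # regardless of when they were posted.
    return sorted(posts, key=engagement_score, reverse=True)
```

The toy weights make Kevin's point: a metric that pays most for replies and shares is a metric that pays most for provocation.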

Kevin

@jon There's no free speech when we're all overreacting on social media to posts that are engineered to make people mad, sad, … Free speech on a platform should be regulated (I know, it sounds like a contradiction) so people can see the value of the medium again instead of being sucked into some rabbit hole.

Tim Chambers

@jon I'd agree that closed, proprietary algos definitely bless some speech and suppress other speech. Studies show YouTube's algorithmic recommendations drive over 77 percent of all video views. What I hope for the Fediverse is that most clients default to just a reverse-chron algo, but then also let USERS create, share, and tweak algorithms that they are empowered to use, or not use.
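One way to picture Tim's user-controlled approach: the client treats the ranker as an ordinary function the user can swap in, share, or leave off entirely. A hypothetical sketch, reusing the toy Post, chronological_feed, and engagement_feed definitions from the sketch above; this is not real Mastodon or Fediverse client code.

```python
from typing import Callable, List

# The ranker is just a function; reverse-chron is the default.
RankingFn = Callable[[List[Post]], List[Post]]

def render_feed(posts: List[Post],
                ranker: RankingFn = chronological_feed) -> List[Post]:
    # A user who opts in passes their chosen (or shared, or hand-tweaked)
    # ranker; a user who does nothing keeps the reverse-chron view.
    return ranker(posts)
```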

Tyrion 🐧🏴‍☠️

@jon

The pedant in me notes that "algorithms" can be a bit vague (after all, old/new sort is an algorithm 🤓), but I do understand you mean the more modern, AI-infused ones, which rank by engagement, allowed narratives, sentiment (positive/negative, happy/angry), whether bigger/verified/famous accounts interact with the piece of content, and so on, provided the content itself is not hate speech or otherwise illegal.

One of the reasons I like Mastodon over lemmy/reddit is that here we do not have the negative interaction (downvotes), and therefore no negative algorithm that plunges things to the bottom or auto-hides them. If someone is merely saying something like "I like drinking coffee with hotdog water", we can safely ignore them. If we feel strongly, we can reply, and that in itself can mitigate some negativity, as opposed to a downvote that just makes the person feel bad for their personal tastes. 🌭☕️

Dave Rahardja (he/him)

@jon Depends on the algorithm.

Twitter used to show linear timelines. IIRC at one point they moved to a hybrid model where an algorithm would boost interesting tweets in your timeline that you were likely to miss because they were too far down. This kind of algorithm can actually lift marginalized voices that would otherwise be buried in the traffic.

But any algorithm that optimizes for "engagement" is likely to boost incendiary or plainly false content, because such content gets a lot of replies. Unless it's paired with a content-moderation system that stamps out intolerant speech (and it never is), such intolerance will get boosted for engagement.
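Here is a hypothetical sketch of the hybrid model Dave recalls: a chronological timeline where a handful of high-scoring posts the reader probably scrolled past get lifted to the top. It reuses the toy Post and engagement_score definitions from the earlier sketch; the 12-hour cutoff and the boost count are invented.

```python
from datetime import datetime, timedelta
from typing import List

def hybrid_feed(posts: List[Post], now: datetime,
                boost_count: int = 3) -> List[Post]:
    # Plain reverse-chronological timeline first.
    timeline = sorted(posts, key=lambda p: p.posted_at, reverse=True)
    # Posts older than the cutoff would sit "too far down" to be seen.
    cutoff = now - timedelta(hours=12)
    likely_missed = [p for p in timeline if p.posted_at < cutoff]
    # Lift the few highest-scoring missed posts above the fresh ones.
    boosted = sorted(likely_missed, key=engagement_score,
                     reverse=True)[:boost_count]
    rest = [p for p in timeline if p not in boosted]
    return boosted + rest
```

Everything hinges on the scoring function: swap engagement_score for something else and the same mechanism could lift quiet, marginalized voices instead of incendiary ones, which is exactly Dave's point.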

Madeleine Morris

@drahardja @jon Thank you so much for this very accessible explanation.

Honestly, I really like Mastodon the way it is now. It works for me.
