Was talking to someone about #BlueSky the other day, and how they apparently used some sort of #AI for #moderation. It occurs to me that someone could eventually build some sort of tool for moderation on #Mastodon and the greater #Fediverse, which could use similar AI to flag posts it thought were inappropriate, thus taking some load off of #moderators. It also occurs to me that this may actually become necessary, as spammers and other bad actors get access to more powerful AI, which can do large-scale attacks on our social network.
I don't think I'm going to try this anytime soon, but someone could relatively easily put together an open source AI system for flagging posts.
If we could set aside the concerns about power consumption / environmentalism that come with large-scale fast #LLM systems, how would people on #Fedi feel about AI being used as a tool for moderation?
Please respond to and #boost this #poll, and post any thoughts about it in the comments!
This is the kind of thing that sounds like a good idea to people who don't talk to enough Black people in tech. 🤷🏿♂️
The paradox of almost every ML-based moderation system in existence:
* Black women receive the most abuse online
* ML systems disproportionately flag posts by Black women as abusive (false positives), while disproportionately missing abuse directed at Black women (false negatives)
Similarly, facial recognition systems, which are used most against Black folk, produce the most false positives on Black folk. 🤷🏿♂️
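The disparity described above is measurable: a moderation classifier can look accurate in aggregate while its error rates diverge sharply between groups. A minimal sketch of such a per-group audit, using entirely made-up labels and group names for illustration:

```python
# Hypothetical audit: compare a moderation classifier's error rates
# across demographic groups. Labels: 1 = abusive, 0 = not abusive.
# All group names and label/prediction data below are invented
# for illustration, not drawn from any real system.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for one group."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Toy per-group data: (true labels, model predictions)
groups = {
    "group_a": ([0, 0, 0, 1, 1], [0, 0, 0, 1, 1]),  # model does well
    "group_b": ([0, 0, 0, 1, 1], [1, 1, 0, 0, 1]),  # flags benign posts
                                                    # AND misses real abuse
}

for name, (y_true, y_pred) in groups.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"{name}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

An audit like this only surfaces the disparity, of course; it says nothing about fixing it, and it depends on having honest ground-truth labels, which is itself where much of the bias enters.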