Jon

Agreed that simpler tools, whose limits are easier for people to understand, might be less prone to the oversight problems. I talked once with an r/AskHistorians moderator about how tools fit into their intersectional moderation approach, and they told me that they used some very simple pattern-matching tools to improve efficiency ... stuff like that can be quite useful, if everybody understands the limitations and processes ensure there isn't too much reliance on the tools.
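
To make the idea concrete (this is purely illustrative, not anything the r/AskHistorians team described): a simple pattern-matching helper of this kind might look like the Python sketch below. The patterns, labels, and example posts are all hypothetical; the key property is that it only surfaces matches for a human to look at.

```python
import re

# Hypothetical patterns a small moderation team might maintain by hand.
# Each entry maps a human-readable label to a compiled regex.
SPAM_PATTERNS = {
    "crypto spam": re.compile(r"\bfree\s+crypto\b", re.IGNORECASE),
    "follower spam": re.compile(r"\bbuy\s+followers\b", re.IGNORECASE),
}

def flag_for_review(text: str) -> list[str]:
    """Return the labels of any patterns that match.

    Deliberately does nothing automatic: it only surfaces matches
    so a human moderator can decide what, if anything, to do.
    """
    return [label for label, pattern in SPAM_PATTERNS.items()
            if pattern.search(text)]

# Example: a moderator's queue tool might annotate reports like this.
for post in ["Get FREE CRYPTO now!!", "A frank discussion of history"]:
    matches = flag_for_review(post)
    if matches:
        print(f"flagged ({', '.join(matches)}): {post}")
```

Because everything is a hand-maintained regex, the team can read the whole rule set and know exactly what it will and won't catch, which is the point.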

But that's a strong argument against *AI-based* systems!

Of course, a different way to look at it is that there's an opportunity to start from scratch: build a good training set, and on top of it algorithms that focus on explainability and on serving as a tool to help moderators (rather than as a magic bullet). There are some great AI researchers and content moderation experts here who really do understand the issues and limitations of today's systems. But it's a research project, not something that's deployable today.
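
As a rough sketch of what "a tool to help moderators, not a magic bullet" could mean in practice (nothing here is from the post; the classifier, threshold, and field names are all made up): a wrapper that never actions anything itself, and attaches an explanation to every flag so a moderator can see why it fired.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    post_id: str
    score: float       # model confidence, not a verdict
    explanation: str   # why the model flagged it, in plain words

def review_queue(posts, classify, threshold=0.8):
    """Route model output to humans instead of auto-actioning.

    `classify` is a stand-in for whatever explainable model such a
    research project might produce; it returns (score, explanation).
    """
    for post_id, text in posts:
        score, why = classify(text)
        if score >= threshold:
            # Surface to a moderator; never remove or suspend here.
            yield Flag(post_id, score, why)

# Demo stub standing in for a real explainable model (hypothetical).
def toy_classify(text):
    hit = "buy followers" in text.lower()
    return (0.95 if hit else 0.1,
            "matched phrase 'buy followers'" if hit else "no signal")

posts = [("1", "buy followers cheap!"), ("2", "the weather is nice")]
for flag in review_queue(posts, toy_classify):
    print(flag)
```

The design choice the sketch illustrates: the model's output is an annotated queue entry, never an enforcement action, so the human stays in the loop by construction.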

@Raccoon@techhub.social @mekkaokereke@hachyderm.io

Jon

Also, related to your question of how much AI-based moderation would actually help, there's an important point in the "Moderation: Key Observations" section of the Governance on Fediverse Microblogging Servers report that @darius@friend.camp and @kissane@mas.to just published:

A lot of Fediverse moderation work is relatively trivial for experienced server teams. This includes dealing with spam, obvious rulebreaking (trolls, hate servers), and reports that aren’t by or about people actually on a given server. For some kinds of servers and for certain higher-profile or high-intensity members on other kinds of servers, moderators also receive a high volume of reports about member behaviors (like nudity or frank discussion of heated topics) that their server either explicitly or implicitly allows, and which the moderators therefore close without actioning.

These kinds of reports are the cleanest targets for tooling upgrades and shared/coalitional moderation, but it’s also worth noting that except in special circumstances (like a spam wave or a sudden reduction in available moderators), this is not usually the part of moderation work that produces intense stress for the teams we interviewed. (This is one of the findings that we believe does not necessarily generalize across other small and medium-sized servers.)


@Raccoon@techhub.social @mekkaokereke@hachyderm.io
