In practice, requiring human oversight of automated decision making doesn't correct for bias or errors -- people tend to defer to the automated system. Ben Green's excellent paper on this focuses on government use of automated systems, but the dynamic applies more generally. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3921216
"First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools."

And sure, as you point out, human moderators make mistakes today ... but those mistakes contaminate any training set, and algorithms typically magnify the biases in the underlying data.
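To make that last point concrete, here's a toy sketch (all numbers made up) of how biased training labels propagate: if moderators wrongly flag, say, 30% of posts from one community but only 5% from another, a model that learns those rates and applies a hard decision threshold turns a biased *tendency* into a blanket *rule*.

```python
# Hypothetical flag rates learned from biased human-moderation labels.
# 1 = flagged, 0 = not flagged; the numbers are invented for illustration.
train_labels = {
    "group_A": [1] * 5 + [0] * 95,    # 5% of group A's posts were flagged
    "group_B": [1] * 30 + [0] * 70,   # 30% of group B's posts were flagged
}

# A naive model just learns each group's historical flag rate...
rates = {g: sum(v) / len(v) for g, v in train_labels.items()}

# ...and a hard threshold magnifies the disparity: one group's posts
# are never auto-flagged, the other's always are.
decisions = {g: rate > 0.2 for g, rate in rates.items()}

print(rates)      # per-group flag rates learned from the data
print(decisions)  # the threshold turns a skew into a categorical rule
```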
@Raccoon@techhub.social @mekkaokereke@hachyderm.io
@jdp23 @mekkaokereke
Oh no, if I at any point suggested that I thought that an AI can be a better moderator than a human then I have written it poorly. No machine should ever be responsible for a management decision because a machine can't be held accountable.
Humans are definitely the better choice for moderation decisions.
This is a good point about the oversight problem though: with a system that just flags certain words or combinations thereof, it's easy for people to understand, internally, that these posts might not be bad. With a system that's doing some complicated thing that we don't understand beneath the surface, it's going to be a bit harder to make that connection.
And once again, this is a case of the system not really justifying itself: how much will it actually catch that isn't caught by simpler systems, and does that outweigh the real potential for poor oversight of a system with bad biases?