@derwinmcgeary the problem is that the training input is subconsciously biased and prejudiced. Most “diversity training” is about getting people to become aware of their subconscious prejudices and then (step two) to act to resist them. I don’t think this is possible with an AI trained on subconsciously prejudiced data. But I may be wrong...
@nicbest this was really just a dunk on Google for thinking they could add the word "diverse" to queries as a fix for all of that biased input. But on a deeper level, the biased output of centuries of both systemic and unconscious discrimination isn't just the training data for some gizmos: we are living in it, and there is no trivial fix.
Having said that, someone did mention that people do put a second bias-spotting AI into the training process, which might get you somewhere, although I can't help imagining a room-sized compute cluster rolling its eyes at being sent to mandatory diversity training.
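For the curious: the usual name for that trick is adversarial debiasing (Zhang, Lemoine & Mitchell, 2018). Below is a minimal, made-up sketch of the idea in PyTorch: a second "bias-spotter" network tries to guess a protected attribute from the main model's output, and the main model is penalized whenever it succeeds. The toy data, network sizes, and the `lam` trade-off weight are all illustrative assumptions, not anyone's actual pipeline.

```python
# A minimal sketch of adversarial debiasing, roughly in the spirit of
# Zhang, Lemoine & Mitchell (2018). All numbers here are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 8 features, a binary label y, and a binary protected attribute a.
X = torch.randn(256, 8)
y = torch.randint(0, 2, (256,)).float()
a = torch.randint(0, 2, (256,)).float()

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # how hard the bias-spotter pushes back (assumed, needs tuning)

for step in range(200):
    # 1. Train the adversary: can it recover the protected attribute
    #    from the predictor's output alone?
    logits = predictor(X).squeeze(1)
    adv_logits = adversary(logits.detach().unsqueeze(1)).squeeze(1)
    adv_loss = bce(adv_logits, a)
    opt_a.zero_grad()
    adv_loss.backward()
    opt_a.step()

    # 2. Train the predictor: be accurate on y, but also make the
    #    adversary's job hard (subtract its loss from ours).
    logits = predictor(X).squeeze(1)
    adv_logits = adversary(logits.unsqueeze(1)).squeeze(1)
    loss = bce(logits, y) - lam * bce(adv_logits, a)
    opt_p.zero_grad()
    loss.backward()
    opt_p.step()
```

In practice people apparently feed the adversary a hidden representation rather than just the final output, and schedule `lam` carefully, because an adversary that pushes too hard wrecks accuracy on the actual task. None of which changes the underlying point: the bias-spotter is trained on the same world's data as everything else.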