It's almost like the evidence keeps pointing out the truth.
I mean, rather than saying the *process* of creating #LLMs is broken and racist, maybe we should accept that *our society is broken and racist*. How can we hope to make an #AI that isn't racist when all the examples it's fed are inherently racist?
It's like trying to make a gourmet meal out of sewage by straining out the shit.
https://www.nature.com/articles/d41586-024-00779-1
@Okanogen We had the same issues - the exact same issues! - with less sophisticated algorithms a few years ago, because they were being trained on the same data.