It's almost like the evidence keeps pointing out the truth.
I mean, rather than concluding that the *process* of creating #LLMs is broken and racist, maybe we should accept that *our society is broken and racist*. How can we hope to make an #AI that isn't racist when all the examples it is fed are inherently racist?
It's like trying to make a gourmet meal out of sewage by straining out the shit.
https://www.nature.com/articles/d41586-024-00779-1
This is actually really important. Do we even have any business creating #AI or #LLMs when their training data is inherently racist? We can expect everything that comes out to be either blatantly racist or, at minimum, impossible to trust as free of racist assumptions.
@Okanogen No imperial languages for LLM training anymore!
@Okanogen In principle we should be able to make an AI that isn't racist the same way we make a person who isn't racist: by spending lots of time carefully curating and constructing their inputs, rather than just accepting everything they come across in the surrounding society.
But that’s antithetical to their goal of not doing any work