@TorbjornBjorkman @brianonbarrington @Wolven How else should an AI/algorithm be trained? If not on existing datasets (the larger the better), then how?
The cost of orienting a tool like this to an otherwise human task would likely be enormous, no? And it likely wouldn’t solve the problem anyway.
The real gap is knowledge/understanding of human bias, which itself requires awareness and acknowledgement of that bias.
How easy is it to assemble a team of engineers versed in such things? A team skilled in countering them might well require fundamentally diverse backgrounds and experience, but how does that kind of approach square with a typical management team or, indeed, the culture more broadly?
Seems a lot like a paradigmatic shift is required.
@TorbjornBjorkman @brianonbarrington @Wolven Or maybe some kind of audit process would work (clearly I have no expertise whatsoever).
It still seems, in most cases, like the problem is acknowledging that there’s a problem at all.