She dutifully gave the talk on the project as-is, complete with the rug pull at the end: “Here’s our results! They’re so broken! Look, it learned the bias in our dataset! Surprise!” It got an audible reaction from the audience. People •loved• her talk.
I wish there had been some HR folks at her talk.
Train an AI on your discriminatory hiring practices, and guess what it learns? That should be a rhetorical question, but I’ll spell it out: it learns how to infer the gender of applicants.
5/
An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination, can that serve as •evidence of bias• not just in AI, but in •society itself•?
If training an ML system on a company’s past hiring decisions makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?
6/