This is working out about as well as one would expect.
Now, in this case, the fool is hoist by his own petard. But, alas, the general tendency to use AI for things it is bad at is already racking up examples of it hurting third parties. The really obvious example is using AI in policing, where racist policing practice provides the training data for predictive models that then faithfully capture that racism.
I have the strong suspicion that there's actually a whole bunch of undetected examples in the wild, already fucking things up for everyone.
For instance...
2/?
Most corporations these days use some sort of enterprise-class web-based software for handling job applications, and most or all of them offer some sort of applicant filtering, based on keywords or some such.
I'm pretty confident that at least some major vendors' products don't work right. That's based on two things.
First, at least in IT, the problem of highly qualified and desirable applicants being mysteriously filtered out by the software is so well known that it's standard advice for job seekers to use social contacts within the org to make an end run around the filters.
Second, I've debugged software.
3/?