Honestly, the most alarming thing about AI isn't so much AI itself as how utterly hell-bent humans are on using it for things it does a bad job at. H. sapiens is bound and determined to use this chisel as a screwdriver.
Case in point: the recent news story about a lawyer (or pair of lawyers - finger-pointing is underway) who submitted a filing in federal court that had actually been written by ChatGPT.
What is the one thing one can assume everyone has heard about LLMs? That they make completely bogus shit up, including inventing nonexistent citations.
It would be hard to overstate how unacceptable it is to a court for a lawyer to submit a legal argument that cites nonexistent case law. That's the kind of shit that can get a lawyer disbarred. It's a, uh, *career-limiting* move.
But apparently some lawyer actually did it: he took the output of a computer program famous for fabricating false citations and piped it directly into a court.
1/?
This is working out about as well as one would expect.
Now, in this case, the fool is hoist by his own petard. But, alas, the general tendency to use AI for things it is bad at is already racking up examples of it hurting third parties. The really obvious example is the use of AI in policing, where racist policing practice provides the training data for predictive models that faithfully reproduce that racism.
I have the strong suspicion that there's actually a whole bunch of undetected examples in the wild, already fucking things up for everyone.
For instance...
2/?