@lritter @mcc I more mean the conflation of what fiction says an AI does (think and produce an answer) with what an LLM actually does (produce an answer-shaped blurb). Sure, the marketing also primes people to believe ChatGPT is a thinking machine, but a shocking number of people who have actually used it and seen the ways in which it is typically wrong still seem to believe it has some kind of intelligence. They say it “hallucinated” citations in a legal brief. No, it produced a legal-brief-shaped blob of text. Briefs have citations. The citations may happen to align with reality or they may not, but they’re not hallucinated.
Words themselves are meaningless to LLMs, like they are to right-wing politicians. Only the associations between the words matter.