The #LLMs aren't just weird text generators, and when these companies talk to investors they don't talk about whether the models are sentient. They talk about "understanding intent" as a synonym for matching search queries to ads. They're parsing your email, calendar, and docs and matching them to entities in their knowledge graph to predict your likelihood of clicking an ad. They don't talk about generated text as thought; it's there to optimize ad content and deliver better click-through rates to advertisers who pay to embed in the answers of "LLM-type experiences."
https://abc.xyz/investor/static/pdf/2022_Q4_Earnings_Transcript.pdf
I'm not saying LLMs are magic and can do everything they promise investors; I'm saying these companies don't care whether the bots can think. The bots won't work, and that's worse: what they certainly will do is deepen the logic of surveillance that drives their application in advertising, and provide a lot of flimsy, bias-ridden, nonfunctional LLMs as platforms for data consumers like governments, cops, and insurance companies to make use of surveillance data under the cloak of LLM datawashing.