I'm not saying LLMs are magic and can do all the things they promise to investors; I'm saying these companies don't care whether the bots can think. the bots won't work, and that's worse: what they certainly will do is deepen the logic of surveillance that drives their application in advertising, and provide a lot of flimsy, bias-ridden, nonfunctional LLMs as platforms for data consumers like governments, cops, and insurance companies to make use of surveillance data under the cloak of LLM datawashing.
for the one billionth time, anti-capitalism is the excluded viewpoint in academia.
why are we talking about how to detect students' cheating and not about why our neoliberal academies bankrupt our pedagogy by turning our classrooms into grade mills?
why are we talking about whether the #LLMs can think and not about how they exploit the ideology of convenience they have trained us into, so that every time we want to know something, the energy of a small bomb is expended on generating an answer, computing an ad profile, and nurturing a dependence that chokes off independent access to information at scale?