Alistair K

@jonny It is indeed reasonable! But somehow the illusion is outweighing the facts about how LLMs work internally.

I've been concerned that much of the debunking is itself couched in anthropomorphism rather than aimed at the LLM – I have colleagues who warn about how it "hallucinates" and "fabricates" and "lies", for instance. But it's not capable of any of that in the usual meanings of those words. And thus I worry that their language choices are making the problem worse.

Dr. jonny phd

@libroraptor
I asked them about this! They do indeed see it as a tool, and thought it was doing a semantic code analysis – not running the code per se, but something like static code analysis. Which again I think is reasonable, because their IDE was showing tooltips with the value of the variable, at least the initial assignment, so why wouldn't the chatbot be able to do that?
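
That intuition maps fairly well onto what static analysis actually does. A minimal sketch in Python, using the standard-library ast module – the two-line source snippet and variable names here are invented for illustration; the point is that it reads assignments straight out of the parse tree without executing anything, much as an IDE tooltip does:

    import ast

    # Hypothetical stand-in for the programmer's code; names and values
    # are made up for illustration.
    source = """
    retries = 3
    timeout = retries * 10
    """

    # Parse the text into a syntax tree and report each assignment
    # without running anything.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    # ast.unparse requires Python 3.9+
                    print(f"{target.id} = {ast.unparse(node.value)}")

Anything not literally in the source – a value only computed at runtime, say – is invisible to this kind of analysis, which is where the impression of "running" the code breaks down.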

Alistair K

@jonny I think that it's a brilliant tool. (And my colleagues do not like me to say this.)

But what does your programmer think LLMs do?

I offered a different conceptualisation to my colleagues by giving them Markov chains to play with, but they seemed to think even random prose generators were still creative, thinking agents, albeit of a less intelligent form.
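
A minimal sketch of that kind of toy, in Python: a word-level Markov chain that only records which words follow which and then samples from those observations. The corpus string and function names are invented for illustration.

    import random
    from collections import defaultdict

    def build_chain(text):
        """Record, for each word, the words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, length=15):
        """Walk the chain, picking each next word at random
        from the observed successors."""
        word = random.choice(list(chain))
        out = [word]
        for _ in range(length):
            successors = chain.get(word)
            if not successors:
                break
            word = random.choice(successors)
            out.append(word)
        return " ".join(out)

    # Invented toy corpus; any text works.
    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(build_chain(corpus)))

There is no state beyond the last word seen, which is the point of the demonstration: fluent-looking output with no model of meaning behind it.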

I've also been finding that hardly anyone who complains about AI knows what a huge class of things it is. Language is troubling.

David Gerard

@libroraptor @jonny the term "artificial intelligence" has been marketing jargon since it was coined in 1955, and it's never referred to any specific technology – it's selling the dream of your plastic pal who's fun to be with, especially when you don't have to pay him
