@jonny It is indeed reasonable! But somehow the illusion is outweighing the facts about how LLMs work internally.
I've been concerned that much of the debunking is also targeted not at the LLM itself, but at the anthropomorphism – I have colleagues who warn about how it "hallucinates" and "fabricates" and "lies", for instance. But it isn't capable of any of that in the usual meanings of those words. And so I worry that their language choices are making the problem worse.
@libroraptor
I asked them about this! They do indeed see it as a tool, and thought it was doing a semantic code analysis – not running the code per se, but something like static code analysis. Which is, I think, again reasonable: their IDE was showing tooltips with the value of the variable (at least its initial assignment), so why wouldn't the chatbot be able to do that?
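To make the gap concrete, here's a made-up snippet (the names are purely illustrative, not from their actual code): an IDE tooltip, or a model reading the source, can report the initial assignment, but the value after the program runs depends on input that neither of them ever sees.

```python
# Static inspection (what a tooltip or an LLM reading the source sees)
# only reveals the initial assignment, not the value at runtime.

retry_limit = 3  # a tooltip over `retry_limit` here would show 3


def load_config(overrides):
    """Overwrite module-level settings from a runtime-supplied dict."""
    global retry_limit
    if "retry_limit" in overrides:
        retry_limit = overrides["retry_limit"]  # value now depends on input


load_config({"retry_limit": 10})
print(retry_limit)  # prints 10 – knowable only by actually executing the code
```

The tooltip and the chatbot are both doing the first kind of reading; only execution gets you the second, and conflating the two is exactly the illusion at work here.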