@walruslifestyle
Their mental model is "I can talk to this thing; when I give it some code, it knows what that code is and can tell me about it the same way it seems to tell me about lots of other things." And they're not so naïve, in my opinion: products like Copilot do advertise themselves as understanding code, so assuming the LLM is actually parsing and reasoning about it, rather than generating plausible text from some seed vector in its latent space, seems reasonable enough to me.
@walruslifestyle
I don't disagree; if you know a little bit about how these things work, it's ridiculous. But he's just following everything he's been told about what they can do!