jonny (good kind)

@walruslifestyle
Their mental model is "I can talk to this thing; when I give it some code, it knows what that code is and can tell me about it, the same way it seems to tell me about lots of things." And they're not so naïve, in my opinion, because products like Copilot do advertise themselves as understanding code. So thinking the LLM is actually parsing it and reasoning about it, rather than generating plausible text from some seed vector in its latent space, is reasonable enough to me.

4 comments
jonny (good kind)

@walruslifestyle
I don't disagree. If you know a little bit about how these things work, it's ridiculous, but he's just following everything he's been told about what they can do!

Bornach

@jonny @walruslifestyle
Especially given the flood of YouTube videos demonstrating ChatGPT solving coding problems in minutes
youtu.be/jwpja9fcqaM

The implied assumption is that humans, with their very limited ability to memorize answers, would have to understand code in order to arrive at the correct answer. We apply that same assumption to LLMs at our own peril. Surely it couldn't have simply memorized all the answers and be applying pattern matching to generate them, right?

shrimp eating mammal 🦐

@jonny I'm glad you helped them see what to do instead!

Captain Janegay 🫖

@jonny @walruslifestyle Right, the ultimate problem here is one of misleading marketing. LLMs actually have a bunch of really useful applications, but those applications, I guess, are not the ones that the companies developing them thought would sell.
