@walruslifestyle
That's what it looks like it's doing!
@walruslifestyle @jonny The implied assumption is that humans, with their very limited ability to memorize answers, would have to understand code in order to arrive at the correct answer. We apply that same assumption to LLMs at our own peril. Surely it couldn't have simply memorized all the answers and be applying pattern matching to generate the answer, right?

@jonny @walruslifestyle Right, the ultimate problem here is one of misleading marketing. LLMs actually have a bunch of really useful applications, but those applications, I guess, are not the ones that the companies developing them thought would sell.
@jonny OK, but you don't know that until after you've observed it a while, and I guess what I'm wondering is where one would get the idea in the first place? From someone else? From the internet? If a person thinks ChatGPT is evaluating code, there's a stunningly wide gap in understanding, both of ChatGPT and of coding! How does that happen to a person? I'm fully failing to understand.
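One way to make the memorization point above concrete (a minimal Python sketch, not anything from the thread): generate arithmetic expressions at random, so no finite lookup table of question/answer pairs could cover them. For such inputs, the only reliable way to produce the right answer is to actually evaluate the expression, which is exactly what an interpreter does and what pure pattern matching over memorized text does not.

```python
import random

def random_expression(depth=3):
    """Build a random arithmetic expression string.

    A memorized answer table cannot cover these, since the space of
    expressions grows exponentially with depth; getting the value right
    reliably implies genuine evaluation.
    """
    if depth == 0:
        return str(random.randint(1, 99))
    left = random_expression(depth - 1)
    right = random_expression(depth - 1)
    op = random.choice(["+", "-", "*"])
    return f"({left} {op} {right})"

expr = random_expression()
# Ground truth comes from actually executing the expression. eval() is
# safe here only because the string contains nothing but digits,
# parentheses, spaces, and + - *.
truth = eval(expr)
print(expr, "=", truth)
```

Comparing a model's answer on a batch of these against `truth` is a simple probe of whether it is evaluating or merely recalling.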