@beeoproblem @jonny sorry, you don't seem to have understood what I'm saying. this isn't a situation where an LLM is acting on language alone and manipulating it to sound like it's producing human utterances; it's a situation where an LLM is being handed very carefully-controlled data that has been processed elsewhere and is simply being asked to express that in a more human-friendly way. so in this case, rather than trying to mangle the code itself, the LLM would instead pass it off to some sort of specially-trained neural network designed to, say, perform and analyze bitwise operations; then the LLM would present the results and explain what that network did
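the delegation pattern i mean could be sketched like this (a minimal illustration only — both function names are hypothetical stand-ins, not any real system): the specialized component does the actual computation deterministically, and the LLM layer's only job is to put precomputed, verified results into human-friendly words.

```python
def bitwise_analyzer(a: int, b: int) -> dict:
    """Stand-in for the specially-trained network: performs the
    bitwise operations exactly, with no language model involved."""
    return {"and": a & b, "or": a | b, "xor": a ^ b}

def present_results(results: dict) -> str:
    """Stand-in for the LLM's role: express results it was handed
    in friendlier language, without computing anything itself."""
    return ", ".join(f"{op.upper()} gives {val}" for op, val in results.items())

results = bitwise_analyzer(0b1100, 0b1010)
print(present_results(results))  # AND gives 8, OR gives 14, XOR gives 6
```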
difficult to understand without the diagrams, i guess
also advice for the future: don't come into people's replies and condescend to them. that's rude.