Bill Plein🌶

@evan @Gargron 2/2

The current LLMs are literally statistical models: a vast amount of training data distilled down into a very small amount of code (with embedded word vectors) that meets the goals of the humans who created it.
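
A back-of-the-envelope example, with purely illustrative numbers (no specific model), shows the scale of that distillation:

```python
# Back-of-envelope arithmetic with illustrative numbers, not any specific model.
train_tokens = 15e12     # assume ~15 trillion training tokens
bytes_per_token = 4      # rough average for tokenized English text
params = 70e9            # assume a 70-billion-parameter model
bytes_per_param = 2      # 16-bit (fp16) weights

training_data_tb = train_tokens * bytes_per_token / 1e12
model_size_gb = params * bytes_per_param / 1e9
print(f"training data ~ {training_data_tb:.0f} TB, model ~ {model_size_gb:.0f} GB")
# ~60 TB of text mapped into ~140 GB of weights: roughly a 400x compression.
```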

When an LLM “learns” from a conversation, it’s just adding new words (tokenized in their context) to the heap of input. It doesn’t change the mapping/model itself; that was fixed by the humans who developed the model.
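
A minimal sketch of that point (all names hypothetical, standing in for a real model API): the conversation grows every turn, and the weights never change.

```python
# Minimal sketch: in-chat "learning" is just a growing context window.
# `generate` is a hypothetical stand-in for a real frozen LLM.

FROZEN_WEIGHTS = {"fixed": "at training time"}  # read-only below

def generate(weights: dict, context: str) -> str:
    """Pretend next-reply generation; only reads `weights`, never writes."""
    return f"[reply conditioned on {len(context)} chars of context]"

context = ""
for user_msg in ["hi", "my cat is named Ada", "what's my cat's name?"]:
    context += f"User: {user_msg}\n"            # "learning" = appending tokens
    reply = generate(FROZEN_WEIGHTS, context)   # same weights on every call
    context += f"Assistant: {reply}\n"

# The model "remembers" only because the whole conversation is re-fed as
# input each turn; FROZEN_WEIGHTS is byte-for-byte identical throughout.
```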

Bill Plein🌶

@evan @Gargron 3/2 (I lied)

In short, ChatGPT-x doesn’t get to ChatGPT-(x+1) without the humans learning, applying that knowledge to a new model, training that model by burning down a forest or three, and then publishing the new distillation.
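
As a toy illustration (one parameter, made-up data; nothing like a real run), the only operation that ever changes weights is a separate, explicit training pass, and its output is a new artifact:

```python
# Toy illustration: weights change only inside an explicit training run.
# One parameter and made-up data; real runs do the same thing at vast scale.

def train(w: float, data: list[tuple[float, float]], lr: float = 0.1) -> float:
    """One pass of gradient descent fitting y ~ w * x (squared error)."""
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

w_v1 = 0.0                                    # "ChatGPT-x": frozen at release
w_v2 = train(w_v1, [(1.0, 2.0), (2.0, 4.0)])  # a new, expensive training run
print(w_v1, w_v2)  # 0.0 vs ~1.68: the next version is a different artifact
```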

I could be wrong but that’s my understanding.
