Dr. Sbaitso

@Gargron Yup. They're not intelligence. They're not hallucinating. They're not making new connections.

They're data-center scale autocorrect.

13 comments
Marsh Ray

@drsbaitso @Gargron A while back, I asked GPT-4 to come up with a meme in response to this viewpoint.

Raphi

@drsbaitso @Gargron "Data-center autocorrect" is such a good fit, and it matches my experience exactly. Chatbots are so unreliable that you have to check each and every answer, which makes using them entirely pointless.

Lord Caramac the Clueless, KSC

@GMRaphi @drsbaitso @Gargron I wouldn't say chatbots are useless per se, but people haven't really learned how to use these tools yet. Think of the possibilities for gaming! We used to have background NPCs that looked the same and repeated the same lines over and over. Then we had NPCs that didn't look exactly the same, but were obviously assembled from a limited number of modules mixed together by a random number generator, and what they said was likewise a random pattern stitched together from text snippets. They no longer looked or sounded exactly alike, just roughly the same with slight differences.
Now we can have characters generated automatically from a rough description, and they can talk about whatever part of the game lore they were fed during training. The LLMs for NPCs don't have to be very big: they can run on the local GPU or NPU while the game is running, and the 3D model generator and the 2D texture generator each only need to run once per NPC. Instead of training those models on huge volumes of data scraped from the Internet, you train them on carefully selected data sets. NVIDIA also uses diffusion-based image synthesis to upscale games in real time: traditional shading and rendering produce only low-resolution, low-detail frames, and the diffusion model then scales them up to 4K with added detail.
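A minimal sketch of what such a lore-fed, locally run NPC model could look like, assuming a small instruction-tuned model loaded through the Hugging Face transformers pipeline; the model name, the NPC, and the lore text are placeholders, not anything from an actual game.

from transformers import pipeline

# Load a small text-generation model onto the local GPU (device=0);
# use device=-1 to fall back to CPU. The model name is a placeholder
# for any compact model that fits on a consumer GPU or NPU.
npc_brain = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    device=0,
)

# Game lore the NPC is allowed to draw on, baked into the prompt here;
# in a real game this would come from fine-tuning or a lore database.
LORE = (
    "You are Mirel, a blacksmith in the harbour town of Vask. "
    "The old lighthouse has been dark since the storm three winters ago."
)

def npc_reply(player_line: str) -> str:
    prompt = f"{LORE}\nPlayer: {player_line}\nMirel:"
    out = npc_brain(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
    # The pipeline echoes the prompt plus the continuation; keep only the reply.
    return out[0]["generated_text"][len(prompt):].strip()

print(npc_reply("What happened to the lighthouse?"))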
Our own brains do that kind of upscaling all the time. Our eyes aren't nearly as good as most of us think they are: we can only perceive colour near the centre of our field of view, and the resolution is also highest in the centre. The optical information is also preprocessed with heavy, quite lossy compression before it even enters the optic nerve. If you think of what comes from your eyes as a drawing, you get very sharp coloured pencils in a small frame in the middle of the image; the pencils become blunter as you move further out, eventually fading into very soft, blunt black pencils with very low detail across most of the picture. Our brains just extrapolate from that and make the entire mental picture look like the centre.
And when we form sentences, our brains often do something similar to what a chatbot does; it's just that they do a lot more than that. Vision and language are two interfaces our brains use to interact with the world, but our minds are more than just clouds of images and language.
With our current artificial neural networks (basically just billion-dimensional tensors; it's all linear algebra on steroids), we use far too much energy and far too many resources per "neuron"; we won't get anything even close to a human mind that way before our entire global civilisation can no longer afford it. However, I think an artificial mind on a human level is possible in principle. There are other, more compact, more energy- and material-efficient methods of computation. They aren't as precise, but high precision isn't needed for neural networks at all; in fact, they work better when they're not all that precise. We might therefore see a renaissance of analogue computing in AI research.
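A toy NumPy sketch of that precision point: quantising a dense layer's weights down to 8-bit integers barely changes its output compared to full 32-bit precision (random data, purely illustrative).

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)  # a dense layer's weights
x = rng.standard_normal(512).astype(np.float32)         # one input vector

# Symmetric per-tensor int8 quantisation: store the weights as small
# integers plus a single floating-point scale factor.
scale = np.abs(W).max() / 127.0
W_int8 = np.round(W / scale).astype(np.int8)

y_fp32 = W @ x                                   # full-precision output
y_int8 = (W_int8.astype(np.int32) @ x) * scale   # dequantised low-precision output

rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative error from 8-bit weights: {rel_err:.4f}")  # on the order of 1%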

Sicko

@drsbaitso @Gargron Not at all. Language models build an internal world model and understand what they're talking about.

Sicko

@drsbaitso If you want to talk with adults, you're going to need actual arguments instead of "lalala I can't hear you".

Dr. Sbaitso

@sicko I offered exactly as much detail as you did, more succinctly. That means I win.

Dr. Sbaitso

@sicko Hey, you should be trying to argue with me, not a mirror.

Dr. Sbaitso

@sicko You're probably right that the racism was a bad choice. But don't think I missed you using the n word to conclusively lose the argument.

Sicko

@drsbaitso And racism is always a top tier choice
