When's That

@bontchev Reason #872 that "AI" is a pile of crap and I don't want anything to do with it.

VessOnSecurity

@whensthat That's a bit harsh. AI, as a field, is huge. The current hype is about a very, very narrow part of it - the so-called generative large language models.

Despite sounding very human, they are not intelligent, do not understand what they are saying, cannot reason, and have no beliefs or convictions. They just have a huge matrix of probabilities for words and phrases, and they output whatever is most likely to follow the prompt the user gives them.
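
To make the "matrix of probabilities" point concrete, here's a toy sketch in Python - a bigram counter over a made-up ten-word corpus. A real LLM is unimaginably bigger and works on sub-word tokens, but the principle of "emit the most likely continuation" is the same:

    from collections import defaultdict, Counter

    # Toy "language model": count which word tends to follow which
    # in a tiny made-up corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def continue_prompt(word, length=4):
        out = [word]
        for _ in range(length):
            counts = following[out[-1]]
            if not counts:
                break
            # No understanding, no reasoning: just emit whatever
            # most often came next in the training data.
            out.append(counts.most_common(1)[0][0])
        return " ".join(out)

    print(continue_prompt("the"))  # -> "the cat sat on the"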

The best analogy I can think of is a parrot with a huge dictionary. It can sound very human and sometimes even simulate conversation, but it is not sapient.

When's That

@bontchev No one's calling them LLMs, though; everyone just calls them "AI". There's a difference.

Even that OpenAI Sora video thing is trash. They used videos it had created to show off how good it was, but the lady walking down the street swapped legs while walking, and the cat on the bed grew an extra left leg!

Does no one at these companies look at what they're creating and think, "That's clearly wrong; we should fix it before we release it"? Or do they just release it and expect we won't notice it's crap?

VessOnSecurity

@whensthat It's not easy to fix. In fact, it might even be impossible; we just don't know for sure yet.

My original AI background (a lifetime ago) was in expert systems - another sub-field of AI, and a very different one. There, a human programmer talks to a bunch of human experts, tries to extract their expert knowledge, and codifies it as IF/THEN/ELSE rules. The expert system has a huge database of such rules and an "inference engine" that processes them.
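
A minimal sketch of that architecture in Python, with a couple of made-up toy rules (real shells like CLIPS are far richer, but the shape is the same - a knowledge base of rules plus an engine that applies them):

    # Knowledge base: IF all conditions hold THEN conclude. Facts are strings.
    rules = [
        ({"has_feathers"}, "is_bird"),
        ({"is_bird", "cannot_fly"}, "maybe_penguin"),
    ]

    # Inference engine: forward-chain until no rule adds a new fact,
    # recording which rules fired and in what order.
    def infer(facts):
        facts, fired = set(facts), []
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    fired.append((conditions, conclusion))
                    changed = True
        return facts, fired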

Long story short, when an expert system tells you something, you can ask it two important questions - HOW and WHY - i.e., how did you reach this conclusion, and why do you think so - and it will explain itself by showing which rules in its knowledge base fired and in what order. Then, if the answer is wrong, you can "fix" the rules.
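
With the toy sketch above, answering HOW is just replaying the recorded trace - and if the conclusion is wrong, you know exactly which rule to go and edit:

    facts, fired = infer({"has_feathers", "cannot_fly"})
    for conditions, conclusion in fired:  # HOW: the audit trail
        print(f"IF {sorted(conditions)} THEN {conclusion}")
    # IF ['has_feathers'] THEN is_bird
    # IF ['cannot_fly', 'is_bird'] THEN maybe_penguin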

Not so with the generative models (they are based on neural networks, BTW). You give them a humongous amount of data and they somehow learn to recognize things - like how to tell a dog from a cat, or what words are most likely to follow a request to tell a joke. But they cannot explain how they reached a conclusion, and if there is a problem, you don't know how to fix them.
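
The contrast, in the same toy spirit: a neural network's "knowledge" is nothing but arrays of learned numbers (the weights below are made up for illustration), and there is no rule trace to replay:

    import numpy as np

    # Two-layer toy network; pretend training produced these weights.
    W1 = np.array([[0.8, -0.3],
                   [0.1,  0.9]])
    W2 = np.array([0.5, -0.7])

    def predict(x):
        hidden = np.maximum(0.0, W1 @ x)         # ReLU layer
        return 1 / (1 + np.exp(-(W2 @ hidden)))  # sigmoid output

    print(predict(np.array([1.0, 0.0])))  # ~0.58 -- a confident-looking number
    print(W1, W2)  # ...and these matrices are the entire "explanation" of HOW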

So, neural networks are much easier to build than expert systems (training them is computationally expensive but requires very little human effort), but they often generate confidently wrong bullshit, and you have no idea how to fix them.
