Erik Jonker

@resuna @david_chisnall COBOL is not a monstrosity as a programming language, it's of course legacy

Erik Jonker

@resuna @david_chisnall …a natural-language interface for a computer can have enormous benefits. A good example is an educational context: you can interact, ask questions, etc. in a way not possible before

Resuna

@ErikJonker @david_chisnall

Even communicating with other humans in natural language leads to confusion, and humans are much much better at dealing with ambiguity than any computer.

Erik Jonker

@resuna @david_chisnall of course, but current AI models can provide a level of education that scales easily; they will supplement humans in their roles and sometimes replace them. Current models can perfectly help students with high-school math, even with some ambiguity

Resuna

@ErikJonker @david_chisnall

The software that people refer to as "AI" is nothing more than a parody generator, and is really really bad at dealing with ambiguity. It's a joke. If you actually think that it is capable of understanding, then it has been gaslighting you.

Erik Jonker

@resuna @david_chisnall I actually know how these models work; it's not about intelligence or understanding. They are just tools, but very good ones in my own experience

Erik Jonker

@resuna @david_chisnall ...if you have tried GPT-4o or a tool like NotebookLM, then you know they are more than parody generators. Denying the capabilities of these technologies doesn't help, especially because there are real risks/dangers with regard to their use

Cluster Fcku

@ErikJonker @resuna @david_chisnall now take your comments and substitute like this: "I find English and German very useful for work. Denying the capabilities of *natural languages* doesn't help, especially because there are real risks/dangers with regard to their use." At times language appears as outer thought, but do not mistake it for decisive thought. As a centralized source for inquiry and digestion, LLMs are far more dangerously illusory than the natural languages spoken by billions.

Resuna

@ErikJonker @david_chisnall

They operate purely on text patterns: they do not reason, they do not build models, they just glue tokens together. There is nothing in their design to do any more than that. This is an inherent feature of every program in this class.
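To make "gluing tokens together" concrete, here is a minimal toy sketch in Python: a bigram generator over a made-up corpus. It illustrates the shape of the objective (predict the next token from the preceding context), not a real LLM's scale or architecture.

```python
import random
from collections import defaultdict

# Toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which token follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate by repeatedly picking a plausible next token.
word, out = "the", ["the"]
for _ in range(8):
    nxt = follows.get(word)
    if not nxt:            # no observed continuation; stop
        break
    word = random.choice(nxt)
    out.append(word)
print(" ".join(out))       # e.g. "the dog sat on the mat and the cat"
```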

naught101

@ErikJonker @resuna @david_chisnall

Huh? Perfectly?

There have been multiple instances of people showing LLMs getting answers wrong on even the most basic arithmetic problems. That's not a bug; it's an inherent feature of the model, which draws meaning from language only and has no concept of maths.

That incorrectness can only get more likely as math problems get more complex. And the more complex it gets, the harder it is for humans to detect the errors.

How is that perfect for education?
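One quick way to see the "no concept of maths" point, assuming the `tiktoken` package is available: common tokenizers split numbers into arbitrary chunks, so the model manipulates text fragments rather than place-value digits.

```python
import tiktoken

# cl100k_base is one widely used encoding; others split differently.
enc = tiktoken.get_encoding("cl100k_base")
for n in ["7", "1234", "98765432"]:
    pieces = [enc.decode([t]) for t in enc.encode(n)]
    print(n, "->", pieces)
# "98765432" comes back as chunks like "987", "654", "32":
# text pieces, not digits with place value.
```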

Erik Jonker

@naught101 @resuna @david_chisnall as a support tool during homework, where it can give additional explanation, I see a bright future for the current best models (for high-school-level assignments); for text-based tasks they are even better (not surprising for LLMs). Of course people have to learn to check and not fully trust; at the same time there is a lot of added value. It's my personal/micro observation, but I see it confirmed in various papers

RAOF

@ErikJonker @naught101 @resuna @david_chisnall

“Of course people have to learn to check and not fully trust”

This is what makes them particularly ill-suited for educational tasks. A large part of education on a subject is developing the ability to check, to have an intuition for what is plausible.

Erik Jonker

@RAOF @naught101 @resuna @david_chisnall true, but you can adapt and fine-tune models for that purpose
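For what "adapt and fine-tune" could look like in practice, here is a minimal sketch assuming the Hugging Face `transformers` library; the base model ("gpt2"), the one-line tutoring corpus, and the hyperparameters are all placeholder choices, not a claim about what any deployed tutor actually does.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token          # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for a curated corpus of worked, step-by-step solutions.
texts = ["Q: what is 12 * 9? Work step by step: 12 * 9 = 108."]
enc = tok(texts, padding=True, return_tensors="pt")

class TutorData(torch.utils.data.Dataset):
    def __len__(self):
        return enc.input_ids.size(0)
    def __getitem__(self, i):
        # For causal-LM fine-tuning, the labels are the input ids themselves.
        return {"input_ids": enc.input_ids[i],
                "attention_mask": enc.attention_mask[i],
                "labels": enc.input_ids[i]}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tutor-ft",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=TutorData(),
)
trainer.train()
```

Note that this only reshapes the next-token distribution toward a tutoring corpus; by itself it guarantees nothing about correctness, which is the point pressed next.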

RAOF replied to Erik

@ErikJonker @naught101 @resuna @david_chisnall can you? How? Is there an example of this that you have in mind, or is this more a “surely things will improve” belief?

What is the mechanism that bridges “output the token statistically most-likely to follow the preceding tokens” and “output the answer to the student's question”?
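The quoted mechanism can be written down directly. A minimal sketch of greedy decoding, again assuming `transformers` (GPT-2 as an arbitrary small stand-in), is literally the loop being described:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("2 + 2 =", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits
    # The token statistically most likely to follow the preceding tokens.
    next_id = logits[0, -1].argmax()
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tok.decode(ids[0]))
```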

RAOF replied to RAOF

@ErikJonker @naught101 @resuna @david_chisnall also, isn't the task you're suggesting is possible just equivalent to “make an LLM you don't need to check the results of”?

Dmitri Ravinoff

@ErikJonker
I read this a lot (help in a learning context) but it doesn't gel with my learning experience: someone or something that "gives explanations" but at the same time "can't be trusted" (not to tell bullshit) is totally useless in this context. Or not?
@naught101 @resuna @david_chisnall

violetmadder

@ErikJonker @naught101 @resuna @david_chisnall

The support students need for their work, is A HUMAN BEING WHO IS GOOD AT UNDERSTANDING AND EXPLAINING THINGS.

A good teacher/tutor/sibling/etc can break down an explanation and present it in different ways tailored to the student's understanding. They can look at a student's work and even if it's incorrect, see what the student's train of thought was and understand what they were trying to do.

Our society already drastically undervalues that crucial, mind-accelerating work-- arguably the most important of all human endeavors, as everything else relies on it.

Glorified stochastic parrots spewing botshit are no damned substitute.


Jimmy Havok

@ErikJonker @resuna @david_chisnall I deal with people's questions every day, and much of the time they aren't even sure what they are asking for. It takes a good deal of drilling down to get to what they need to know. A lot of the work is figuring out what they need to know in order to find out what they want to know. It's difficult for a human with shared experience; I'm skeptical that an LLM could manage it.

Cogito Ergo Disputo

@jhavok @ErikJonker @resuna @david_chisnall I agree with you. I never understood the expectation that AI would start coding for us when the biggest problem has always been defining the desired end result in a complete and unambiguous way. And humans suck at doing that, so good luck having AI do it. But that doesn't mean there are no good uses for LLMs. It'll depend on how much training the LLM goes through and on carefully limiting the scope of what they're used for.

Jimmy Havok

@Disputatore @ErikJonker @resuna @david_chisnall I suspect LLMs will be used to put proofreaders out of work...but not actual editors.

Cogito Ergo Disputo

@jhavok @ErikJonker @resuna @david_chisnall Could be. But there's a bunch of other opportunities. Current chat bots are shit. There's a huge opportunity for improvement there. Doing initial drafts for business proposals is another. And I'm sure there are many others.

Jimmy Havok

@Disputatore @ErikJonker @resuna @david_chisnall I'm curious what effect quantum computing will have on AI. Will it give enough processing speed to match organics? Will the secret sauce emerge out of speed alone, or will something else be needed? Will we even recognize AGI if it happens? Will it behave?

Resuna

@jhavok @Disputatore @ErikJonker @david_chisnall

First: quantum computing isn't magic and its scale may never get to the point where it can be used for anything as complicated as some equivalent to large language models.

Second: the problem isn't speed, the problem is the algorithms. I don't think there is any reason to assume that classical computing cannot be used to solve the problem. The problem is that all the oxygen in the room has been sucked into this dead-end technology.

Jimmy Havok replied to Resuna

@resuna @Disputatore @ErikJonker @david_chisnall I see a problem in writing an algorithm for something we don't even understand (AGI). I do agree that LLMs aren't going to do the job, since all they do is replicate grammatical logic, which allows them to fake intelligence rather convincingly without being intelligent. Even human-hosted LLMs aren't very smart; that's why we came up with things like formal logic.

Resuna

@ErikJonker @david_chisnall

My first job was programming in COBOL before it was legacy. It is terrible. It always was terrible. It's not a natural language, it's not ambiguous, but trying to make it look like a natural language was an unmitigated disaster at every level. The same is true of Perl's "linguistic" design. Even just pretending to be a natural language spawns monstrosities.

Edit: see also AppleScript.

Erik Jonker

@resuna @david_chisnall it is extremely stable and durable for sure, ask any financial institution 😃

jhannafin

@ErikJonker @resuna @david_chisnall OK, but that's not because of COBOL. You could write something durable and stable in any programming language. Financial software is written in COBOL because that was the language of the mainframe at the time. The fact that it's still largely in COBOL is because it's expensive to rewrite, the returns on a rewrite are hard to quantify, and the risks are huge.

Resuna

@ErikJonker @david_chisnall

Have you ever written any code in COBOL? Everything in COBOL takes longer to write: the fundamental operations are simplistic and verbose, the program structure is stilted and restrictive, the way you define data structures is horribly antiquated, and a huge number of the problems that make writing COBOL so slow and painful are due to its mistaken "language-like" design.
