Erik Jonker

@resuna @david_chisnall of course, but current AI models can provide a level of education that scales easily; they will supplement humans in their roles and sometimes replace them. Current models can perfectly help students with high school math, even with some ambiguity

Resuna

@ErikJonker @david_chisnall

The software that people refer to as "AI" is nothing more than a parody generator, and is really really bad at dealing with ambiguity. It's a joke. If you actually think that it is capable of understanding, then it has been gaslighting you.

Erik Jonker

@resuna @david_chisnall I actually know how these models work; it's not about intelligence or understanding. They are just tools, but very good ones in my own experience

Erik Jonker

@resuna @david_chisnall ...if you have tried GPT-4o or a tool like NotebookLM, then you know they are more than parody generators. Denying the capabilities of these technologies doesn't help, especially because there are real risks/dangers with regard to their use

Cluster Fcku

@ErikJonker @resuna @david_chisnall now take your comment and substitute like this: "I find English and German very useful for work. Denying the capabilities of *natural languages* doesn't help, especially because there are real risks/dangers with regard to their use." At times language appears as outer thought, but do not use it as decisive thought. As a centralized source for inquiry and digestion, LLMs are far more dangerously illusive than the natural languages used by billions.

Resuna

@ErikJonker @david_chisnall

They operate purely on text patterns: they do not reason, they do not build models, they just glue tokens together. There is nothing in their design to do any more than that. This is an inherent feature of this entire class of programs.

naught101

@ErikJonker @resuna @david_chisnall

Huh? Perfectly?

There have been multiple instances of people showing LLMs getting the most basic arithmetic problems wrong. That's not a bug; it's an inherent feature of a model that draws meaning from language only and has no concept of maths.

That incorrectness can only get more likely as math problems get more complex. And the more complex the problem gets, the harder it is for humans to detect the errors.

How is that perfect for education?

Erik Jonker

@naught101 @resuna @david_chisnall as a support tool during homework, where it can give additional explanation, I see a bright future for the current best models (for high-school-level assignments); for text-based tasks they are even better (not strange for LLMs). Of course people have to learn to check and not fully trust, but at the same time there is a lot of added value. It's my personal/micro observation, but I see it confirmed in various papers

RAOF

@ErikJonker @naught101 @resuna @david_chisnall

Of course people have to learn to check and not fully trust,

This is what makes them particularly ill-suited for educational tasks. A large part of education on a subject is developing the ability to check, to have an intuition for what is plausible.

Erik Jonker

@RAOF @naught101 @resuna @david_chisnall true, but you can adapt and fine-tune models for that purpose

RAOF replied to Erik

@ErikJonker @naught101 @resuna @david_chisnall can you? How? Is there an example of this that you have in mind, or is this more a “surely things will improve” belief?

What is the mechanism that bridges “output the token statistically most-likely to follow the preceding tokens” and “output the answer to the student's question”?
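For concreteness, here is a toy sketch of that loop: a hand-counted bigram table stands in for the learned network, and the corpus and `generate` helper are purely illustrative (no real model or library API is involved).

```python
# Toy greedy next-token decoding: emit whichever token most often
# followed the current one in a tiny corpus. A real LLM replaces the
# bigram table with a learned neural network, but the loop has this shape.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# Bigram statistics: which token follows which, and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_token, steps=5):
    """Repeatedly emit the statistically most likely next token."""
    out = [prompt_token]
    for _ in range(steps):
        candidates = following[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # greedy pick
    return " ".join(out)

print(generate("the"))  # prints "the cat sat on the cat" -- fluent-ish, no understanding
```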

RAOF replied to RAOF

@ErikJonker @naught101 @resuna @david_chisnall also, isn't the task you're suggesting is possible just equivalent to “make an LLM you don't need to check the results of”?

Dmitri Ravinoff

@ErikJonker
I read this a lot (help in a learning context), but it doesn't gel with my own learning experience: someone or something that "gives explanations" but at the same time "can't be trusted" (not to tell bullshit) is totally useless in this context. Or not?
@naught101 @resuna @david_chisnall

violetmadder

@ErikJonker @naught101 @resuna @david_chisnall

The support students need for their work is A HUMAN BEING WHO IS GOOD AT UNDERSTANDING AND EXPLAINING THINGS.

A good teacher/tutor/sibling/etc can break down an explanation and present it in different ways tailored to the student's understanding. They can look at a student's work and even if it's incorrect, see what the student's train of thought was and understand what they were trying to do.

Our society already drastically undervalues that crucial, mind-accelerating work-- arguably the most important of all human endeavors, as everything else relies on it.

Glorified stochastic parrots spewing botshit are no damned substitute.
