@ErikJonker @resuna @david_chisnall
Huh? Perfectly?
There have been multiple instances of people showing LLMs getting even the most basic arithmetic problems wrong. That's not a bug, it's an inherent feature of the model, which draws meaning from language alone and has no concept of maths.
Those errors only become more likely as the maths gets more complex. And the more complex it gets, the harder it is for humans to spot the mistakes.
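To make that concrete, here's a minimal sketch of how trivially checkable this is in code (ask_llm and its reply are hypothetical placeholders, not any real API):

```python
# Sketch: verify an LLM's arithmetic against exact computation.
# ask_llm() is a hypothetical stand-in for whatever chat API you use.

def ask_llm(prompt: str) -> str:
    # Hypothetical reply, invented for illustration. LLMs predict likely
    # tokens, so large products often come back plausible-looking but
    # wrong in a few middle digits.
    return "164,558,961,243"

a, b = 483_921, 340_067
claimed = int(ask_llm(f"What is {a} * {b}?").replace(",", ""))
actual = a * b  # Python integers are exact at any size

print(f"LLM said {claimed}, actual is {actual}: "
      f"{'match' if claimed == actual else 'MISMATCH'}")
```

The point isn't the specific numbers, it's that exact verification is a one-liner in code, while a human eyeballing a 12-digit answer will usually miss a wrong middle digit.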
How is that perfect for education?
@naught101 @resuna @david_chisnall As a support tool during homework, where it can give additional explanation, I see a bright future for the current best models (for high-school-level assignments). For text-based tasks they are even better (not surprising for LLMs). Of course people have to learn to check and not fully trust, but at the same time there is a lot of added value. It's my personal/micro observation, but I see it confirmed in various papers.