John Kavanagh

@david_chisnall

Most models trained with adversarial neural networks (i.e. two machine learning algorithms working together: one generating suggestions, one discriminating among them) run into the human, and therefore expensive, limit of well-labelled input data. LLM input data is inherently labelled to perfection.
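
To make the "inherently labelled" point concrete: in next-token prediction, every prefix of the training text is an input and the token that follows it is the label, so the annotations fall out of the raw data with no human annotation step. A minimal sketch in plain Python (the token list is just a made-up example, not real training data):

# A minimal sketch of self-labelled next-token data: every prefix of a
# token sequence is paired with the token that follows it, so the "labels"
# come from the text itself rather than from human annotators.
tokens = ["the", "cat", "sat", "on", "the", "mat"]  # hypothetical example text

# Build (input, label) pairs directly from the raw sequence.
pairs = [(tuple(tokens[:i]), tokens[i]) for i in range(1, len(tokens))]

for context, target in pairs:
    print(f"input: {context} -> label: {target!r}")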

This is why they are great, and a leap that should be relished. Open-source models keep access to this wonderful technology in the hands of the many, not the few.

And why cars still need drivers

argv minus one

@gaudi

And why LLMs are not intelligent. We humans don't need all of our sensory experiences labeled for us by some higher power. We figure things out for ourselves, with only each other for guidance.

John Kavanagh

@argv_minus_one @david_chisnall I do think that living things survive utilising the same kind of intelligence... intuition is probably more accurate. Reasoning is a higher function of the brain and takes more energy. 型 (kata) is that form of learning/doing which frees the mind from having to reason about a task, so that it can be applied to something more worthwhile. These are kata machines.
