@dalias they want badly to believe that minds can be reduced to simple stochastic models. This wasn't an entirely unreasonable hypothesis 20 years ago, but at this point it doesn't look like it's correct.
@ravenonthill The concept is still vaguely plausible, but their idea for how to achieve it is utter bullshit.
If you compare how human minds are "trained", there are multiple feedback layers in the form of consequences. Most importantly, we select very carefully what training inputs are used, rather than throwing giant, mostly-wrong and mostly-evil corpora at children, and most of the training is experiential, not ingesting word soup.