Ian Douglas Scott

@georgetakei If it's trained on output from humans, it may instead suffer from very human flaws. Or some imperfect simulacrum of the human flaws it's trained to imitate.

Which doesn't really have the same benefits AI was supposed to offer (being unbiased, logical, infallible, etc.). And it carries a different set of dangers than the ones some science fiction anticipated.

Tony Hoyle

@ids1024 @georgetakei What we currently call 'AI' is a language model, not a logical one, so it's unsurprising it's not logical :p

It will produce perfectly formed English sentences... nonsense ones, but syntactically correct.

Ian Douglas Scott

@tony @georgetakei Markov chain language models produce something like syntactically correct nonsense sentences. Modern deep learning models seem to do more than that; perhaps you could say they're "modeling" linguistic semantics as well.

But indeed, it's ultimately modeling language and not either logic or thought.
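For context, the kind of Markov chain language model mentioned above can be sketched in a few lines: it records which words follow which, then samples a chain of successors. The toy corpus and function names here are invented for illustration, not taken from any real system.

```python
import random
from collections import defaultdict

# Tiny invented corpus, tokenized into words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Bigram transition table: word -> list of words observed to follow it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="the", max_words=10, seed=0):
    """Sample a word sequence by repeatedly picking a random observed successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words - 1):
        successors = transitions.get(words[-1])
        if not successors:
            break
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate())
```

The output is locally plausible word-by-word but carries no model of meaning or logic, which is exactly the "syntactically correct nonsense" being discussed.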
