Ian Douglas Scott

@georgetakei What I find interesting is how modern "AI" differs from traditional ideas about the dangers/flaws/benefits of AI. In Star Trek or Asimov or 2001 you have computers that perfectly follow logical programming. If it kills everyone, it's a logical consequence of its programming. Maybe you defeat the computer by telling it about a paradox.

Unlike with rule based systems, statistical machine learning "AI" isn't particularly logical and just follows patterns in the training dataset.

Ian Douglas Scott

@georgetakei If it's trained on output from humans, it may instead suffer from very human flaws. Or some imperfect simulacrum of the human flaws it's trained to imitate.

Which doesn't really have the same benefits AI was supposed to offer (being unbiased, logical, infallible, etc.). And it poses a different set of dangers than the ones some science fiction anticipated.

Tony Hoyle

@ids1024 @georgetakei What we currently call 'AI' is a language model, not a logical one, so it's unsurprising it's not logical :p

It will produce perfectly formed English sentences... nonsense ones, but syntactically correct.

Ian Douglas Scott

@tony @georgetakei Markov chain language models produce something like syntactically correct nonsense sentences. Modern deep learning models seem to do more than that; perhaps you could say they're "modeling" linguistic semantics as well.

But indeed, it's ultimately modeling language and not either logic or thought.
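For context on the Markov chain comparison above: a minimal sketch (not any specific implementation discussed in the thread) of a word-level Markov chain text generator. It records which words follow which in a training corpus, then walks those transitions at random, which is why the output tends to be locally plausible but globally nonsensical:

```python
import random
from collections import defaultdict

def train(text):
    """Build a bigram table: map each word to the words observed after it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, picking each next word at random from its successors."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no word ever followed this one in training
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = train(corpus)
print(generate(chain, "the"))
```

Every pairwise transition in the output was seen in training, but nothing constrains the sentence as a whole, unlike a deep model whose long-range context lets it track something closer to meaning.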
