@georgetakei What I find interesting is how modern "AI" differs from traditional ideas about the dangers/flaws/benefits of AI. In Star Trek or Asimov or 2001 you have computers that perfectly follow logical programming. If it kills everyone, it's a logical consequence of its programming. Maybe you defeat the computer by telling it about a paradox.
Unlike rule-based systems, statistical machine learning "AI" isn't particularly logical; it just follows patterns in its training data.
@georgetakei If it's trained on output from humans, it may instead suffer from very human flaws. Or some imperfect simulacrum of the human flaws it's trained to imitate.
Which doesn't really deliver the benefits AI was supposed to offer (being unbiased, logical, infallible, etc.). And it carries a different set of dangers than the ones some science fiction anticipated.