"You might ask how AI generates something so completely bananas. It’s because AI can’t tell the difference between true and false. Instead, a complex computer program plays probabilistic language guessing games, betting on what words are most likely to follow other words. If an AI program hasn’t been trained on a subject — unusual last names, for instance — it can conjure up authoritative-seeming but false verbiage."
@gerrymcgovern Thanks for sharing this. I would add that even when it has been trained on a subject, it will still make things up. No amount of training will guarantee that it always tells the truth: it is not a training problem, it is a property of the model.
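To make the "guessing game" concrete, here is a toy sketch (my own illustration, nothing like a production model): a bigram sampler that, given a word, picks the next one in proportion to how often it followed that word in a tiny corpus. It has no notion of true or false, only of what tends to follow what, so it can emit fluent sentences the corpus never contained.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; "." acts as a sentence boundary token.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng):
    """Sample a successor word in proportion to observed bigram counts."""
    options = follows[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short continuation starting from "the".
rng = random.Random(0)
out = ["the"]
for _ in range(6):
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

The output is locally plausible word-by-word, yet the sampler may well produce a sentence that was never in the corpus and asserts something false, which is the point: plausibility, not truth, is what is being optimized.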