Gerry McGovern

"You might ask how AI generates something so completely bananas. It’s because AI can’t tell the difference between true and false. Instead, a complex computer program plays probabilistic language guessing games, betting on what words are most likely to follow other words. If an AI program hasn’t been trained on a subject — unusual last names, for instance — it can conjure up authoritative-seeming but false verbiage."

kansasreflector.com/2024/06/22
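The quote's point about "probabilistic language guessing games" can be made concrete. Below is a minimal sketch in Python of next-token sampling over a toy bigram table; all words, counts, and function names are invented for illustration and say nothing about how any production model is actually built. It demonstrates only the quote's claim: the sampler optimizes for likelihood, never for truth.

```python
import random

# A toy bigram "language model": for each word, the observed
# frequencies of the word that followed it. These counts are
# invented for illustration only -- real models learn billions
# of parameters over subword tokens, not a handful of words.
BIGRAM_COUNTS = {
    "the": {"capital": 4, "author": 3, "answer": 3},
    "capital": {"of": 9, "letter": 1},
    "of": {"france": 6, "kansas": 4},
    "france": {"is": 10},
    "is": {"paris": 7, "lyon": 3},  # "lyon" is wrong, but statistically plausible
}

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed
    `word` in the (toy) training data. Nothing here checks whether
    the continuation is true, only whether it is likely."""
    followers = BIGRAM_COUNTS[word]
    words = list(followers)
    weights = list(followers.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, length: int = 5) -> str:
    """Chain next-word samples from a starting word."""
    words = [start]
    for _ in range(length):
        current = words[-1]
        if current not in BIGRAM_COUNTS:
            break  # no continuations observed; stop generating
        words.append(next_word(current))
    return " ".join(words)

if __name__ == "__main__":
    for _ in range(5):
        print(generate("the"))
```

Run repeatedly, this toy model sometimes completes "the capital of france is" with "lyon": fluent, confident, and false, purely because "lyon" followed "is" often enough in its made-up training counts. That is the hallucination mechanism in miniature.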

5 comments
J Paul Gibson

@gerrymcgovern Thanks for sharing this. I would like to add that even when it has been trained on a subject, it will still make things up. No amount of training will guarantee that it always tells the truth; it is not a training problem, it is a property of the model.

J Paul Gibson

@gerrymcgovern I apologise if I seemed to say that training is not important. Of course it is: training it on ground truth will make it more likely to generate something truthful. What I should have said is: even training it on truths alone cannot guarantee it will generate something truthful.

Jay

@jpaulgibson @gerrymcgovern I wanted to emphasize this too. #ChatGPT isn't even *designed* to produce factual information. It *happens* to produce factual information some of the time.

DocRekd

@gerrymcgovern Again, my takeaway is to use AI search like normal search: check all the links it provides, and treat the verbal answer as a mere preview of the search results.
