@gerrymcgovern thanks for sharing this; I would like to add that even when a model has been trained on a subject, it will still make things up. No amount of training can guarantee that it always tells the truth. It is not a training problem; it is a property of the model.
@gerrymcgovern I apologise if I seemed to say that training is not important. Of course it is: training a model on grounded truth makes it more likely to generate something truthful. What I should have said is that even training it on truths alone cannot guarantee it will generate something truthful.
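A toy sketch of why this is so (purely illustrative — a two-sentence bigram model standing in for an LLM, with made-up example sentences): even when every training sentence is true, a model that samples word-to-word transitions can recombine them into a falsehood.

```python
import random

# Toy illustration: a bigram (Markov) model trained ONLY on true sentences
# can still generate a false one, because it stores word-to-word
# transition statistics, not whole facts.

corpus = [
    "paris is in france",   # true
    "rome is in italy",     # true
]

# Build the bigram transition table from the (entirely true) corpus.
transitions = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start, max_len=6):
    """Sample a sentence by following random bigram transitions."""
    out = [start]
    while out[-1] in transitions and len(out) < max_len:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

# After "paris is", the word "in" can be followed by either "france"
# or "italy", so the model can emit "paris is in italy" -- a falsehood
# assembled entirely out of truths.
for _ in range(5):
    print(generate("paris"))
```

The model never saw a false sentence, yet it can produce one; scaling the training data changes the probabilities, not the underlying property.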