@andrewgretton My problem with LLMs is that they are trained to provide confident answers that imply they are correct, even when they are dangerously wrong, so I have no way of knowing whether a given response is correct. I’ve spent 60 years learning how to ask and answer questions to further knowledge. LLMs make that quest harder, not easier.
@bhawthorne @andrewgretton And if you know that X out of Y answers are wrong or even dangerous, what are you going to do with each answer you get? Assume it's not one of the X and YOLO?
Otherwise you have to do the research to find out whether the answer is one of the X, which means you've done more work overall than if you'd just not used the LLM in the first place.