Andrew Gretton

@bhawthorne I hear you and respect your choice. For me personally, LLMs have passed the utility threshold already such that even when I know that X out of Y answers are plain wrong or even potentially dangerous, it *still* has sufficient value to continue using, as long as I go in eyes open. And my assumption - a shaky one, perhaps - is that the tech will improve, along with the hardware that runs it.

Brian Hawthorne

@andrewgretton My problem with LLMs is that they are trained to provide confident answers that imply they are correct, even when they are dangerously wrong, so I have no way of knowing whether a given response is correct. I’ve spent 60 years learning how to ask and answer questions to further knowledge. LLMs make that quest harder, not easier.

Kevin P. Fleming

@bhawthorne @andrewgretton And if you know that "X of Y" answers are wrong or even dangerous, what are you going to do with each answer you get? Assume it's not part of 'X' and YOLO?

Otherwise you have to do the research to find out whether the answer is in 'X', which means you've done more work overall than if you'd just not used the LLM in the first place.
