@bhawthorne I hear you and respect your choice. For me personally, LLMs have already passed the utility threshold: even when I know that X out of Y answers are plain wrong or even potentially dangerous, they *still* have enough value to keep using, as long as I go in with eyes open. And my assumption - a shaky one, perhaps - is that the tech will improve, along with the hardware that runs it.
@andrewgretton My problem with LLMs is that they are trained to give confident answers that imply correctness even when they are dangerously wrong, so I have no way of knowing whether a given response is right. I’ve spent 60 years learning how to ask and answer questions to further knowledge. LLMs make that quest harder, not easier.