@alyssa LLMs are an excellent way to get quick answers to some problems and, if you're open to it, to learn a thing or two you hadn't considered. That's useful. But taking any LLM output at face value without double-checking is foolish and naive.
@aris
They're not actually useful for that either. They *look* useful for it, but they're just as garbage at that as at every other task beyond "stringing together reasonable-seeming English text."