@alyssa @feoh to me, the larger UX threat is the knowing misrepresentation of LLMs as expert systems for every use case. I do see the use-case @feoh is talking about, and I've given it a try a few times at the encouragement of others. It's… fine. But I agree that the overall effect of these systems is corrosive on trust, because as you say, it only takes one such failure to cast a shadow on everything else, even the stuff that isn't LLM output.
@SnoopJ @alyssa I don't wish to argue, but let me give you a very concrete example:
"Write pytest unit tests for this code".
It spews out a page full of code, including all the necessary boilerplate for test setup, database setup, etc. etc.
I then take that and add the higher value tests that the LLM doesn't write.
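To make the workflow concrete, here is a minimal sketch of what that division of labor can look like. The `slugify` function and its tests are illustrative stand-ins, not code from the thread: the first two tests are the obvious happy-path boilerplate an LLM typically generates, and the last is the kind of higher-value edge case a human adds afterward.

```python
def slugify(title):
    """Turn a title into a URL slug (hypothetical stand-in function)."""
    return "-".join(title.lower().split())

# -- LLM-style boilerplate tests: obvious happy paths --
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_single_word():
    assert slugify("Python") == "python"

# -- Higher-value test added by hand: an edge case the boilerplate misses --
def test_slugify_collapses_extra_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"
```

Run with `pytest`; the generated tests give you scaffolding and coverage of the easy cases, while the human-written test probes behavior the model didn't think to check.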
For another example, I am a bit of a windbag. I take a block of business prose, pass it to the LLM, and say "Rewrite this for conciseness and professional tone."
If you *know English*, you can validate that the generated prose conveys your intent, and if you care, you can even use other tools to check its grammatical correctness.