@thomasfuchs @ludicity
I wouldn't say "never"

They're great at "solving" (regurgitating answers to) problems a human has already solved and that got scraped into the training data, but they're useless at problems outside it.
youtu.be/PeSNEXKxarU

So LLMs should be useful for designing software, provided some data annotator in India has already done the work for the specific prompt you're requesting.