@jonny LLMs can really help when you're trying to get up to speed with a particular interface, so long as you're aware that they lie to you. If you can spot the lies and point them out, then sometimes they might even generate usable code. The problem comes as soon as you start trying to do something a little bit unusual, and the LLM steps up the pace from "bullshit" to "delusional fantasy".

Absolutely agree that the real problem is the dishonest presentation of results.