@thetaphi @timrichards @plexus LLMs do show signs of "intentional" deceiving, but again it's merely the absurdly complex probability machine doing its thing - and sometimes it's so off it spills the beans in the very same sentence. The chance of random bullshit is the same as with supposedly fact-driven requests / answers.

Indeed, since LLMs can't comprehend anything, it would be nuts to create manuals with them. They are only good for tasks already described by humans ad nauseam.