ThetaPhi

@timrichards @plexus No, instruction manuals worth reading would require a solid connection to facts, truth and reality. Those are concepts that do not have a place in the stochastic parrots that masquerade as LLMs that masquerade as AIs.

They can't even lie, as that would necessitate knowledge about truth and falsehood, and intent to deceive. The result is bullshit, in the sense of Harry Frankfurt.

Natasha Nox πŸ‡ΊπŸ‡¦πŸ‡΅πŸ‡Έ

@thetaphi @timrichards @plexus LLMs do show signs of "intentional" deception, but again it's merely the absurdly complex probability machine doing its thing - and sometimes it's so off that it spills the beans in the very same sentence. The chance of random bullshit is the same as with supposedly fact-driven requests / answers.

Indeed, since LLMs can't comprehend anything, it would be nuts to create manuals with them. They are only good for tasks already described by humans ad nauseam.
