Tim Richards

@plexus LLMs produce - by definition - derivative material. Maybe suitable for instruction manuals and the like, but nothing inspirational.

ThetaPhi

@timrichards @plexus No, instruction manuals worth reading would require a solid connection to facts, truth and reality. Those are concepts that do not have a place in the stochastic parrots that masquerade as LLMs that masquerade as AIs.

They can't even lie, as that would necessitate knowledge about truth and falsehood, and intent to deceive. The result is bullshit, in the sense of Harry Frankfurt.

Natasha Nox 🇺🇦🇵🇸

@thetaphi @timrichards @plexus LLMs do show signs of "intentional" deceiving; however, again it's merely the absurdly complex probability machine doing its thing - and sometimes it's so off it spills the beans in the very same sentence. The chance of random bullshit is the same as with supposedly fact-driven requests / answers.

Indeed, since LLMs can't comprehend anything, it would be nuts to create manuals with them. They are only good for tasks already described by humans ad nauseam.

Hobson Lane

@timrichards
Indeed. It's a statistical model ... of word sequences scraped without permission from Reddit, Wikipedia, Xitter, and popular social media posts.
@plexus
