Lauren Weinstein

We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the *individuals*, now and in the near future, who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then, when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless, because you know and I know that hardly anyone will read those warnings or pay any attention to them whatsoever.

6 comments
C.

@lauren

The warnings are also cold comfort to the loved ones of the guy a vigilante gang beat to death because some generated content somewhere labelled him an unrepentant pedophile living in their neighbourhood.

I don't know what the solution is. Mandate that an indicator of its origin be prominently displayed within the generated text? How could that work?

#LLM #generative

C.

@lauren

While I'm at it, add a requirement that the output include a link to every piece of content the model was trained on for which the company doesn't have a contract/license/release from its creator.

Solves the uncredited, used-without-permission problem.

Lauren Weinstein

@cazabon It's not clear that even including the links would do any good. Hardly anyone will bother clicking them. The result is that GAI is a taker of data that gives little or nothing in return, not even the clicks and views that traditional SERPs generate.

C.

@lauren

My idea is that being forced to include 2 billion links for content they don't have a signed release for would encourage them to train only on material they did, in fact, obtain permission to use as input. 😂

Simon Brooke

@lauren we should worry even more about *individuals* now and in the near future who knowingly and cynically make and market these engines of deceit in order to profit from the chaos and confusion they will cause.

DELETED

@lauren The number of people I know who immediately trust this technology as infallible is terrifying to me.

Last time I used ChatGPT, for instance, I got into an argument with it about astronomy. I'm not an expert, but it's been one of my special interests for my entire four decades of existence. I've forgotten more about astronomy than the average person will ever know.

So when ChatGPT tries to tell me it would be impossible for X astronomical event to occur, when there are historical records and modern observations of thousands of such events that prove me right, I'm going to argue with it and even try to correct it. Unfortunately, it's like trying to correct a gaslighting narcissist, which really upsets me, because these are just large language models. They SHOULD be learning from all inputs, not just the bull.

I really wish they had never called this stuff AI. Everyone thinks Guy from Free Guy is in there somewhere, and I'm sorry, but he's not. The tech isn't remotely advanced enough yet to create even a convincing illusion of sentience, let alone actual sentience.

Some say that day isn't far off, that it could happen in as little as 10-20 years, but I don't think we'll see anything that advanced in any of our lifetimes, no matter how much we think we want it.
