Dana Fried

The existential risk posed by AI is that we as a species will no longer be able to transmit and build on generational knowledge, which is the primary thing that has allowed human society to advance since the end of the last ice age.

25 comments
Infoseepage #StopGazaGenocide

@tess LLMs are fundamentally gibberish machines, confidently spouting plausible-sounding nonsense in a way that human beings interpret as information. It is knowledge pollution, and it will only get worse over time as the output of the tailpipe gets fed back into the inputs of the engines.

Extinction Studies

@Infoseepage @tess

So are TV news readers. Advertising. Propaganda. Psyops. Cults of personality. Fame. For-profit entertainment. Distraction machines at best, pushing us to become couch potatoes.

altruios phasma

@Infoseepage @tess

Fundamental gibberish machines…

Tell me you don’t understand LLMs without telling me you don’t understand LLMs.
We’ve long surpassed Markov chains: those are probably closer to your mental model of AI.

Yep: not gibberish, but nonsense. Not sound, but reasonable-ish output.

Diction matters. Use the right words :) Nonsense machines LLMs are. Gibberish machines they are not.

altruios phasma

@InkySchwartz @Infoseepage @tess

So does my dad.

Another thing it copied from us.

Humans are just fancy copy machines. There is no innovation built in a vacuum; it’s all combinatorial selection of previous states.

LLMs are closest to the “explainer” mode/state people have (research split-brain procedures for more info), but we have a chorus inside us. LLMs need partner AI systems (yet to be developed) that model some state of equilibrium between the AI and the outside world…

Nonya Bidniss 🥥🌴

@Infoseepage Gibberish machines contributing massively to global warming and loss of fresh water. Knowledge pollution and environmental pollution all in one package. @tess

tuban_muzuru

@Infoseepage @tess

LLMs are not answer machines - so quit acting as if they're supposed to be.

Repeat after me: an LLM cannot reason.

If you want correct answers to questions, you will need to bolt on a specialty neural net.

tuban_muzuru

@Infoseepage @tess

Do you want to see a brain bustin' answer machine? Wolfram is currently way out front on that.

Infoseepage #StopGazaGenocide

@tuban_muzuru @tess Oh, I know that LLMs aren't answer machines. The problem is that most people don't, and treat them as such, and big tech is pushing them into those roles. Lots of "just answer" tech-support sites are using them for content generation, and if you pose plain-language questions to search engines, the result is increasingly likely to be generated by an LLM, or you'll get organic results from answer sites attempting to monetize clicks.

tuban_muzuru

@Infoseepage @tess

LLMs are not good for exact answers requiring reason - and the ignorance begins with people lacking any philosophical background to even define why LLMs are incapable of reason.

Hen Gymro Heb Wlad

@tuban_muzuru @Infoseepage @tess The army of techbro bullshit merchants hyping their products as "artificial intelligence" probably doesn't help to dispel the widespread illusion that there is a little person inside the bullshit engine that understands what you're asking it and trying to provide an accurate and truthful response.

Pareidolia-as-a-service.

tuban_muzuru

@hengymrohebwlad @Infoseepage @tess

Meanwhile, the truly fascinating aspects of what can be done are passed over because these AI Chicken Littles lack the fundamental vocabulary to grasp this stuff.

Pareidolia-as-a-service. Heh ! You just made my day.

AndyDearden

@Infoseepage @tess "plausible gibberish" - isn't that a key ingredient in the advertising mix? Is it any wonder that these corporations are pushing this tech?

Oblomov

@tess I'm moderately optimistic in this: we'll still have pockets of “resistance” (as in, humans who keep sharing their direct knowledge and experience), so the chain won't be broken, but it will be more restricted and harder to find. Not a great outlook, but still better than nothing. And yes, this *will* slow things down, but a possible silver lining is that it will give humans time to better adapt to the changes, at least those lucky enough to orbit those pockets.

Kevin Karhan :verified:

@tess what if I told you that's exactly the desired outcome?

awoodland

@tess I've been trying to popularise the term "peak knowledge" to describe this problem.

Jon Ramos

@tess I would argue it's a tool like any other: if abused, it will likely turn us into smooth-brained consumer droids, but social media is kind of on that already. Since it's been available to me, the majority of my use has been expanding my knowledge and research. It's been a great tool.

I did also create some AI generated photos of puppies but who hasn't.

Snail

@cienmilojos @tess ...you... really shouldn't be using LLMs for research, unless you mean you're researching the LLMs. They do not give you correct information. They generate plausible-sounding information, which may be true more or less by accident, but which is equally or more likely to be false. The more generalized the usage of the "AI", and the more specific or involved the answer, the less likely it is to be accurate about anything.
