Dana Fried

The existential risk is that the incredible repository of nearly all human knowledge that is the internet will be flooded with so much LLM-generated dreck that locating reliable information will become effectively impossible (alongside scientific journals, which are also suffering incredibly under the weight of ML spam).

The existential risk is that nobody will be able to trust a photo or video of anything because the vast majority of media will be fabricated.

Dana Fried

The existential risk posed by AI is that we as a species will no longer be able to transmit and build on generational knowledge, which is the primary thing that has allowed human society to advance since the end of the last ice age.

Infoseepage #StopGazaGenocide

@tess LLMs are fundamentally gibberish machines, confidently spouting plausible-sounding nonsense in a way that human beings interpret as information. It is knowledge pollution, and it will only get worse over time as the output of the tailpipe gets fed back into the inputs of the engines.

Extinction Studies

@Infoseepage @tess

So are TV news readers. Advertising. Propaganda. Psyops. Cults of personality. Fame. For-profit entertainment. Distraction machines at best; pushes to become couch potatoes.

altruios phasma

@Infoseepage @tess

Fundamental gibberish machines…

Tell me you don’t understand LLMs without telling me you don’t understand LLMs.
We’ve long surpassed Markov chains: those are probably closer to your mental model of AI.

Yep: not gibberish, but nonsense. Not sound, but reasonable-ish output.

Diction matters. Use the right words :) Nonsense machines LLMs are. Gibberish machines they are not.
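To make the "Markov chain" mental model concrete: a minimal word-level Markov text generator (a hypothetical sketch for illustration, not any particular system) chooses each next word by looking only at the single current word, with no longer-range context at all — which is exactly what transformer-based LLMs are not limited to.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Random walk: each next word depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Because the state is a single word, such a generator forgets everything said more than one word ago; the contrast with an LLM's long context window is the point the post is making.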

altruios phasma

@InkySchwartz @Infoseepage @tess

So does my dad.

Another thing it copied from us.

Humans are just fancy copy machines. There is no innovation built in a vacuum; it’s all combinatorial selection of previous states.

LLMs are closest to “the explainer” mode/state people have (research split-brain procedures for more info), but we have a chorus inside us. LLMs need partner AI systems (to be developed) that model some state equilibrium between the AI and the outside world…

Nonya Bidniss 🥥🌴

@Infoseepage Gibberish machines contributing massively to global warming and loss of fresh water. Knowledge pollution and environmental pollution all in one package. @tess

tuban_muzuru

@Infoseepage @tess

LLMs are not answer machines - and quit acting as if they're sposta be.

Repeat after me: an LLM cannot reason.

If you want correct answers to questions, you will need to bolt on a specialty neural net.

tuban_muzuru

@Infoseepage @tess

Do you want to see a brain bustin' answer machine? Wolfram is currently way out front on that.

Infoseepage #StopGazaGenocide

@tuban_muzuru @tess oh, I know that LLMs aren't answer machines. The problem is most people don't, and they treat them as such, and big tech is pushing them into those roles. Lots of "just answer" tech-support sites are using them for content generation, and if you pose plain-language questions to search engines, the result is increasingly likely to be generated by an LLM, or you'll get organic results from answer sites attempting to monetize clicks.

tuban_muzuru

@Infoseepage @tess

LLMs are not good for exact answers requiring reason - and the ignorance begins with people lacking any philosophical background to even define why LLMs are incapable of reason.

Hen Gymro Heb Wlad

@tuban_muzuru @Infoseepage @tess The army of techbro bullshit merchants hyping their products as "artificial intelligence" probably doesn't help to dispel the widespread illusion that there is a little person inside the bullshit engine that understands what you're asking it and trying to provide an accurate and truthful response.

Pareidolia-as-a-service.

tuban_muzuru

@hengymrohebwlad @Infoseepage @tess

Meanwhile, the truly fascinating aspects of what can be done are passed over because these AI Chicken Littles lack the fundamental vocabulary to grasp this stuff.

Pareidolia-as-a-service. Heh ! You just made my day.

AndyDearden

@Infoseepage @tess "plausible gibberish" - isn't that a key ingredient in the advertising mix? Is it any wonder that these corporations are pushing this tech?

Oblomov

@tess I'm moderately optimistic in this: we'll still have pockets of “resistance” (as in, humans who keep sharing their direct knowledge and experience), so the chain won't be broken, but it will be more restricted and harder to find. Not a great outlook, but still better than nothing. And yes, this *will* slow things down, but a possible silver lining is that it will give humans time to better adapt to the changes, at least those lucky enough to orbit around those pockets.

Kevin Karhan :verified:

@tess what if I told you that's exactly the desired outcome?

awoodland

@tess I've been trying to popularise the term "peak knowledge" to describe this problem.

Jon Ramos

@tess I would argue it's a tool like any other: if abused, it will likely turn us into smooth-brained consumer droids, but social media is kinda on that already. Since it's been available to me, the majority of my use case has been expanding my knowledge and research. It's been a great tool.

I did also create some AI generated photos of puppies but who hasn't.

Snail

@cienmilojos @tess ...you... really shouldn't be using LLMs for research, unless you mean you're researching the LLMs. They do not give you correct information. They generate plausible-sounding information, which may be true more or less by accident, but is equally or more likely to be false. The more generalized the usage of the "AI", and the more specific or involved the answer, the less likely it is to be accurate about anything.

Graydon

@tess I think this outcome is a lot more of an objective than a risk.

The reliable income streams are those where you can charge people money to live. The net takes longer to enclose than housing or medicine or education, but here we are. Task-specific curated knowledge for more than you can afford.

Em :anarchistflagblack:

@tess maybe a butlerian Jihad could fix that /jk

rob los ricos

@tess

the internet has been this way to me for around 7 years now.

google is a gateway to misinformation.

Karl D

Do we have enough faith in the human spirit that the noise of AI becomes so damaging we return to the analogue phenomena of sitting with each other, in a room, in a forest, at peace?

We can find trust with flesh and bone. The black mirror can be broken when we let go of its darkness.

osfa_2030

@tess Do you think that's already beginning to happen? I would say so.

BuckRogers1965

@tess

I think we still have all the old data.

Kristoffer Lawson

@tess we already kind of see what the effect will be with product information. Try to find info about a product? Almost impossible, due to the Internet being full of marketing crap about it. Even if you search for reviews, most will be generated or biased (many web shops remove any reviews under 5 stars).

I end up searching for stuff in Finnish just because the signal-to-noise ratio is much better. Spammers don’t bother as much with an obscure language.

Nini

@tess Might be the end goal: flood the information sphere with so much misinfo that nothing is trusted. Get that going, couple it with wildly striated social classes based on wealth, and it becomes a grim future as depicted in much dystopian media, because we've been here before. An underclass barely surviving, undereducated and actively being poisoned by the wealthy living far from the squalor. The infomancers, those with real facts, become the powerful, and guess who they are? The fuckin' techbros.

toerror

@tess I've thought for a while that a feature of cameras in the future might be some sort of unforgeable optical signature that functions like a physical digital sig of the image / camera combo. Not sure how that would work in practice, but I imagine it's something other people are thinking about.
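The capture-signing idea above can be sketched roughly as follows. This is a hypothetical illustration, not an actual camera protocol: real provenance schemes (e.g. C2PA) use asymmetric signatures held in secure hardware so verifiers never see the secret; the symmetric HMAC here is only a stdlib stand-in for "a key the camera holds".

```python
import hashlib
import hmac

# Hypothetical per-device secret, imagined as burned into the camera's secure element.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_capture(pixel_bytes: bytes) -> str:
    """Camera-side: bind a signature to the exact pixel data at capture time."""
    return hmac.new(DEVICE_KEY, pixel_bytes, hashlib.sha256).hexdigest()

def verify_capture(pixel_bytes: bytes, signature: str) -> bool:
    """Verifier-side: recompute and compare; any change to the pixels breaks the match."""
    expected = hmac.new(DEVICE_KEY, pixel_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The hard parts the post alludes to are exactly what this sketch glosses over: keeping the key unextractable, surviving legitimate re-encoding, and distributing trust in the verifying side.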

CyberFrog

@toerror@mastodon.gamedev.place @tess@mastodon.social there is a business consortium working on a system sort of like the inverse of this already ( https://c2pa.org/ ), which would be used to tag AI content so people know it is machine generated.

I believe it's currently in testing, but personally I mostly ignore it because it has several flaws that make me kind of laugh at the idea of it ever being used in the real world, one of them being that you can just strip the signature and pretend the image is still fine, lol.
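The stripping flaw follows from where such manifests live: provenance carried alongside the pixels, rather than derived from them, can be deleted without the rendered image changing at all. A minimal sketch (the asset layout and key names here are hypothetical, not the actual C2PA container format):

```python
import hashlib

def strip_provenance(asset: dict) -> dict:
    """Re-publish the asset without its provenance manifest."""
    return {k: v for k, v in asset.items() if k != "c2pa_manifest"}

# Hypothetical asset: image bytes plus an attached provenance manifest.
asset = {
    "pixels": b"\x89PNG...fake image bytes",
    "c2pa_manifest": {"generator": "some-model", "signed": True},
}
stripped = strip_provenance(asset)

# The pixel data viewers actually render is byte-identical after stripping.
pixels_unchanged = (
    hashlib.sha256(asset["pixels"]).digest()
    == hashlib.sha256(stripped["pixels"]).digest()
)
```

This is why opt-in tagging only identifies content whose publisher cooperates; it cannot prove the absence of machine generation.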

That said, California is currently voting to require that all AI-generated content be tagged with this metadata and displayed to users with relevant info about it being machine generated.

https://techcrunch.com/2024/08/26/openai-adobe-microsoft-support-california-bill-requiring-watermarks-on-ai-content/

Lord Doktor Krypt3ia

@tess Try using Google as a search engine now, it’s already happened.

Jerry Orr

@tess someone said recently that pre-LLM era content will have a greater value because we *know* it wasn’t LLM generated, analogous to pre-nuclear era steel

(I wish I could remember where I saw this, because I think about it a lot)

Misha Van Mollusq 🏳️‍⚧️ ♀

@tess Butlerian Jihad time: Thou shalt not make a machine in the image of a human mind.
Eventually someone is going to come up with a worm that attacks only LLMs.
Could probably do that by feeding it the collected works of William S. Burroughs.

Louis Ingenthron

@tess
> "the internet will be flooded with so much [dreck] that locating reliable information will become effectively impossible"

Pretty sure advertisers already took care of that one long ago.
