Dana Fried

Just a reminder that the "existential risk" from AI is not that somehow we'll make Skynet or the computers from The Matrix.

Nobody is going to give a large language model the nuclear codes.

The existential risk is to marginalized people who will be silently refused jobs or health care or parole, or who will be targeted by law enforcement or military action because of an ML model's inherent bias, and that because these models are black boxes, it will be nearly impossible for victims to appeal.

Dana Fried

The existential risk is that the incredible repository of nearly all human knowledge that is the internet will be flooded with so much LLM-generated dreck that locating reliable information will become effectively impossible (alongside scientific journals, which are also suffering incredibly under the weight of ML spam).

The existential risk is that nobody will be able to trust a photo or video of anything because the vast majority of media will be fabricated.

Dana Fried

The existential risk posed by AI is that we as a species will no longer be able to transmit and build on generational knowledge, which is the primary thing that has allowed human society to advance since the end of the last ice age.

Infoseepage #StopGazaGenocide

@tess LLMs are fundamentally gibberish machines, confidently spouting plausible-sounding nonsense in a way that human beings interpret as information. It is knowledge pollution, and it will only get worse over time as the output of the tailpipe gets fed back into the inputs of the engines.
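
A minimal sketch, in Python, of the feedback loop described above (illustrative only, not from any poster): a toy "model", here just a fitted Gaussian, is repeatedly re-trained on samples of its own output. Over the generations the fit drifts and its spread tends to shrink, a crude analogue of the "model collapse" observed when language models are trained on model-generated text.

```python
import random
import statistics

# Generation 0: the model is a perfect fit to the "real data".
mean, stdev = 0.0, 1.0

for generation in range(1, 41):
    # Sample synthetic "content" from the current model...
    samples = [random.gauss(mean, stdev) for _ in range(25)]
    # ...then fit the next generation on nothing but that output.
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mean:+.3f} stdev={stdev:.3f}")
```

No new information ever enters the loop, so estimation noise compounds: the mean wanders and the tails of the distribution are progressively lost.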

Extinction Studies

@Infoseepage @tess

So are TV news readers. Advertising. Propaganda. Psyops. Cults of personality. Fame. For-profit entertainment. Distraction machines at best, pushing us to become couch potatoes.

altruios phasma

@Infoseepage @tess

Fundamental gibberish machines…

Tell me you don’t understand LLMs without telling me you don’t understand LLMs.
We’ve long surpassed Markov chains: those are probably closer to your mental model of AI.

Yep: not gibberish, but nonsense. Not sound, but reasonable-ish output.

Diction matters. Use the right words :) Nonsense machines LLMs are. Gibberish machines they are not.
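
For anyone who has never seen the pre-LLM "gibberish machine" being contrasted here, below is a minimal word-level Markov chain text generator (a hypothetical toy with a made-up corpus, not anyone's production system). It knows only which word followed which; there is no context beyond the single previous word.

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record, for each word, every word that ever followed it."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    """Walk the chain, picking a random recorded follower each step."""
    word, out = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = ("the model spouts nonsense and the reader believes "
          "the model because the model sounds plausible")
print(generate(train(corpus), "the"))
```

An LLM, by contrast, conditions each prediction on thousands of tokens of context, which is why its output reads as fluent nonsense rather than one-word-at-a-time gibberish.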

Nonya Bidniss 🥥🌴

@Infoseepage Gibberish machines contributing massively to global warming and loss of fresh water. Knowledge pollution and environmental pollution all in one package. @tess

tuban_muzuru

@Infoseepage @tess

LLMs are not answer machines - and quit acting as if they're sposta be.

Repeat after me: an LLM cannot reason.

If you want correct answers to questions, you will need to bolt on a specialty neural net.

tuban_muzuru

@Infoseepage @tess

Do you want to see a brain bustin' answer machine? Wolfram is currently way out front on that.

Infoseepage #StopGazaGenocide

@tuban_muzuru @tess oh, I know that LLMs aren't answer machines. The problem is most people don't, and treat them as such, and big tech is pushing them into those roles. Lots of "just answer" tech support sites are using them for content generation, and if you pose plain-language questions to search engines, the result is increasingly likely to be generated by an LLM, or you'll get organic results from answer sites attempting to monetize clicks.

tuban_muzuru

@Infoseepage @tess

LLMs are not good for exact answers requiring reason - and the ignorance begins with people lacking any philosophical background to even define why LLMs are incapable of reason.

AndyDearden

@Infoseepage @tess "plausible gibberish" - isn't that a key ingredient in the advertising mix? Is it any wonder that these corporations are pushing this tech?

Oblomov

@tess I'm moderately optimistic in this that we'll still have pockets of “resistance” (as in: humans who keep sharing their direct knowledge and experience), so the chain won't be broken, but it will be more restricted and harder to find. Not a great outlook, but still better than nothing. And yes, this *will* slow things down, but a possible silver lining is that it will give humans time to better adapt to the changes, at least those lucky enough to orbit those pockets.

Kevin Karhan :verified:

@tess what if I told you that's exactly the desired outcome?

awoodland

@tess I've been trying to popularise the term "peak knowledge" to describe this problem.

Jon Ramos

@tess I would argue it's a tool like any other: if abused, it will likely turn us into smooth-brained consumer droids, but social media is kinda on that already. Since it's been available to me, the majority of my use case has been expanding my knowledge and research. It's been a great tool.

I did also create some AI generated photos of puppies but who hasn't.

Graydon

@tess I think this outcome is a lot more of an objective than a risk.

The reliable income streams are those where you can charge people money to live. The net takes longer to enclose than housing or medicine or education, but here we are. Task-specific curated knowledge for more than you can afford.

Em :anarchistflagblack:

@tess maybe a butlerian Jihad could fix that /jk

rob los ricos

@tess

the internet has been this way for me for around 7 years now.

google is a gateway to misinformation.

Karl D

Do we have enough faith in the human spirit that the noise of AI will become so damaging that we return to the analog phenomena of sitting with each other, in a room, in a forest, at peace?

We can find trust with flesh and bone. The black mirror can be broken when we let go of its darkness.

osfa_2030

@tess Do you think that's already beginning to happen? I would say so.

BuckRogers1965

@tess

I think we still have all the old data.

Kristoffer Lawson

@tess we already kind of see what the effect will be with product information. Try to find info about a product? Almost impossible, due to the Internet being just full of marketing crap about it. Even if you search for reviews, most will be generated or biased (many web shops remove any reviews under 5 stars).

I end up searching for stuff in Finnish just because the signal to noise ratio is much better. Spammers don’t bother as much with an obscure language.

Nini

@tess Might be the end goal: flood the information sphere with so much misinfo that nothing is trusted. Get that going, couple it with wildly striated social classes based on wealth, and it becomes a grim future as depicted in much dystopian media, because we've been here before. An underclass barely surviving, undereducated and actively being poisoned by the wealthy living far from the squalor. The infomancers, those with real facts, become the powerful, and guess who they are? The fuckin' techbros.

toerror

@tess I've thought for a while that a feature of cameras in the future might be some sort of unforgeable optical signature that functions like a physical digital sig of the image / camera combo. Not sure how that would work in practice, but I imagine it's something other people are thinking about.

CyberFrog

@toerror@mastodon.gamedev.place @tess@mastodon.social there is a business consortium group working to create a system sort of like the inverse of this already ( https://c2pa.org/ ) which would be used to tag AI content so people know it is machine generated

I believe it's currently in testing, but personally I mostly ignore it because it has several flaws that make me kind of laugh at the idea of it ever being used in the real world, one of the flaws being that you can just strip the signature and pretend the image is fine still lol

That said, California is currently voting to require all AI-generated content to be tagged with this metadata and displayed to users with relevant info about it being machine generated

https://techcrunch.com/2024/08/26/openai-adobe-microsoft-support-california-bill-requiring-watermarks-on-ai-content/
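
A toy sketch of the flaw described above (this is not the real C2PA format; the names and key are made up): when provenance lives in removable metadata, stripping it leaves a file that merely verifies as "no provenance info", not as "tampered", so the scheme only catches honest actors.

```python
import hashlib
import hmac
import json

KEY = b"illustrative-signer-key"  # stand-in for a real signing key

def tag(image: bytes, generator: str) -> dict:
    """Attach a signed manifest saying which tool generated the image."""
    manifest = {"generator": generator,
                "sha256": hashlib.sha256(image).hexdigest()}
    sig = hmac.new(KEY, json.dumps(manifest).encode(), "sha256").hexdigest()
    return {"pixels": image, "manifest": manifest, "sig": sig}

def check(asset: dict) -> str:
    if "manifest" not in asset:
        return "no provenance info"  # a stripped file fails open
    expected = hmac.new(KEY, json.dumps(asset["manifest"]).encode(),
                        "sha256").hexdigest()
    return ("valid manifest"
            if hmac.compare_digest(expected, asset["sig"])
            else "tampered")

asset = tag(b"<jpeg bytes>", "some-image-model")
print(check(asset))                       # -> valid manifest
stripped = {"pixels": asset["pixels"]}    # attacker deletes the metadata
print(check(stripped))                    # -> no provenance info
```

The inverse scheme toerror suggests (cameras signing "this is real") avoids the fail-open problem but has its own challenge: the signing key has to live in the camera hardware, within reach of a determined attacker.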

Lord Doktor Krypt3ia

@tess Try using Google as a search engine now, it’s already happened.

Jerry Orr

@tess someone said recently that pre-LLM era content will have a greater value because we *know* it wasn’t LLM generated, analogous to pre-nuclear era steel

(I wish I could remember where I saw this, because I think about it a lot)

Misha Van Mollusq 🏳️‍⚧️ ♀

@tess Butlerian Jihad Time: Thou shalt not make a Machine in the Image of a Human Mind.
Eventually someone is going to come up with a Worm that attacks only LLM models .
Could probably do that by feeding it the collected works of William S. Burroughs

Miriam "Scary Username" Robern

@tess I feel like somewhere the monkey's paw curled down one digit when you declared "Nobody is going to give a large language model the nuclear codes."

Dana Fried

@miriamrobern literally just pull the plug if it gets unruly. It's not like the thing can power itself.

Morgan

@tess @miriamrobern if anything, finding out that LLMs had been given the nuclear codes might save us from everything else, because they'd immediately get shut down if anyone with any amount of power wishes to continue living! I'd rather an existential threat that affects everyone than one that only affects those of us who can do nothing to stop it, because others won't act unless it affects them too.

sabik

@raphaelmorgan @tess @miriamrobern
Didn't they have that line in WarGames (1983), that the computer can't launch anything unless they're at DEFCON 1?

Xandra Granade 🏳️‍⚧️

@tess You're likely right about the nuclear codes bit, but I'm not sure I have enough faith in humanity writ large to rule it out entirely...

Nini

@xgranade You might be worried about humanity, but I'd imagine humanity would be smart. It's the venality of 1-5% of humans you need to keep an eye on, not the other 95-99%; watching them instead is exactly what that 1-5% want you to do.

Craig Nicol

@tess given the variety of news stories about how stupid the nuclear codes were, those are probably in its training set

mcc

@tess As a minor quibble, I do want to suggest an alternate scenario (to an LLM getting the nuclear codes) which may be more likely: What if a contractor uses an LLM to fill in code they're writing on some random-ass military contract, and this code gets incorporated into the UI for the humans-with-the-nuclear-codes to launch nukes or the radar system those humans use to decide whether to launch, and the LLM introduces catastrophic bugs because it's a random number generator with a human accent

mcc

@tess Like, I do think the probability*cost for those OTHER things you mentioned is significantly *greater* than for the "doomsday via incompetence" scenario I outline, but

Neia

@mcc@mastodon.social @tess@mastodon.social for people who are more swayed by major, low-probability disasters rather than small, frequent, high-probability disasters, like with plane vs car risk levels, yeah

Ryan Castellucci :nonbinary_flag:

@mcc @tess "random number generator with a human accent" is my new favorite term to describe LLMs

sabik

@mcc @tess
Like that time the system conflated information from three different aircraft and then they shot down a civilian airliner?

(Flight 655)

Paul Shryock

@mcc @tess this seems pretty much guaranteed to happen at some point with how often "engineers" would rather copy/paste slop than do engineering, and with how little "engineers" are willing to test their code (do actual engineering).

lachlan but spooky

@tess Cathy O'Neil's 'Weapons of Math Destruction' details how awfully that goes when we put pre-'AI' algorithms in the role of decision maker for things like insurance and bail. It's a terrifying and useful read. It makes me extremely nervous that what we're doing now is the same thing, but with far more people who don't understand how AI works trusting it far more.

William Gunn

@tess I think this is a "Yes, and..." situation. All the pieces of technology necessary to make silent autonomous killer drones have been developed. There are LLMs freely available to every terrorist cell and apocalyptic cult that could make them a lot more successful at creating bioweapons. There are a lot of serious risks in addition to the serious risk of profiling and discrimination.

Dror Bedrack

@tess "Nobody is going to give a large language model the nuclear codes."
You are too optimistic about human nature

Soc-i-eTy

@tess

Yes, and how will the energy be prioritized, for the people, or the AI?


Morgan

@soc_i_ety @tess don't worry we won't have to choose, we can both use as much energy as we need, we'll just keep finding more shit to burn and introducing more carbon into the atmosphere! Don't think too hard about what happens next 😊

Soc-i-eTy

@raphaelmorgan @tess

Sarcasm. How do I like thee? Let me count the ways. Endlessly.

Hang in there! I am remaining hopeful and rooting for all the other animals. Humans will become nurturers of nature or nature will take over.

#VoteForDemocracy!


INTENTIONALLY blank

@tess
✨ This ✨

And the idiots who see bad AI fake pictures and can't manage the basic bs-filtering skills to notice they're wrong.

1) AI, ML can be biased.

2) If you don't interrogate your own assumptions regularly, you're screwed.

Those are the dangers. General AI that enslaves humanity is a fairy story.

4TH3I57 EV L0V3R/ / /FL/US

@tess we're all kinda perpetual spies now..?
and .. vote with your wallet use an EV f savdis and pvtin

Kyle Memoir

@tess

Accenture built an ‘eligibility engine’ for a project client of mine that neither SA clients nor their caseworkers could figure out.

A whole new class of forensic eligibility review worker was required (we called them ‘senior business analysts’ to keep pay expectations down) to sort out the mess in any but the simplest cases of over- or underpayments.

That was decades ago.

Imagine the absolute mess when 100% vs. 7-10% of pop. is entangled.

BrilliantIdiot

@tess

Well said!

I have seen a promising trend though. A surprising number of jobs I've applied to this year had a checkbox for "Don't use AI to review this application." No idea if that's actually honored or not, but it seemed like a good sign.

Darren du Nord

@tess Is pre-2020 information inherently more trustworthy?
Can I still buy an Encyclopedia Britannica?

Asil Igarl

@tess I remember Cory Doctorow wrote a pretty good article on this, so let me paraphrase a quote I really like: advancing LLMs will not create machines you are capable of speaking to, for the same reason that breeding horses does not result in locomotives. LLMs are not able to process concepts; they are bs-ing the most likely word to come next. So far, AI has mostly damaged the public by draining us of energy, making us consider outrageous, far-away-at-best scenarios.

As Photoshop came into existence we did not suddenly start distrusting photographs, and as AE came in we kept watching videos. AI might make faking material more accessible, and it might even become plausible one day, but verification systems are in place for a reason.

We should be focusing our efforts on improving labour conditions so that dumb people who believe the hype don't put us out of a job, instead of considering the scenario that (gen)AIs might create material comparable to that of human origin.

Andrew Burgess

@tess Timely article this week in #NatureReviews on Limiting bias in AI models for improved and equitable cancer care.

“AI applications must address and avoid known racial and gender biases to improve health care for all.”

nature.com/articles/s41568-024

Professor Hank 🤘✊ ☑ 🍻🖖

@tess Human civilization will collapse due to climate change by the end of the century, if not by 2050. I'm not worried about A.I. at all.

CyberFrog

@professorhank@sfba.social @tess@mastodon.social honestly, rich nations will have the resources to mostly mitigate climate change into the 2050s; in reality the poor nations will collapse and their resources will be stolen to prop up the existing global powers instead

There is a lot of good research about this, especially around climate refugees lol

interrobang0

@tess It's also worth noting that the horrors of capitalism, war, and politics are all limited by human ingenuity. If AI gets smarter than humans (and there's no reason it can't), those limits will be removed. That's what's happening with fake photos and videos already.

CyberFrog

@interrobang0@mastodon.social @tess@mastodon.social I think you severely overestimate the intelligence of existing AI systems, although the output of image and video models is impressive, fundamentally the models making them are incredibly dumb, at best those systems can be thought of as increasing existing human productivity (they require a human to operate them after all), which is honestly kinda just how all previous technology has worked

Sometimes it just takes time for society to catch up with tech, which is the era we're in right now, when AI can understand concepts and instructions correctly without humans constantly nudging it to the "correct" answer I'll be more impressed, but so far our existing systems can't even manage that lol

simplism

@tess the latest work to regulate AI development is regulatory capture, plain and simple. People like Musk are worried that AI could endanger their power, and the mitigation is to control it themselves.

GhostOnTheHalfShell

@tess

From what I have heard of Nate Hagens' interview with Schmachtenberger, the danger is the moral license to exterminate all life to build a silicon god. Their reasoning is psychotic, but they can burn the world to a cinder pursuing it.

floydgump

@tess Anything these greed tyrants corrupt will be used for evil means.

Alf No Problem

@tess The danger of Robot Overlords is real and it is already here. Applicant Tracking Systems (ATS) use freaking AI to analyze and sift through the thousands of resumes submitted for one job. Your resume is never seen by HR or the hiring manager, and you get rejected by "the system" without a human ever judging whether you are a good fit for the position. Wifey has been jobless for a year now and applying every day with zeal. It is insane!

James Brooke

@tess AI's main function seems to be crapifying search everywhere.

sabik

@tess @3TomatoesShort
IIRC there was already a case where automated translation introduced inconsistencies into a refugee's statement, and then those inconsistencies were used (by a human judge) to deny her asylum?

(Some sentences were mistranslated in the plural, but she'd also stated that she was alone for that part of her journey)

Jan Sandbrink

@tess

> Nobody is going to give a large language model the nuclear codes

I love your optimism about this part, because honestly I am not too sure about that.

Also wondering whether AI will manage to create NEW marginalized groups based on random criteria that no one understands...

Fintanz

@tess Humans are also biased and flawed when making decisions, but they also have self-interest; hence the scenarios you describe already happen. Deferring the decisions to a computer removes the guilt for the decision maker, so it will happen more frequently.

Lewis Cowles

@tess
> Nobody is going to give a large language model the nuclear codes.

We didn't think folks would put major government infrastructure in the cloud at one point.

Yes to your post, but also... I'm not so sure people are as smart or competent as we're giving them credit for here.

CyberFrog

@lewiscowles1986@phpc.social @tess@mastodon.social the military already has a partnership with OpenAI that connects to things like drone attack systems

they will absolutely give LLMs the nuclear codes lol

Wouter Hindriks

@tess

Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits

mastodon.social/@lizzard/11304

Patrick Leavy

@tess AI also enables mass surveillance on a global scale, in real time.

Nini

@tess The risk has never been a self-aware malicious AI, it's AI being applied towards malicious acts with no intervention. The opaque bureaucracy of "computer says no" destroying lives and automating institutional bias to a murderous efficiency.

Tony Vladusich

@tess

Dana, are you ... are you The Oracle?

Jason Robinson 🐍🍻

@tess@mastodon.social Yes, exactly.

Also, there is the existential risk of the massive increase in energy usage from AI, something our planet just can't afford at this moment.

Simon Brooke

@tess

"Nobody is going to give a large language model the nuclear codes."

Oh, how naive we were...

Adam Jacobs

@tess That is among the existential risks of AI. The other really big risk from AI that I can see is its mahoosive carbon footprint.

Alex Rock

@tess What's interesting is that humans already do exactly what AIs do, but we can condemn humans when they make a mistake. AIs will never be condemned, nor will their creators, and that's just as much a danger.

Majnūn

@tess
One of the reasons I love Mastodon:
Not only is this available but it has been recognized (boosted) by nearly 700 readers.

penryu

@tess @gd The core point is valid. No argument there.

But the assertion that "no one will give an AI the launch codes" is met with the same cynicism as "no one will commit auth tokens to the git repo."

Hen Gymro Heb Wlad

@tess It's like outsourcing. If you outsource a skilled process and fire the people who used to do it, you no longer have any insight into or ability to modify/fix that process. You rely on an external supplier to deliver this process reliably and efficiently, and your business is screwed if they don't.

Same principle applies whether it's an offshore IT dept providing your bank's critical infrastructure, or some LLM bullshit generator making "expert" decisions on your behalf.

Leendaal

@tess however I would not be surprised if AI somehow finds the nuclear codes in plain text while randomly scraping the net.

Dave "Wear A Goddamn Mask" Cochran :donor:

@tess oh it gets worse than that - 3-5 years after that shit goes in, all the studies are going to start reinforcing its bias. "It's crazy how much more likely [marginalized group] people are to [thing the system literally is forcing them to do] than anyone else! we should study the root causes!"

and then comes the eugenics and we know how that song goes...

Bernd Paysan R.I.P Natenom 🕯️

@tess Same as now, when people decide. These people deliver the training data for AI. Garbage in, garbage out.

Since we now know that AI degenerates when fed its own output as training material, we have to assume that people also become more stupid the more material from other average people they read (i.e. social media makes people stupid).
