Emily M. Bender (she/her)

I've seen a lot of awful and ridiculous AI hype in the past two years, but this weekend there was one that briefly took away my ability to even.

I recovered, and wrote up a newsletter post:

buttondown.email/maiht3k/archi

Strypey

@emilymbender
Well of course it's inevitable that we'll all keep outsourcing our thinking to MOLE (Machine Only Learning Emulators). If we didn't, we'd only let our emotions cloud our decisions about letting clouds think for us. In fact, I think it makes sense to outsource the decisions about when to outsource our decisions too.

What could go wrong? : P

strypey.dreamwidth.org/1120.ht

RootWyrm 🇺🇦🏳️‍🌈

@emilymbender the level of sheer ignorance and incompetence required for this crap is literally "people should lose their licenses permanently" level.

See also: spectrum.ieee.org/how-ibm-wats

_Pear :baba_sleepy:

@emilymbender@dair-community.social truly horrifying. I'd personally be more comfortable with a coin flip.

Emily M. Bender (she/her)

Why can't more journalists reporting on "AI" straightforwardly say "This is bad, actually"? Case in point: In this article juxtaposing the grandiose claims of OpenAI et al with the massive environmental footprint of LLMs, Goldman still has to include this weird AI optimism:

fortune.com/2024/07/12/ai-biza

>>

FeralRobots

@emilymbender the totally hypothetical & completely unspecified potential.

Haelwenn /элвэн/ :triskell:

@emilymbender Well, it's Fortune, and journalists by default are writers for hire.

Michael Bishop ☕

@emilymbender I listened to NPR this morning talking about the Olympics using AI, and they didn't address any negatives.

Emily M. Bender (she/her)

We've all been laughing at the obvious fails from Google's AI Overviews feature, but there's a serious lesson in there too about how it disrupts the relational nature of information. More in the latest Mystery AI Hype Theater 3000 newsletter:

buttondown.email/maiht3k/archi

Matthew Loxton

@emilymbender
Good piece, thanks

I also wonder if the results sometimes create false memories in the ways that Dr. Elizabeth Loftus has demonstrated

Emily M. Bender (she/her)

Big Tech likes to push the trope that things are moving and changing too quickly and there's no way that regulators could really keep up --- better (on their view) to just let the innovators innovate. This is false: many of the issues stay stable over quite some time. Case in point: Here's me **5 years ago** pointing out that large language models shouldn't be used as sources of information about the world, and that doing so poses risks to the information ecosystem:

x.com/emilymbender/status/1766

huntingdon

@emilymbender

Tech moves so fast, so-called innovators can't keep up. So they just roll their snowballs downhill and hope for the best. Exactly the environment responsible regulators should monitor closely.

Marsh Ray

@emilymbender
@futurebird
Other things in my lifetime I've been told "shouldn't be used as sources of information":

* Social media
* Wikipedia
* Web search engines
* YouTube
* The Internet
* Web pages
* Anything you see on TV or film
* Anything from a politically affiliated source
* Anything from an astronaut
* Anything from a Freemason
* Anything from an interested party
* Anything from a detached academic (particularly economists)
* Anything from a corporation
* Anything from any elected official
* Anything from any government agency
* Anything from any Western medicine doctor or Big Pharma
* Anything from an advocate of [economic system]
* Anything from a [gender]
* Anything from a [race]
* Anything from a [nationality]
* Anything from a believer of [specific religion]
* Anything not in [ancient text]
* Anything from a believer of any religion
* Anything from an atheist
* Everything you read
* Everything you hear

The point here is that such advice is generally non-actionable, and that people are almost always better served by practical risk- and harm-reduction strategies than abstinence-only advocacy.

Nazo

@emilymbender I would argue that even if tech were moving and changing too quickly --- even if we accept the argument entirely at face value --- that would just mean it's necessary to act faster. If some tech arose that allowed people to make atomic bombs in their basement, it would be regulated quickly, not allowed to proceed in the hope that it would regulate itself.

Emily M. Bender (she/her)

This is funny, but also actually a really bad sign for general enshittification of the web. The most alarming detail here is that Amazon is actually promoting the use of LLMs to create fake ad copy.

arstechnica.com/ai/2024/01/laz

ROTOPE~1 :yell:

@emilymbender it's really great that Amazon does not have the technological wherewithal to see such a pattern of submissions, and do a single thing about it.

Jeff, Cat Herder

@emilymbender I'm pretty happy to let my Amazon Prime membership expire in a month. It's been getting worse and worse, and with the ads in videos, ongoing failure to screen bad actors in their marketplace, and now this... yeah, I'm done.

cuan_knaggs

@emilymbender on the other hand it's quite handy that they're just telling on themselves now. makes them easier to block

Emily M. Bender (she/her)

With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes). 🧵 1/

Amiya Behera #FBPPR

@emilymbender We rely on cryptography to make us safe.
Can AI break cryptography? Unlikely --- it can't break the physics. Further, AI can help in making cryptography better. iambrainstorming.wordpress.com

DELETED

@emilymbender "Ultra-rich man-children with god complexes" oh my god I love this description :D

Bob Davidson

@emilymbender

I feel bad now for posting this picture in 2001 on Wikipedia in an article about how hummingbirds migrate south on the backs of geese. In fact, the geese stay around here all winter.

Pablojordi

@emilymbender Great points. Will the pollution generated by generative text and images mean a comeback of archaic sources of info such as paper books, libraries, and museums?

Emily M. Bender (she/her)

This paper, by Abercrombie, Cercas Curry, Dinkar, and @zeerak, is a delight!

arxiv.org/abs/2305.09800

Some highlights:

Emily M. Bender (she/her)

Fig 1 is chef's kiss. The initial system response reads like it was made 'safe', but then read the improved version. I'd love to live in that world.

>>

Emily M. Bender (she/her)

Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole.

Things they aren't telling us:
1) What data it's trained on
2) What the carbon footprint was
3) Architecture
4) Training method

>>

Emily M. Bender (she/her)

But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9).

>>

Emily M. Bender (she/her)

With #Galactica and #ChatGPT I'm seeing people again getting excited about the prospect of using language models to "access knowledge" (i.e. instead of search engines). They are not fit for that purpose --- both because they are designed to just make shit up and because they don't support information literacy. Chirag Shah and I lay this out in detail in our CHIIR 2022 paper:

dl.acm.org/doi/10.1145/3498366

>>

Emily M. Bender (she/her)

Coming up in about an hour: Episode 6 of Mystery AI Hype Theater 3000, wherein @alex and I take on #Galactica and the associated #AIhype

Join us at 9:30am Pacific today (Nov 23) here: twitch.tv/katesilver538

Emily M. Bender (she/her)

Facebook (sorry: Meta) AI: Check out our "AI" that lets you access all of humanity's knowledge.

Also Facebook AI: Be careful though, it just makes shit up.

This isn't even "they were so busy asking if they could" --- but rather they failed to spend 5 minutes asking if they should.

#AI #ML #MathyMath #Bullshit #NLP #NLProc #AIhype

Emily M. Bender (she/her)

Using a large language model as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now that it's being proposed by a social media company. Fortunately, Chirag Shah and I already wrote the paper laying out all the ways in which this is a bad idea.

dl.acm.org/doi/10.1145/3498366

#AI #ML #MathyMath #Bullshit #NLP #NLProc #AIhype
