Emily M. Bender (she/her)

Big Tech likes to push the trope that things are moving and changing too quickly and there's no way that regulators could really keep up --- better (on their view) to just let the innovators innovate. This is false: many of the issues stay stable over quite some time. Case in point: Here's me **5 years ago** pointing out that large language models shouldn't be used as sources of information about the world, and that doing so poses risks to the information ecosystem:

x.com/emilymbender/status/1766

24 comments
Chookbot

@emilymbender Would like to read it but twittex won't let me in any more.

Quinn9282 🖥️🌙✌️

@emilymbender Can you post a copy of the thread here? Twitter doesn't let you view post threads anymore without logging in.

Kaito

@Quinn9282 @emilymbender There's a certain irony in the fact that this needs to be pointed out now.

Felichs

@emilymbender
I find the use of the word "ecosystem" problematic when talking about tech. It suggests a naturally occurring system with everything in balance and harmony. Does that sound like tech in 2024?

It's language used to dissuade tampering from outside. We want to protect natural ecosystems, right?

Gated community, siloed systems, anything is better than ecosystem as a descriptor.

Peva Blanchard

@felichsdakatze @emilymbender I agree with the concern about describing a "tech ecosystem". However, I think the "information ecosystem" is not the same thing as the tech system: the information ecosystem is the means by which information is transmitted, so we could say it has been around for much longer than computer-based tech.

You are right, though, about the connotation that an ecosystem is something "natural", independent of us, that we should protect from ourselves. I believe, though, that we should drop the connotation, not the concept itself.

Felichs

@mmin @emilymbender

I approve. I also would accept cesspit or grease trap.

DELETED

@emilymbender The world should listen to you more. ♥

Androcat

@emilymbender Technology moves fast, but the cynical view always keeps up, because techbros are just chasing money and can be expected to do all the wrong things.

oBoo!rosaur

@emilymbender ah yes. Five years ago experts gave warnings but "there isn't enough evidence yet"; five years later "regulators cannot keep up with the pace of change".

Andreas K

@emilymbender

🤷 But then the issues are not exactly surprising even to BSc-level students of the field, and have been known for years, as you say.

But I guess a realistic understanding of the technology is only an obstacle if you are trying to collect checks from investors. Technology cheerleaders do way better in these jobs.

pascoda

@emilymbender

and discussions about how algorithmic systems are biased, will accelerate discrimination, and therefore should be de-biased have been around *at least* since 2016 (which is when I got into the topic, 8 years ago by now).

MarkRDavid

@emilymbender

Proof that it's tech that moves too fast, but the senescent, male narcissists charged with regulating it don't move at all.

Bender for Senate!

Bryan Redeagle

@emilymbender@dair-community.social My own experience in tech has taught me that people in tech think they know more about a field than the folks who study it.

Like that dipshit who thought he revolutionized nutrition and eating when he made Soylent.

Ľuboš Moščovič

@emilymbender
Well, but regulators don't keep up, that's a true statement, no matter that you showed them the issue 5 years ago.

huntingdon

@emilymbender

Tech moves so fast, so-called innovators can't keep up. So they just roll their snowballs downhill and hope for the best. Exactly the environment responsible regulators should monitor closely.

Marsh Ray

@emilymbender
@futurebird
Other things in my lifetime I've been told "shouldn't be used as sources of information":

* Social media
* Wikipedia
* Web search engines
* YouTube
* The Internet
* Web pages
* Anything you see on TV or film
* Anything from a politically affiliated source
* Anything from an astronaut
* Anything from a Freemason
* Anything from an interested party
* Anything from a detached academic (particularly economists)
* Anything from a corporation
* Anything from any elected official
* Anything from any government agency
* Anything from any Western medicine doctor or Big Pharma
* Anything from an advocate of [economic system]
* Anything from a [gender]
* Anything from a [race]
* Anything from a [nationality]
* Anything from a believer of [specific religion]
* Anything not in [ancient text]
* Anything from a believer of any religion
* Anything from an atheist
* Everything you read
* Everything you hear

The point here is that such advice is generally non-actionable, and that people are almost always better served by practical risk- and harm-reduction strategies than abstinence-only advocacy.

myrmepropagandist

@marshray @emilymbender

Actions:

- do not display AI responses to questions typed into search engines at the top, as if they are the definitive response.
- demote pages that use LLM-generated content in searches and algorithms.
- refrain from integrating AI responses for content questions into company chatbots.

There are a lot of ways this is actionable. Not often things individuals have control over, but this tech is being injected into all sorts of places where it doesn't belong.

Marsh Ray

@futurebird @emilymbender +1 agree.

We developed "typographic conventions" that allow us to reproduce the words of others with proper attribution. Japanese has a whole separate script, katakana, used to write names and words of foreign origin.

We really ought to consider adopting such a convention for AI-generated text.

Those training the AI models are likely to find it incredibly useful as well.

Nazo

@marshray @futurebird @emilymbender Technically, katakana was just what was used to write Japanese a really, really long time ago. After it completely fell out of use, it was repurposed. In some ways it's like how we give Latin names to modern things.

EDIT: Well, I stand corrected. Wikipedia says it was for transliteration from the start.

Though that's from Wikipedia, so... 😁

Nazo

@marshray @emilymbender @futurebird This is a little too extreme. For example, Wikipedia at least *tries* to maintain some vague semblance of detachment. It doesn't always succeed, and there are whole articles about how certain sections get sort of taken over by bias, but it shouldn't just be off-handedly disregarded either.

Similarly some political stuff comes from inside sources and can only really be gotten from an affiliated source. You have to take this with a grain of salt, but can't ignore it.

Marsh Ray

@nazokiyoubinbou The point isn't that any of those things are wholly good or bad, it's that admonitions to simply disregard available sources of information are practically never effective.

Nazo

@emilymbender I would argue that even if tech were moving and changing too quickly -- even if we accept the argument entirely at face value -- that would just mean it is necessary to act faster. If some tech arose that allowed people to make atomic bombs in their basements, it would be regulated quickly, not left alone in the hope that it would regulate itself.
