Marsh Ray

@emilymbender
@futurebird
Other things in my lifetime I've been told "shouldn't be used as sources of information":

* Social media
* Wikipedia
* Web search engines
* YouTube
* The Internet
* Web pages
* Anything you see on TV or film
* Anything from a politically affiliated source
* Anything from an astronaut
* Anything from a Freemason
* Anything from an interested party
* Anything from a detached academic (particularly economists)
* Anything from a corporation
* Anything from any elected official
* Anything from any government agency
* Anything from any Western medicine doctor or Big Pharma
* Anything from an advocate of [economic system]
* Anything from a [gender]
* Anything from a [race]
* Anything from a [nationality]
* Anything from a believer of [specific religion]
* Anything not in [ancient text]
* Anything from a believer of any religion
* Anything from an atheist
* Everything you read
* Everything you hear

The point here is that such advice is generally non-actionable, and that people are almost always better served by practical risk- and harm-reduction strategies than by abstinence-only advocacy.

myrmepropagandist

@marshray @emilymbender

Actions:

- Do not display AI responses at the top of search-engine results as if they were the definitive answer.
- Demote pages that use LLM-generated content in search rankings and recommendation algorithms (a toy sketch follows below).
- Refrain from integrating AI responses to content questions into company chatbots.

There are a lot of ways this is actionable. They're not often things individuals have control over, but this tech is being injected into all sorts of places where it doesn't belong.
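As a toy illustration of the "demote" point above, here is a minimal sketch of what ranking-time demotion could look like. The classifier score, threshold, and penalty are invented for illustration and don't reflect any real search engine's pipeline:

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float       # base relevance score from the retrieval stage
    llm_likelihood: float  # hypothetical classifier confidence in [0, 1]

def demoted_score(page: Page, threshold: float = 0.8, penalty: float = 0.5) -> float:
    # Apply a multiplicative demotion when a page is flagged as likely
    # LLM-generated; threshold and penalty are illustrative knobs only.
    if page.llm_likelihood >= threshold:
        return page.relevance * penalty
    return page.relevance

def rank(pages: list[Page]) -> list[Page]:
    # Sort results by the (possibly demoted) score, best first.
    return sorted(pages, key=demoted_score, reverse=True)

results = rank([
    Page("https://example.com/human-written", relevance=0.70, llm_likelihood=0.05),
    Page("https://example.com/llm-farm", relevance=0.90, llm_likelihood=0.97),
])
print([p.url for p in results])  # the human-written page now outranks the LLM farm
```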

Marsh Ray

@futurebird @emilymbender +1 agree.

We developed "typographic conventions" (like quotation marks) that allow us to reproduce the words of others with proper attribution. Japanese has a whole separate script, katakana, used to write names and words of foreign origin.

We really ought to consider adopting such a convention for AI-generated text.

Those training the AI models are likely to find it incredibly useful as well.
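To make the idea concrete, here is a minimal sketch of what such a convention could look like in a text-processing pipeline. The sentinel characters and function names are invented for illustration; no standard convention like this actually exists:

```python
import re

# Hypothetical sentinel delimiters for AI-generated spans; any reserved
# pair of characters would do. These are invented for illustration.
AI_OPEN, AI_CLOSE = "\u27ea", "\u27eb"  # the brackets "⟪" and "⟫"

def mark_ai_text(text: str) -> str:
    # Wrap an AI-generated span in the sentinel delimiters, much like
    # quotation marks attribute reproduced speech.
    return f"{AI_OPEN}{text}{AI_CLOSE}"

def strip_ai_text(document: str) -> str:
    # Drop marked spans, e.g. when assembling a training corpus so that
    # models are not trained on their own (marked) output.
    pattern = re.escape(AI_OPEN) + r".*?" + re.escape(AI_CLOSE)
    return re.sub(pattern, "", document, flags=re.DOTALL)

doc = "Human prose. " + mark_ai_text("A model-written aside.") + " More human prose."
print(strip_ai_text(doc))  # -> "Human prose.  More human prose."
```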

Nazo

@marshray @futurebird @emilymbender Technically, katakana was just what was used to write Japanese a really, really long time ago. After it completely fell out of use, it was repurposed. In some ways it's like how we give Latin names to modern things.

EDIT: Well, I stand corrected. Wikipedia says it was for transliteration from the start.

Though that's from Wikipedia, so... 😁

Nazo

@marshray @emilymbender @futurebird This is a little too extreme. For example, Wikipedia at least *tries* to maintain some vague semblance of detachment. It doesn't always succeed, and there are whole articles about how certain sections get sort of taken over by bias, but it shouldn't just be offhandedly disregarded either.

Similarly, some political information comes from inside sources and can only really be obtained from an affiliated source. You have to take it with a grain of salt, but you can't ignore it.

Marsh Ray

@nazokiyoubinbou The point isn't that any of those things are wholly good or bad, it's that admonitions to simply disregard available sources of information are practically never effective.
