@emilymbender
@futurebird
Other things in my lifetime I've been told "shouldn't be used as sources of information":
* Social media
* Wikipedia
* Web search engines
* YouTube
* The Internet
* Web pages
* Anything you see on TV or film
* Anything from a politically affiliated source
* Anything from an astronaut
* Anything from a Freemason
* Anything from an interested party
* Anything from a detached academic (particularly economists)
* Anything from a corporation
* Anything from any elected official
* Anything from any government agency
* Anything from any Western medicine doctor or Big Pharma
* Anything from an advocate of [economic system]
* Anything from a [gender]
* Anything from a [race]
* Anything from a [nationality]
* Anything from a believer of [specific religion]
* Anything not in [ancient text]
* Anything from a believer of any religion
* Anything from an atheist
* Everything you read
* Everything you hear
The point here is that such advice is generally non-actionable, and that people are almost always better served by practical risk- and harm-reduction strategies than by abstinence-only advocacy.
@marshray @emilymbender
Actions:
-do not display AI responses at the top of search results as if they were the definitive answer.
-demote pages that use LLM-generated content in search rankings and recommendation algorithms.
-refrain from integrating AI responses for content questions into company chatbots.
There are a lot of ways this is actionable. They're not often things individuals have control over, but this tech is being injected into all sorts of places where it doesn't belong.
@marshray @emilymbender
Actions:
-do not display AI responses to questions typed into search engines at the top as if they are the definitive response.
-demote pages that use LLM generated content in searches and algorithms
-refrain from integrating AI responses for content questions in company chatbots.