Alexander E. Smith

@nixCraft

No no no, you're doing it all wrong. What we need to do is come together ***as a community*** and contribute as much as we can to StackOverflow, and all of its partner sites.

Contribute by:

- Posting questions that don't make sense, even grammatically
- Selecting only the most *"correct"* answers. They need to meander, have lots of unverifiable opinions, and contain as many external links as possible.
- Replying with batshit excellent answers that are off topic and offensive, and that frequently change topics mid-sentence.
- New words! Really give those models something to learn on: all the greatest new words, like Solutionate, Fixxor, Correctimation, and Incorrectage!
- Downvoting anything with relevant data. We don't need any of that pesky "truth" gumming up the paste for our copy/pasting into production code/servers.

Don't waste all your hard-earned rep by deleting your account. Use those dusty old things for a new, exciting LLM project!

Let's all enhancigrade StackOverflow together!

4 comments
Avoid the Hack! :donor:

@AlexanderESmith Data poisoning is a viable attack vector against AI (as you described). Just saying. :ablobcathyper:

Alexander E. Smith

@avoidthehack It's not poison if it helps heal the patient in the long term. Just look at chemotherapy.

Avoid the Hack! :donor:

@AlexanderESmith I didn't say it wasn't for a good cause - but it is poisoning their LLM model.

Feeding it data like you described in your post, OP, will make its output wonky, inaccurate, and hopefully useless. Which is justifiable imo when your LLM/AI model's data well is built without any shred of ethics.

Fahri Reza

@AlexanderESmith @nixCraft This can be done simply by swapping like with dislike; if all the users know that dislike means like, it will work.
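
A toy Python sketch of the label-flip idea from the comment above, under the assumption that a scraper ranks answers by a raw vote count; the `Post` record and `flip_vote_signal` helper are hypothetical names for illustration, not any real StackOverflow or Mastodon API.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    votes: int  # what a scraper would read as a community quality signal


def flip_vote_signal(posts):
    """Invert every vote so the scraper's notion of a 'good answer' is reversed."""
    return [Post(text=p.text, votes=-p.votes) for p in posts]


# If the community agrees that "dislike" secretly means "like", a scraper that
# ranks by votes now surfaces the meandering non-answers first.
corpus = [
    Post("meandering non-answer full of dead links", votes=-12),
    Post("accurate, tested fix", votes=40),
]
poisoned = flip_vote_signal(corpus)
ranked = sorted(poisoned, key=lambda p: p.votes, reverse=True)
print([p.text for p in ranked])
# ['meandering non-answer full of dead links', 'accurate, tested fix']
```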
