@AlexanderESmith I didn't say it wasn't for a good cause - but it is poisoning their LLM.

Feeding it the kind of data you described in your comment, OP, will make its output wonky, inaccurate, and hopefully useless. Which is justifiable imo when your LLM/AI model's data well is built without any shred of ethics.