Depends how you define harm. The need for checking grows enormously with the use of bots and large language models. Of course, one should make a best effort to check the validity of any resource, but more robust and cautious checks greatly increase the time required.
Also, it feels like generating noise at random and then checking it for anything that could be a sonnet.