Love this:

If I had to pick a way forward, I’d probably define a target like, “precisely calibrated and thoughtfully defanged implementations of double-edged affordances, grounded in user research and discussions with specialists in disinformation, extremist organizing, professional-grade abuse, emerging international norms in trust & safety, and algorithmic toxicity.”
@kissane