@popeawesomexiii right, that's why I'm mentioning the bits about federation. Understanding and internalizing that will help people understand how and why this place is different.
We serve the same basic purpose but without a corporate overlord.
@cobecio @ajroach42 All sites have their terms and conditions. US law is pretty clear that if Person X posts a terrorist threat on Instagram, Instagram can't be sued (for example).

@popeawesomexiii @ajroach42 I'm not familiar with US legislation. Does the platform have any obligation to monitor and/or report threats?

@cobecio @ajroach42 Now that I don't know; I'm curious about the response from OP. The reporting system is much easier and more transparent here (simply tell the person running your instance), but beyond that, I'm new here.

@popeawesomexiii @cobecio As long as Section 230 of the Communications Decency Act remains intact, or until the next time the fascists take office.

@cobecio @popeawesomexiii The answer to the first question largely depends on the jurisdiction of the parties involved. In the US, at least, liability falls on the person who took the illegal action, and possibly on the person who operated the instance they were on, if they were notified about the activity and didn't curtail it (at least based on my reading of case law related to email). But I'm not a lawyer, and I'm definitely not a fediverse lawyer. What do you mean by shared policy/angry mob?
@ajroach42 @popeawesomexiii while I appreciate the whole federation idea, I'm still puzzled about who would be held responsible for what happens on platform (or one of its instances), in case of legally relevant misconduct. Also: is there a shared policy to prevent angry mob situations, as seen on other social media?