> Governments are nowhere near as omniscient as you think
Are you sure that's still true in the age of mass surveillance, big data, and large language models?
@Impossible_PhD Fair enough. But now, if we assume that whatever's left of due process and rule of law is about to go away, then wouldn't looking suspicious as fuck be enough to make them come down on you with extreme prejudice, even if they haven't sifted through *all* the data?

I think doc is arguing on a macro scale. If even just 5 million people look suspicious every day, it's infeasible to send cops to 5 million doors. I would caution that the machine's favorite tactic in such a case is to pick a few and "make an example," though, and that they are more likely to pick those based on class and identity. So it's really important that white guys, and especially middle-class white guys, do their part.

@chairgirlhands @Impossible_PhD Interesting. My sister and I, both middle-class and white, were thinking that we should *not* look suspicious, the better to keep ourselves safe so we can protect others more at risk.

@chairgirlhands @Impossible_PhD Of course, I might have already ruined that by posting here.

I think a good example to keep in mind is "partner" and "significant other" being used by heterosexual people. That could be viewed as "suspicious" by homophobes. But the more straight people who do it, the harder it is for them to guess that anyone doing it is gay. Does that make sense?

@matt @Impossible_PhD This is why I like poisoning the well of data in other ways: flat-out lying to disrupt fingerprinting using LibreWolf, editing the metadata on photos, and all that. I kinda wish there were some secure way to swap people's data around to confuse the algorithms, because both would be structured data and that would make mass surveillance break entirely. Not targeted surveillance, though; that has different constraints.
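The "poisoning the well" idea above can be sketched as replacing real profile fields with plausible noise so aggregated records stop matching the real person. This is only a toy illustration, not how LibreWolf or any real anti-fingerprinting tool works; the field names and value pools below are invented for the example.

```python
import random

# Hypothetical value pools for the illustration; real fingerprinting
# involves far more fields (fonts, canvas hashes, screen size, etc.).
FAKE_VALUES = {
    "browser": ["Firefox", "Chrome", "Safari", "LibreWolf"],
    "timezone": ["UTC-8", "UTC-5", "UTC", "UTC+1", "UTC+9"],
    "language": ["en-US", "de-DE", "fr-FR", "es-MX"],
}

def poison_profile(profile: dict, rng: random.Random) -> dict:
    """Return a copy of the profile with each known field replaced
    by a randomly chosen plausible value, hiding the real one."""
    noisy = dict(profile)
    for field, pool in FAKE_VALUES.items():
        if field in noisy:
            noisy[field] = rng.choice(pool)
    return noisy

real = {"browser": "LibreWolf", "timezone": "UTC-8", "language": "en-US"}
decoy = poison_profile(real, random.Random())
```

The point of the sketch is the commenter's distinction: noise like this degrades *mass* profiling, which relies on consistent structured records, but does nothing against targeted surveillance of a specific person.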
@matt Yes.
First of all, LLMs are not intelligent. They're autocomplete on steroids. Their use actively degrades work efficiency according to every survey I'm aware of.
Second, the mass surveillance is the problem. The government now captures *trillions* of hours of video a year, plus an absolutely incomprehensible amount of text data--so much that it hasn't known what to do with it for decades.
The point of the observation regime is your post. It's the panopticon.