Doc Impossible

@matt Yes.

First of all, LLMs are not intelligent. They're autocomplete on steroids. Their use actively degrades work efficiency according to every survey I'm aware of.

Second, the mass surveillance is the problem. The government now captures *trillions* of hours of video a year, plus an absolutely incomprehensible amount of text data--so much that it hasn't known what to do with it for decades.

The point of the observation regime is your post. It's the panopticon.

Nova🐧✨

@Impossible_PhD @matt not to mention LLMs are EXPENSIVE to run at scale. They're not something you can just keep in a cloud and analyze all the data with unless you're gushing money like Niagara Falls (as is happening with OpenAI and similar), so the mass surveillance is mostly done on-device, which means you can easily control what data gets analyzed by choosing which devices to keep.
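(To make the cost point concrete, here's a rough back-of-envelope sketch in Python. Every number in it is an assumption chosen for illustration, not a published price.)

```python
# Back-of-envelope: cost of running an LLM over mass-collected text.
# ALL figures below are illustrative assumptions, not real prices.

population = 100_000_000          # assumed: people under surveillance
tokens_per_person_day = 10_000    # assumed: messages, emails, transcripts
cost_per_million_tokens = 0.50    # assumed: USD, cheap hosted inference

daily_tokens = population * tokens_per_person_day
daily_cost = daily_tokens / 1_000_000 * cost_per_million_tokens
print(f"{daily_cost:,.0f} USD/day")         # 500,000 USD/day
print(f"{daily_cost * 365:,.0f} USD/year")  # ~182,500,000 USD/year
```

Even with cheap inference and modest assumptions, the bill runs to nine figures a year, and that's text only, before any video.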

If your device has an NPU and an operating system not under your control (non-rooted Android, iOS, Windows, etc.), it's likely going to be analyzing your data whether you want it to or not. Older Android phones work with most apps anyway, because phones haven't really changed in years! Remember, the data itself is practically useless to surveillance agencies (analyzing it is hard) but metadata is incredibly useful, so they'll try to make the analysis and such into features (tag your friends with their face on Facebook!)
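(As a sketch of why metadata matters so much: a single JPEG already carries device model, timestamp, and often location tags. A minimal example using the Pillow library; the file name is hypothetical.)

```python
# Dump the EXIF metadata embedded in a photo.
# Requires: pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")  # hypothetical file name
exif = img.getexif()
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
    print(f"{name}: {value}")
```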

And to justify upgrading to these new phones and spending all that money, they try to make it all features... they don't bother hiding the surveillance away but instead try to make it the hot new thing... LLM summarization in iOS? The same hardware makes sentiment analysis easy (not necessarily what they're doing, but it's easy, as sketched below). Recall in Windows is treated as a feature but is also a very searchable surveillance tool (one that luckily got pushback but is still going to be activated and forced on users).
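(A sketch of just how little code sentiment analysis takes, using the open-source Hugging Face transformers library and its default model; this is not what Apple or Microsoft actually run, just a demonstration that the capability is commodity.)

```python
# Sentiment analysis in a few lines: anything that can summarize your
# messages can classify them just as cheaply.
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model
result = classifier("I'm worried about what my phone knows about me.")
print(result)  # e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```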

Recall got pushback because it's an incredibly tangible form of surveillance that's suuper easy for people to understand the implications of: "oh my god it screenshots what i'm doing? i'd hate it if some random person did that to me" is easier to understand than abstract privacy stuff like collecting sentiment analysis and profiling people for ad tracking.


Matt Campbell

@Impossible_PhD Fair enough. But now, if we assume that whatever's left of due process and rule of law are about to go away, then wouldn't looking suspicious as fuck be enough to make them come down on you with extreme prejudice, even if they haven't sifted through *all* the data?

your 5:00 call to go blading

I think doc is arguing on a macro scale. If even just 5 million people look suspicious every day, it's infeasible to send cops to 5 million doors.

I would caution that the machine's favorite tactic in such a case is to just pick a few and "make an example" of them, though, and that they are more likely to pick those based on class and identity. So it's really important that white guys, and especially middle-class white guys, do their part.

@matt @Impossible_PhD

Matt Campbell

@chairgirlhands @Impossible_PhD Interesting. My sister and I, both middle-class and white, were thinking that we should *not* look suspicious, the better to keep ourselves safe so we can protect others more at risk.

Matt Campbell

@chairgirlhands @Impossible_PhD Of course, I might have already ruined that by posting here.

your 5:00 call to go blading replied to Matt

I think a good example to keep in mind is "partner" and "significant other" being used by heterosexual people.

That could be viewed as "suspicious" by homophobes. But the more straight people that do it, the harder it is for them to guess that anyone doing it is gay.

Does that make sense?

@matt @Impossible_PhD

Nova🐧✨

@matt @Impossible_PhD this is why i like poisoning the well of data in other ways, like flat-out lying to disrupt fingerprinting using LibreWolf, editing the metadata on photos, and all that... i kinda wish there was some secure way to swap people's data around to heavily confuse the algorithms, because both sides would be structured data and that would make mass surveillance absolutely break entirely... not targeted surveillance though, that has different constraints
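(For the photo-metadata part, a minimal sketch of stripping EXIF before sharing, again with Pillow; file names are hypothetical. Rebuilding the image from its pixel data alone drops every metadata tag.)

```python
# Strip all EXIF metadata from a photo by copying only the pixel data.
# Requires: pip install Pillow
from PIL import Image

img = Image.open("photo.jpg")          # hypothetical input file
clean = Image.new(img.mode, img.size)  # fresh image object, no metadata
clean.putdata(list(img.getdata()))     # copy pixel values only
clean.save("photo_clean.jpg")          # saved without any EXIF tags
```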
