Dr. Quadragon ❌

@trigrax Capitalism.

Most of the problems people have with AI come not from AI itself, but from the system that employs it.

People talk about the AI misalignment problem without realizing that precisely this kind of misalignment didn't come from nowhere. It's there because we've already built a system with misaligned incentives, and it's been hard at work for over two centuries, maximizing profit at the expense of everything else - politics, ecology, education, you name it. We're already living inside the giant paperclip fucking maximizer. And it was not built yesterday.

This is the real problem, not the machine learning technology in and of itself.

7 comments
Вася

@drq The real alignment problem — the one that doomers are talking about (any doomer worth their salt, at least) — is not “there” yet. It will arrive with a superintelligence, one that will not give a damn about profits or their beneficiaries, in about the same way as you don’t give a damn about foraging ants whose anthill you inadvertently tread on.

Dr. Quadragon ❌

@trigrax We don't have what it takes to create "superintelligence". We don't know what "intelligence" is in the first place. We probably will create some kind of intelligence, if we try real hard, but I don't believe it will be any more "super" than any of us, save for maybe experience.

In other words, "superintelligence" is science fiction stuff, IMO.

Вася

@drq Intelligence is “brain stuff”. Superintelligence is “a lot of brain stuff”.

Consider mice and humans. Both have a brain. Even their brain architectures aren’t vastly different. A blob of neurons with some identifiable areas, some of which are common between the two. Humans just have ~1000× the stuff. And humans go to the Moon, while mice go into mousetraps.

Dr. Quadragon ❌

@trigrax Whales and elephants exist. They've got lots of brain stuff. Much more than humans do.

They're not "superintelligent", whatever that means.

Вася replied to Dr. Quadragon ❌

@drq Good point. It may take some hitherto unknown architectural advances, not just scaling up. But looking at the advances of neural nets over the past decade, I don’t see how it can be dismissed. It might be science fiction, but you know, a lot of science fiction stuff from the 1900s and even the 1950s is mundane reality now.

Dr. Quadragon ❌ replied to Вася

@trigrax

> looking at the advances of neural nets over the past decade

As I said, too easily impressed.

Yeah, I mean it's nice/horrible depending on application and context, but it's nowhere near either a rapture or an apocalypse.

Вася replied to Dr. Quadragon ❌

@drq

> it's nowhere near either a rapture or an apocalypse

I’m not saying it is. I’m saying the rate of progress (20 years from a struggling ZIP code recognizer to GPT-4o), combined with the history of other fields of technology, justifies the concern that if this carries on for another couple of decades, we *might* get to that humans-vs.-mice level. We can’t be sure, but it looks just plausible enough to get worried or hyped up.
