Вася

@drq “expectation management class” sounds like a general way to dismiss any possibility of radical change without going into detail. It sounds like, “nothing ever happens in the world, relax”. Like there’s been no Industrial Revolution, no Neolithic Revolution, no emergence of intelligence in humans.

If you allow that some things may in principle happen sometimes, you need to talk specifics.

Dr. Quadragon ❌

@trigrax Nope. It's a general way to dismiss both fucking champagne techbros and neoluddite fearmongers, both of whom outright spread misinformation.

There will be change, no doubt. It won't be as dramatic as your imagination wants it to be.

Revolutions look radical and exciting from the outside, and we love a good story. When you look close enough, though, they are far more boring than they seem, resulting from past events slowly coming together until you wake up in a different world.

Вася

@drq So you’re merely arguing about the speed of change, not its eventual scale? How then does that invalidate the concerns of the doomers and/or the hopes of the accelerators? Even if it takes 30 years, it’s still within your lifetime. You and your loved ones will perish or get mind-uploaded or merged with the machines or go to the stars or whatever.

Dr. Quadragon ❌

@trigrax The truly eventual scale of every change is heat death of the Universe.

Dr. Quadragon ❌

@trigrax I'm simply too old and have been into technology for too long to buy into both the PR hype and alarmist hysteria around every shiny new thing.

Everything is either world-saviour or a horseman of the apocalypse, or both. The Internet, videogames, smartphones, blockchain, AI... The world is still here, so no apocalypse, and it doesn't look all that saved either. So... Fuck off, y'all monkeys are too easily impressed. Get back to me when it's time to discuss something actually relevant.

Вася

@drq What would count as actually relevant in your book?

Dr. Quadragon ❌

@trigrax Capitalism.

Most of the problems people have with AI come not from AI itself, but from the system that employs it.

People talk about the AI misalignment problem without realizing that precisely this kind of misalignment didn't come from nowhere; it's there because we've already built a system with misaligned incentives, and it's been hard at work for over two centuries, maximizing profit at the expense of everything else - politics, ecology, education, you name it. We're already living inside the giant paperclip fucking maximizer. And it was not built yesterday.

This is the real problem, not the machine learning technology in and of itself.

Вася

@drq The real alignment problem — the one that doomers are talking about (any doomer worth their salt, at least) — is not “there” yet. It will arrive with a superintelligence, one that will not give a damn about profits or their beneficiaries, in about the same way as you don’t give a damn about foraging ants whose anthill you inadvertently tread on.

Dr. Quadragon ❌

@trigrax We don't have what it takes to create "superintelligence". We don't know what "intelligence" is in the first place. We probably will create some kind of intelligence, if we try real hard, but I don't believe it will be any more "super" than any of us, save for maybe experience.

In other words, "superintelligence" is science fiction stuff, IMO.

Вася

@drq Intelligence is “brain stuff”. Superintelligence is “a lot of brain stuff”.

Consider mice and humans. Both have a brain. Even their brain architectures aren’t vastly different. A blob of neurons with some identifiable areas, some of which are common between the two. Humans just have ~1000× the stuff. And humans go to the Moon, while mice go into mousetraps.

Dr. Quadragon ❌

@trigrax Whales and elephants exist. They got lots of brain stuff. Much more than humans do.

They're not "superintelligent", whatever that means.

Вася replied to Dr. Quadragon ❌

@drq Good point. It may take some hitherto unknown architectural advances, not just scaling up. But looking at the advances of neural nets over the past decade, I don’t see how it can be dismissed. It might be science fiction, but you know, a lot of science fiction stuff from the 1900s and even the 1950s is mundane reality now.

Dr. Quadragon ❌ replied to Вася

@trigrax

> looking at the advances of neural nets over the past decade

As I said, too easily impressed.

Yeah, I mean it's nice/horrible depending on application and context, but it's not anywhere near either a rapture or an apocalypse.

Вася replied to Dr. Quadragon ❌

@drq

> it's not anywhere near either a rapture or an apocalypse

I’m not saying it is. I’m saying the rate of progress (20 years from a struggling ZIP code recognizer to GPT-4o), combined with the history of other fields of technology, justifies the concern that if this carries on for another couple of decades, we *might* get to that humans-vs.-mice level. We can’t be sure, but it looks just plausible enough to get worried or hyped up.
