mcc

INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–

ME: No

INTERNET RATIONALIST: What

ME: I am declining to imagine the hyperintelligent artificial intelligence.

INTERNET RATIONALIST:

ME: I'm thinking about birds right now

INTERNET RATIONALIST:

ME: Dozens of crows, perched atop great standing stones

Jonas Wisser

@mcc You know, I already knew Roko's Basilisk was stupid, but for some reason it never occurred to me before now that it's just self-proclaimed rationalists reinventing God and Hell the hard way.

mcc

@jwisser The way I first learned about Bayesian reasoning was in the evolution-vs-intelligent-design-arguments Usenet group. Most of the laziest proofs of the existence of God by internet theists leveraged Bayes' theorem, and could be most easily punctured with the sentence "you selected bad priors". Now a couple decades pass and people who know about Bayes' theorem but not theology are re-inventing "God" from first principles for different reasons, but with very similar bad priors
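
To make the "bad priors" point concrete, here is a minimal Python sketch (all numbers invented for illustration): Bayes' theorem happily launders an absurd prior into a confident-looking posterior, because the arithmetic is valid regardless of what you feed it.

```python
# Minimal sketch of Bayes' theorem; the likelihoods and priors below are
# invented for illustration, not taken from any real argument.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) by Bayes' theorem."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1.0 - prior))
    return p_evidence_given_h * prior / p_evidence

# Identical evidence, two different priors:
p_e_if_true, p_e_if_false = 0.9, 0.3

print(posterior(0.5, p_e_if_true, p_e_if_false))   # prior 50%   -> posterior 0.75
print(posterior(1e-6, p_e_if_true, p_e_if_false))  # prior ~zero -> posterior ~3e-6

# The theorem is fine either way; the conclusion is only as good as the prior.
```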

Luci for dyeing

@mcc @jwisser apparently elon musk and grimes met because they both made roko’s basilisk jokes on twitter

kinda feelin like there’s a roko’s basilisk joke in that somewhere

Rob Hafner

@jwisser @mcc oh oh this also applies to the "we're a simulation" people too!

A higher power (literally from a higher dimension) created us in their image, as above so below, and all the other religious fun but with math!

mcc

@tedivm @jwisser right. You can make a probabilistic argument we are almost certainly in a simulation by simply defining a sufficiently arbitrary ensemble
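
As a sketch of that ensemble move (the counts below are pure stipulation, which is the point): once you assert that simulated observers vastly outnumber real ones, the "almost certainly" falls straight out of the assertion.

```python
# The ensemble trick: pick the population sizes, and the probability follows.
real_minds = 1
simulated_minds = 10**12  # arbitrarily large, by construction

p_simulated = simulated_minds / (real_minds + simulated_minds)
print(p_simulated)  # ~0.999999999999 -- the conclusion was baked into the ensemble
```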

The Banished Dragon Alephwyr

@jwisser @mcc The two concepts don't even make rational sense to conjoin.

Jesse Morris

@jwisser @mcc It’s dumber than that, though, because instead of an implausible threat of actually going to hell, the threat is that an implausible AI will think about you going to hell

chronos

@mcc In my opinion, the real problem with this and other such scenarios really boils down to: just because a "rationalist" can imagine it doesn't mean it's logically consistent with reality.

The most obvious case of this is whenever EY goes on (and on and on, at length) about Newcomb's paradox, which is only possible if you assume that hyperintelligent omniscient beings which can solve the Halting Problem in O(1) time are possible. So much of the Less Wrong dreck falls apart if you know... like... anything about the subjects that EY thinks he's talking about.

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

rellik moo

@chronos

Excuse me, I'm just going to pull this up and out on its own to highlight it:

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

@mcc

suushikijitsu

@chronos @mcc newcomb's paradox wasn't originally a lesswrong thing though?

chronos

@suushikijitsu @mcc No, Newcomb's problem predates LW, but as best as I can tell Newcomb's problem is built on top of the same misguided field that taught Big Yud everything he knows: the side of economics that got really, really obsessed with behavioral sociology and invented Homo economicus as an idealized "rational" approximation of human behavior.

Edit: I posted a whole thing about it on my blog: chronos-tachyon.net/blog/post/

Chris Silverman 🌻

@mcc ME: why yes, I am the hyperintelligent artificial intelligence

INTERNET RATIONALIST: Um,

vulp

@mcc personally I would solve AI alignment by asking the malicious genie for three extra wishes

Jacob Harris

@mcc I reached this point with the whole stupid “nuclear bomb that only disarms if you recite a racial slur” thought experiment a few months back. I decline to be part of imagining that it is ever plausible or the basis of a serious argument.

RAOF

@mcc This is going to expose me as someone that's spent too much time looking at that stuff, but I particularly like the one that's “What if there was this magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box or nothing if you do open the first box” and then has a huge convoluted philosophical argument trying to work out how to make “only open the box you know will contain $1,000,000” the “rational” choice.

Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.
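
For what it's worth, the expected-value arithmetic inside the hypothetical is short. A sketch using the dollar amounts from the post above, with the predictor's accuracy p left as a free parameter (the sample accuracy values are invented):

```python
# Expected value of each strategy inside the Newcomb-style hypothetical,
# using the post's numbers: $10 in the small box, $1,000,000 in the big one.
# p is the probability the predictor correctly anticipates your choice.

def ev_one_box(p):
    # The big box is full whenever the predictor foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # You always pocket the $10; the big box is full only if the predictor erred.
    return 10 + (1 - p) * 1_000_000

for p in (0.5, 0.500006, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))

# One-boxing pulls ahead once p > 0.500005; the entire dispute is really over
# whether an observably reliable predictor is admissible in the first place.
```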

mcc

@RAOF or even the contrapositive (? Did I use that word right) of that argument, "the magic box literally does not exist in reality, because this is a thought experiment, therefore I can make the argument it cannot exist"

RAOF

@mcc That is another perfectly sensible option!

Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.

Alex Zani

@mcc @RAOF The hypothetical is clearest with the omniscient being, but all the decision-theory-relevant bits still work if you just have a good predictor. (e.g. a friend who knows you well)

Alex Zani

@mcc @RAOF Like, I don't think paradoxes in decision theory are much more than nerdy puzzles, but the supernatural powers they assume just make the hypothetical a bit cleaner. It's not usually required for the point they're making.

mcc

@AlexandreZani @RAOF Well… no, I think I'd argue the supernatural powers *are* necessary, because if the predictor is like a really good friend then suddenly I have to start asking questions like, *is* there any person on earth to whom I've revealed enough of myself that they can predict how I'd behave in extreme situations, and suddenly I'm judging against "how well do they know me" and not the probabilities the thought experiment is supposed to be about.

mcc

@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… hasn't the thought experiment ultimately shown that the probability function isn't useful?

Because that was my point to start with– if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…

mcc

@AlexandreZani @RAOF …Well, then some of the hard parts get easy!

Avram Grumer

@RAOF @mcc That sounds like Newcomb’s Paradox, and it’s older than Yudkowsky is.

en.wikipedia.org/wiki/Newcomb%

серафими многоꙮчитїи

@mcc A lot of the issue is also to do with being deliberately sloppy about 'possible'. Like, do you mean modality, and if so which one? Or is it some kind of quantification, and if so what are the details? Or is it a simple predicate? E.g. doing moral reasoning that treats an imagined 10^80 possible intelligences in cyberheaven as real, actual, present, existing people, which is nonsense.

серафими многоꙮчитїи

@mcc They have to selectively move between meanings of 'possible' at different parts of their argument and their deliberate sloppiness (and logically invalid moves) are disguised with a heavy layer of the aesthetic of logic.

JP

@flaviusb @mcc "aesthetic of logic" is a good way to put it because so much of that world is about performing intelligence, believing in and trying to be superhumanly smart kids who alone can stand against the invented enemy, even-superer-humanly smart computers. just a weird sad intellectual limb to have climbed out onto.

Someone tell me to get up

@jplebreton @flaviusb @mcc it's not exactly the same, but "aesthetic of logic" reminds me of a similar thing called scientism, which is basically the "aesthetic of science". Philosophers apparently like to accuse each other of being scientistic, but I've found it a useful concept to have in mind to rebut some arguments that are ultimately just appeals to the authority of the arguer's misunderstanding of science.

Jesse Baer 🔥

@jplebreton @flaviusb @mcc Tema Okun could have spared herself and the left nonprofit world a lot of trouble by replacing most of her pamphlet on white supremacy culture with a big arrow pointing to Less Wrong.

Ryusui

@flaviusb @mcc i'm having visions of one of these dingbats trying their "logical thinking" cosplay act and Mr. Spock walking up and slapping them

mcc

@theryusui @flaviusb ok so you say this but the first season of Discovery actually had a subplot where a group of "logic extremists" logicked themselves into being a Vulcan alt-right and started assassinating people.

(and… I guess probably Spock would have slapped them, but he didn't get cast until season 2! So instead Spock's sister had to do it…)

🍁Maple🍁

@mcc

An old man in a white traveling cloak with vermillion tunic carries a pole over one shoulder to which is tied sheaves of grain. In his free hand is a little iron sickle, and as he walks along a dirt path, two little foxes prance about his heels. One black and one white. One with a key and the other with a gourd full of wine.

Matthew Green

@mcc You are the first person in history who got a rationalist to stop talking.

mcc

@matthew_d_green Note it was not a real rationalist, but a hypothetical rationalist I posited for the sake of a thought experiment. But then again, as I understand it, effective altruists believe hypothetical people posited for thought experiments are individually as valid as real ones, so whatever

Matthew Green

@mcc The more hypothetical people you create the better the utilitarian calculus, so you’re doing good.

ZenithAstralis

@mcc me: just having read a book involving pairs of crows who together could very convincingly mimic intelligent thought and speech, yet who were very insistent that they (and everyone else) were in fact just acting intelligent, while not actually being aware whatsoever.
me: "so what if all these crows could talk, and process vast quantities of information, BUT were only interested in shiny objects and bits of food."

Chris [list of emoji]

@mcc

So I read^Wskimmed the Wikipedia page and it seems like if you round any probability below (say) 10**-9 to zero, you stop making these sorts of stupid decisions.
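
That rounding rule is easy to state as code. A sketch (the payoff and probability below are invented; the 10**-9 threshold comes from the post):

```python
# Expected value with and without a floor on tiny probabilities.

FLOOR = 1e-9  # threshold suggested in the post above

def expected_value(p, payoff, floor=None):
    if floor is not None and p < floor:
        p = 0.0  # treat one-in-a-trillion claims as simply false
    return p * payoff

huge_payoff = 10**20  # e.g. "10^20 hypothetical souls", invented for illustration
tiny_p = 1e-12

print(expected_value(tiny_p, huge_payoff))         # 1e8 -- mugged by arithmetic
print(expected_value(tiny_p, huge_payoff, FLOOR))  # 0.0 -- decision stays sane
```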

Toby Haynes

@mcc Ok - it's not dozens of crows but ...

aeva

@mcc it's a shame that shouting "THIS STATEMENT IS FALSE" doesn't really work to ward off hyper-intelligent something anothers and also armchair logicians
