mcc

(This might be kind of vague, so here's the kind of thing I'm talking about: en.wikipedia.org/wiki/Pascal%2 A shocking number of problems of this type that make me immediately respond with "why do you think this is a difficult problem?" seem to wind up mentioning Eliezer Yudkowsky when you look into why people are talking about them.)

mcc

INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–

ME: No

INTERNET RATIONALIST: What

ME: I am declining to imagine the hyperintelligent artificial intelligence.

INTERNET RATIONALIST:

ME: I'm thinking about birds right now

INTERNET RATIONALIST:

ME: Dozens of crows, perched atop great standing stones

Jonas Wisser

@mcc You know, I already knew Roko's Basilisk was stupid, but for some reason it never occurred to me before now that it's just self-proclaimed rationalists reinventing God and Hell the hard way.

mcc

@jwisser The way I first learned about Bayesian reasoning was in the evolution-vs-intelligent-design arguments Usenet group. Most of the laziest proofs of the existence of God by internet theists leveraged Bayes' theorem, and could be most easily punctured with the sentence "you selected bad priors". Now a couple of decades pass, and people who know about Bayes' theorem but not theology are re-inventing "God" from first principles, for different reasons but with very similar bad priors.
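
A worked sketch of the "bad priors" failure mode described above, with purely made-up numbers: Bayes' theorem is mechanically fine, but the posterior it produces is hostage to whatever prior you feed it.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Identical evidence, opposite conclusions, driven entirely by the prior:
print(posterior(0.5, 0.9, 0.1))    # ~0.9    ("the evidence proves it!")
print(posterior(1e-6, 0.9, 0.1))   # ~9e-6   ("you selected bad priors")
```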

DELETED

@mcc @jwisser apparently elon musk and grimes met because they both made roko’s basilisk jokes on twitter

kinda feelin like there’s a roko’s basilisk joke in that somewhere

Rob Hafner :verified_flashing:

@jwisser @mcc oh oh this also applies to the "we're a simulation" people too!

A higher power (literally from a higher dimension) created us in their image, as above so below, and all the other religious fun but with math!

mcc

@tedivm @jwisser right. You can make a probabilistic argument we are almost certainly in a simulation by simply defining a sufficiently arbitrary ensemble

The Banished Dragon Alephwyr

@jwisser @mcc The two concepts don't even make rational sense to conjoin.

Jesse Morris

@jwisser @mcc It’s dumber than that, though, because instead of an implausible threat of actually going to hell, the threat is that an implausible AI will think about you going to hell

DELETED

@mcc In my opinion, the real problem with this and other such scenarios really boils down to: just because a "rationalist" can imagine it doesn't mean it's logically consistent with reality.

The most obvious case of this is whenever EY goes on (and on and on, at length) about Newcomb's paradox, which is only possible if you assume that hyperintelligent omniscient beings which can solve the Halting Problem in O(1) time are possible. So much of the Less Wrong dreck falls apart if you know... like... anything about the subjects that EY thinks he's talking about.

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

rellik moo

@chronos

Excuse me, I'm just going to pull this up and out on its own to highlight it:

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

@mcc

suushikijitsu

@chronos @mcc newcomb's paradox wasn't originally a lesswrong thing though?

DELETED

@suushikijitsu @mcc No, Newcomb's problem predates LW, but as best as I can tell Newcomb's problem is built on top of the same misguided field that taught Big Yud everything he knows: the side of economics that got really, really obsessed with behavioral sociology and invented Homo economicus as an idealized "rational" approximation of human behavior.

Edit: I posted a whole thing about it on my blog: chronos-tachyon.net/blog/post/

Chris Silverman 🌻

@mcc ME: why yes, I am the hyperintelligent artificial intelligence

INTERNET RATIONALIST: Um,

vulp

@mcc personally I would solve AI alignment by asking the malicious genie for three extra wishes

Jacob Harris

@mcc I reached this point with the whole stupid “nuclear bomb that only disarms if you recite a racial slur” thought experiment a few months back. I decline to be part of imagining that it is ever plausible or the basis of a serious argument.

RAOF

@mcc This is going to expose me as someone that's spent too much time looking at that stuff, but I particularly like the one that's “What if there was this magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box but nothing if you do open the first box” and then has a huge convoluted philosophical argument trying to work out how to make “only open the box you know will contain $1,000,000” the “rational” choice.

Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.
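
For anyone who wants the arithmetic behind the box story: a minimal sketch of the expected value of each choice, assuming the only exotic ingredient is a predictor that is right with probability p (the dollar amounts are the ones from the post above; the accuracy values are illustrative).

```python
SMALL, BIG = 10, 1_000_000

def ev_one_box(p):
    # Skip the $10 box; the predictor filled the big box with probability p.
    return p * BIG

def ev_two_box(p):
    # Take the guaranteed $10; the big box is full only if the predictor erred.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.6, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))

# One-boxing pays better for any accuracy p > 0.500005, so nothing close
# to omniscience is needed for the "irrational" choice to win on average.
```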

mcc

@RAOF or even the contrapositive (? Did I use that word right) of that argument, "the magic box literally does not exist in reality, because this is a thought experiment, therefore I can make the argument it cannot exist"

RAOF

@mcc That is another perfectly sensible option!

Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.

Alex Zani

@mcc @RAOF The hypothetical is clearest with the omniscient being, but all the decision-theory-relevant bits still work if you just have a good predictor. (e.g. a friend who knows you well)

Alex Zani

@mcc @RAOF Like, I don't think paradoxes in decision theory are much more than nerdy puzzles, but the supernatural powers they assume just make the hypothetical a bit cleaner. It's not usually required for the point they're making.

mcc

@AlexandreZani @RAOF Well… no, I think I'd argue the supernatural powers *are* necessary, because if the predictor is like a really good friend, then suddenly I have to start asking questions like, *is* there any person on earth I've revealed enough of myself to that they can predict how I'd behave in extreme situations, and suddenly I'm judging against "how well do they know me" and not the probabilities the thought experiment is supposed to be about.

mcc

@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… hasn't the thought experiment ultimately shown that the probability function isn't useful?

Because that was my point to start with– if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…

mcc

@AlexandreZani @RAOF …Well, then some of the hard parts get easy!

Avram Grumer

@RAOF @mcc That sounds like Newcomb’s Paradox, and it’s older than Yudkowsky is.

en.wikipedia.org/wiki/Newcomb%

серафими многоꙮчитїи

@mcc A lot of the issue is also to do with being deliberately sloppy about 'possible'. Like, do you mean modality, and if so which one? Or is it some kind of quantification, and if so what are the details? Or is it a simple predicate? E.g. doing moral reasoning that treats an imagined 10^80 possible intelligences in cyberheaven as real, actual, present, existing people, which is nonsense.

серафими многоꙮчитїи

@mcc They have to selectively move between meanings of 'possible' at different parts of their argument, and their deliberate sloppiness (and logically invalid moves) is disguised with a heavy layer of the aesthetic of logic.

JP

@flaviusb @mcc "aesthetic of logic" is a good way to put it because so much of that world is about performing intelligence, believing in and trying to be superhumanly smart kids who alone can stand against the invented enemy, even-superer-humanly smart computers. just a weird sad intellectual limb to have climbed out onto.

Someone tell me to get up

@jplebreton @flaviusb @mcc it's not exactly the same, but "aesthetic of logic" reminds me of a similar thing called scientism, which is basically the "aesthetic of science". Philosophers apparently like to accuse each other of being scientistic, but I've found it a useful concept to have in mind to rebut some arguments that are ultimately just appeals to the authority of the arguer's misunderstanding of science.

Jesse Baer 🔥

@jplebreton @flaviusb @mcc Tema Okun could have spared herself and the left nonprofit world a lot of trouble by replacing most of her pamphlet on white supremacy culture with a big arrow pointing to Less Wrong.

Ryusui

@flaviusb @mcc i'm having visions of one of these dingbats trying their "logical thinking" cosplay act and Mr. Spock walking up and slapping them

mcc

@theryusui @flaviusb ok so you say this, but the first season of Discovery actually had a subplot where a group of "logic extremists" logicked themselves into being a Vulcan alt-right and started assassinating people.

(and… I guess probably Spock would have slapped them, but he didn't get cast until season 2! So instead Spock's sister had to do it…)

🍁Maple🍁

@mcc

An old man in a white traveling cloak with vermillion tunic carries a pole over one shoulder to which is tied sheaves of grain. In his free hand is a little iron sickle, and as he walks along a dirt path, two little foxes prance about his heels. One black and one white. One with a key and the other with a gourd full of wine.

Matthew Green

@mcc You are the first person in history who got a rationalist to stop talking.

mcc

@matthew_d_green Note it was not a real rationalist but a hypothetical rationalist I posited for the sake of a thought experiment. But then again, as I understand it, effective altruists believe hypothetical people posited for thought experiments are individually as valid as real ones, so whatever

Matthew Green

@mcc The more hypothetical people you create the better the utilitarian calculus, so you’re doing good.

ZenithAstralis

@mcc me: just having read a book involving pairs of crows who together could very convincingly mimic intelligent thought and speech, yet who were very insistent that they (and everyone else) were in fact just acting intelligent, while not actually being aware whatsoever.
me: "so what if all these crows could talk, and process vast quantities of information, BUT were only interested in shiny objects and bits of food."

Chris [list of emoji]

@mcc

So I read^Wskimmed the Wikipedia page and it seems like if you round any probability below (say) 10**-9 to zero, you stop making these sorts of stupid decisions.
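
A minimal sketch of that rounding rule, with the 10**-9 cutoff named above and made-up probabilities and payoffs: zero out anything below the threshold before computing expected values, and the mugging evaporates while ordinary bets survive.

```python
CUTOFF = 1e-9

def expected_value(outcomes, cutoff=CUTOFF):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes if p >= cutoff)

print(expected_value([(1e-30, 10**100)]))    # 0 -- mugging declined
print(expected_value([(1e-8, 50_000_000)]))  # 0.5 -- ordinary bet survives
```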

Toby Haynes

@mcc Ok - it's not dozens of crows but ...

mausmalone

@mcc I mean, the whole "give me money and there's an extremely small chance I'll make you rich tomorrow" pitch is the premise of the lottery. It's not exactly a far-fetched thought experiment.

mcc

@mausmalone So the thing is, the reason you can form a meaningful expected-value calculation for the lottery is that you actually do know a priori that the government is capable of raising the amounts of money involved. It's not a thought experiment in a vacuum; you actually do have to rely on reasoning that isn't part of the logical/mathematical construction
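
As a concrete illustration of that point, with entirely hypothetical numbers: the lottery's expected value is computable precisely because the odds and the payer's capacity to pay are known from outside the math.

```python
# Hypothetical lottery: 1-in-300-million odds, $150M jackpot, $2 ticket.
p_win, jackpot, ticket = 1 / 300_000_000, 150_000_000, 2

# A knowably bad but *meaningful* bet -- meaningful only because we know,
# from outside the probability framework, that the jackpot can be paid.
print(p_win * jackpot - ticket)  # -1.5
```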

Jesse Miksic

@mcc @mausmalone also, even for a payoff with zero effective probability (i.e. an imaginary payoff), paying five bucks for a lottery ticket is easier to justify than, say, handing over your wallet or killing a baby

Christine Love

@mcc lmao this is the dumbest thing I've ever read in my life. constructing this argument seriously is a damning indictment of the poster but frankly the fact that Less Wrong posters didn't just laugh in his face is an even worse indictment of their entire forum culture

Christine Love

@mcc The instant they started humouring this they should have been legally required to change the name to More Wrong

Karyl :godot: :procreate:

@christinelove @mcc I have gone down this particular rabbit hole now (I blame both of you) and I have no words for the sheer stupidity involved in the creation of these thought experiments

Karyl :godot: :procreate:

@christinelove @mcc Forget mugging Pascal. Can we mug Eliezer Yudkowsky instead

Rylie ✨💖 Pavlik

@mcc I like math just fine, but thought experiments, like their "non-thought" counterparts, are not without potential harm. I feel like some folks overlook this.

phillmv

@mcc but… why would you take the mugger seriously? this is all very silly, is it that easy to mug effective altruists??

mcc

@phillmv right??! Within the context of the logical framework the problem is phrased in, you could frame your objection as "why are you assigning a non-zero probability to the idea that they are telling the truth?" but someone who has no familiarity with the logical/probability tools would still be able to identify "this guy is lying" just using common sense

Dag Ågren ↙︎↙︎↙︎

@mcc @phillmv I mean, you don't even have to invoke the idea of telling the truth or not. At some finite cutoff, the probability of the payoff goes to a big fat zero, just because of physical possibility and living in a finite universe with laws.

mcc

@WAHa_06x36 @phillmv logically it is possible for someone (specifically, Joe Biden and/or Jerome Powell) to give you 10^100 dollars, but also, logically following the consequences of a deterministic universe, if Mr. Powell were to do that, the value (purchasing power) of the dollar would instantly become effectively zero, so it would not be useful to you. And geeeez, this stuff is just so obvious; how does one not immediately identify that the problem is set up incorrectly

Jonas Wisser

@phillmv @mcc I feel like this question sort of implicitly explains an awful lot about the entire FTX situation.

Roy Wiggins

@mcc you do see people Online deploying a sort of weak pascal's mugging to justify nonsense, like "well sure, the odds that it's a REAL perpetual motion machine are very low, but the benefits would be nearly unlimited so we should hear him out."

(doesn't Bayesian reasoning save you from the mugging though? your prior on something like "the mugger will pay me X dollars" can just be much smaller than 1/X for large Xs, so you laugh the mugger off)

mcc

@roywig yes, that's an argument I saw John Baez put forth (that you can resolve the paradox without leaving the Bayesian regime by concluding the probability a person will pay you $N goes down rapidly the higher N is)

However, I am not sure that this resolves my personal, separate criticism of Bayesian reasoning (that it is Calvinball, and highly sensitive to extra-Bayesian reasoning performed before setting up the problem)
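
A minimal sketch of the Baez-style resolution described above, with an illustrative decay law (1/N^2, chosen here only because it shrinks faster than 1/N): the expected value of the mugger's offer falls as the promised amount grows.

```python
def prior_pays(n):
    """Illustrative prior that the mugger can and will pay $n."""
    return 1.0 / n**2  # decays faster than 1/n

for n in (10**2, 10**6, 10**100):
    # Expected value = prior * payoff = 1/n: shrinks as the promise inflates,
    # so the mugger cannot rescue the deal by naming a bigger number.
    print(n, prior_pays(n) * n)
```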

William D. Jones

@mcc I am prone to believing in Pascal's Wagers (software bootstrapping maintenance being the big one). But I've never heard of Pascal's Mugging lol.

Cassandra Granade

@mcc Why am I not surprised to see Nick Bostrom's name in the first para of the history section, and that he's somehow drawing a connection to xrisk?

Ryan Cavanaugh

@mcc they've failed to even solve the problem in their own terms: the odds that the mugger can provide an ever-larger reward have to decrease proportionally to the size of the reward, too. It's nonsensical to hold the "odds of mugger honoring their promise" constant in the face of larger promised rewards

mcc

@SeaRyanC Yes, this was an entirely sensible argument I saw John Baez make

Arthur wyatt

@mcc I’m beginning to see why cryptocurrency worked on these people.

Chumchum Tumtum

@mcc absurd priors, proponents have ties to Longtermism - checks out

Alex Babis :flag_lesbian_new:

@mcc It's easy to laugh at EY's contrived examples, but lots of people fall for Pascal's Wager and Mugging in practice when it takes the form of religious reward and punishment.

Coming up with models for absurdly big ideas makes a lot more sense when you remember those types of ideas hold sway.

johncarneyau

@mcc Oh god. I tried to read that the other day and had to stop because my brain was trying to escape through my left earhole.

Carl Muckenhoupt

@mcc I don't get it. What does this have to do with Bayes?

Darius Kazemi

@mcc lol this is so silly! Although maybe I could make a living shaking down internet rationalists who believe the probability of increasing rewards in the real world tends toward but never reaches zero
