RAOF

@mcc This is going to expose me as someone who's spent too much time looking at that stuff, but I particularly like the one that's “What if there were a magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box or nothing if you do open the first box” and then has a huge convoluted philosophical argument trying to work out how to make “only open the box you know will contain $1,000,000” the “rational” choice.

Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.

mcc

@RAOF or even the contrapositive (did I use that word right?) of that argument: "the magic box literally does not exist in reality, because this is a thought experiment, therefore I can make the argument that it cannot exist"

RAOF

@mcc That is another perfectly sensible option!

Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.

Alex Zani

@mcc @RAOF The hypothetical is clearest with the omniscient being, but all the decision-theory-relevant bits still work if you just have a good predictor. (e.g. a friend who knows you well)

Alex Zani

@mcc @RAOF Like, I don't think paradoxes in decision theory are much more than nerdy puzzles, but the supernatural powers they assume just make the hypothetical a bit cleaner. It's not usually required for the point they're making.
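[The "good predictor" point above can be made concrete with a quick expected-value calculation. This is an editor's sketch, not part of the thread: the $10/$1,000,000 payoffs come from the post above, but the accuracy parameter `p` and the evidential-EV framing are assumptions added for illustration.]

```python
def expected_value(choice: str, p: float) -> float:
    """Evidential expected value of a Newcomb choice, given a predictor
    that correctly anticipates your choice with probability p."""
    small, big = 10, 1_000_000
    if choice == "one-box":
        # With probability p the predictor foresaw one-boxing,
        # so the big prize is in the box.
        return p * big
    if choice == "two-box":
        # With probability p the predictor foresaw two-boxing (only $10);
        # otherwise you get both prizes.
        return p * small + (1 - p) * (big + small)
    raise ValueError(f"unknown choice: {choice}")

# Even a merely decent friend-as-predictor tips the calculation:
for p in (0.5, 0.6, 0.9):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

With these payoffs, one-boxing has the higher expected value whenever p exceeds roughly 0.5 (precisely, p > 1,000,010/2,000,000), so no supernatural accuracy is needed for the puzzle's tension to appear.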

mcc

@AlexandreZani @RAOF Well… no, I think I'd argue the supernatural powers *are* necessary, because if the predictor is like a really good friend then suddenly I have to start asking questions like, *is* there any person on earth to whom I've revealed enough of myself that they can predict how I'd behave in extreme situations, and suddenly I'm judging against "how well do they know me" and not the probabilities the thought experiment is supposed to be about.

mcc

@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… hasn't the thought experiment ultimately shown that the probability function isn't useful?

Because that was my point to start with: if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…

mcc

@AlexandreZani @RAOF …Well, then some of the hard parts get easy!

Avram Grumer

@RAOF @mcc That sounds like Newcomb’s Paradox, and it’s older than Yudkowsky is.

en.wikipedia.org/wiki/Newcomb%
