mcc

Have you ever noticed how many canonical "paradoxes" just sort of evaporate if you decline to recognize Bayesian inference as a thing that works

78 comments
mcc

Hmm so it looks like you started with some absurd priors and you were able to use them to prove some absurd conclusions. Now you're acting like this is a fundamental challenge to the idea of "rationality" and you've made a wikipedia page. Seems to me like you just selected some absurd priors. At absolute most what you've proven is that game theory kind of sucks

mcc

(This might be kind of vague so this is the kind of thing I'm talking about: en.wikipedia.org/wiki/Pascal%2 A shocking number of problems of this type that make me immediately respond with "why do you think this is a difficult problem?" seem to wind up mentioning Eliezer Yudkowsky when you look into why people are talking about them.)

mcc

INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–

ME: No

INTERNET RATIONALIST: What

ME: I am declining to imagine the hyperintelligent artificial intelligence.

INTERNET RATIONALIST:

ME: I'm thinking about birds right now

INTERNET RATIONALIST:

ME: Dozens of crows, perched atop great standing stones

Jonas Wisser

@mcc You know, I already knew Roko's Basilisk was stupid, but for some reason it never occurred to me before now that it's just self-proclaimed rationalists reinventing God and Hell the hard way.

mcc

@jwisser The way I first learned about Bayesian reasoning was in the evolution-vs-intelligent-design-arguments Usenet group. Most of the laziest proofs of the existence of God by internet theists leveraged Bayes' theorem, and could be most easily punctured with the sentence "you selected bad priors". Now a couple decades pass and people who know about Bayes' theorem but not theology are re-inventing "God" from first principles for different reasons, but with very similar bad priors

Luci for dyeing

@mcc @jwisser apparently elon musk and grimes met because they both made roko’s basilisk jokes on twitter

kinda feelin like there’s a roko’s basilisk joke in that somewhere

Rob Hafner :verified_flashing:

@jwisser @mcc oh oh this also applies to the "we're a simulation" people too!

A higher power (literally from a higher dimension) created us in their image, as above so below, and all the other religious fun but with math!

mcc

@tedivm @jwisser right. You can make a probabilistic argument we are almost certainly in a simulation by simply defining a sufficiently arbitrary ensemble

The Banished Dragon Alephwyr

@jwisser @mcc The two concepts don't even make rational sense to conjoin.

chronos

@mcc In my opinion, the real problem with this and other such scenarios really boils down to: just because a "rationalist" can imagine it doesn't mean it's logically consistent with reality.

The most obvious case of this is whenever EY goes on (and on and on, at length) about Newcomb's paradox, which is only possible if you assume that hyperintelligent omniscient beings which can solve the Halting Problem in O(1) time are possible. So much of the Less Wrong dreck falls apart if you know... like... anything about the subjects that EY thinks he's talking about.

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

rellik moo

@chronos

Excuse me, I'm just going to pull this up and out on its own to highlight it:

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

@mcc

Chris Silverman 🌻

@mcc ME: why yes, I am the hyperintelligent artificial intelligence

INTERNET RATIONALIST: Um,

vulp

@mcc personally I would solve AI alignment by asking the malicious genie for three extra wishes

Jacob Harris

@mcc I reached this point with the whole stupid “nuclear bomb that only disarms if you recite a racial slur” thought experiment a few months back. I decline to be part of imagining that that is ever plausible or the basis of a serious argument.

RAOF

@mcc This is going to expose me as someone that's spent too much time looking at that stuff, but I particularly like the one that's “What if there was this magical superintelligence that leaves people with two boxes, one of which always contains $10 the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box but nothing if you do open the first box” and then has a huge convoluted philosophical argument trying to work out how to make “only open the box you know will contain $1,000,000” the “rational” choice.

Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.

mcc

@RAOF or even the contrapositive (? Did I use that word right) of that argument, "the magic box literally does not exist in reality, because this is a thought experiment, therefore I can make the argument it cannot exist"

RAOF

@mcc That is another perfectly sensible option!

Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.

Alex Zani

@mcc @RAOF The hypothetical is clearest with the omniscient being, but all the decision-theory-relevant bits still work if you just have a good predictor. (e.g. a friend who knows you well)

Alex Zani

@mcc @RAOF Like, I don't think paradoxes in decision theory are much more than nerdy puzzles, but the supernatural powers they assume just make the hypothetical a bit cleaner. It's not usually required for the point they're making.

серафими многоꙮчитїи

@mcc A lot of the issue is also to do with being deliberately sloppy about 'possible' like, do you mean modality, and if so which one? Or is it some kind of quantification, and if so what are the details? Or is it a simple predicate? - eg doing moral reasoning that treats an imagined 10^80 possible intelligences in cyberheaven as real actual present existing people, which is nonsense.

серафими многоꙮчитїи

@mcc They have to selectively move between meanings of 'possible' at different parts of their argument and their deliberate sloppiness (and logically invalid moves) are disguised with a heavy layer of the aesthetic of logic.

JP

@flaviusb @mcc "aesthetic of logic" is a good way to put it because so much of that world is about performing intelligence, believing in and trying to be superhumanly smart kids who alone can stand against the invented enemy, even-superer-humanly smart computers. just a weird sad intellectual limb to have climbed out onto.

Someone tell me to get up

@jplebreton @flaviusb @mcc it's not exactly the same, but "aesthetic of logic" reminds me of a similar thing called scientism, which is basically the "aesthetic of science". Philosophers apparently like to accuse each other of being scientistic, but I've found it a useful concept to have in mind to rebut some arguments that are ultimately just appeals to the authority of the arguer's misunderstanding of science.

Ryusui

@flaviusb @mcc i'm having visions of one of these dingbats trying their "logical thinking" cosplay act and Mr. Spock walking up and slapping them

🍁Maple🍁

@mcc

An old man in a white traveling cloak with vermillion tunic carries a pole over one shoulder to which is tied sheaves of grain. In his free hand is a little iron sickle, and as he walks along a dirt path, two little foxes prance about his heels. One black and one white. One with a key and the other with a gourd full of wine.

Matthew Green

@mcc You are the first person in history who got a rationalist to stop talking.

mcc

@matthew_d_green Note it was not a real rationalist, but a hypothetical rationalist I posited for the sake of a thought experiment. But then again, as I understand it, effective altruists believe hypothetical people posited for thought experiments are individually as valid as real ones, so whatever

Matthew Green

@mcc The more hypothetical people you create the better the utilitarian calculus, so you’re doing good.

ZenithAstralis

@mcc me: just having read a book involving pairs of crows who together could very convincingly mimic intelligent thought and speech, yet who were very insistent that they (and everyone else) were in fact just acting intelligent, while not actually being aware whatsoever.
me: "so what if all these crows could talk, and process vast quantities of information, BUT were only interested in shiny objects and bits of food."

Chris [list of emoji]

@mcc

So I read^Wskimmed the Wikipedia page and it seems like if you round any probability below (say) 10**-9 to zero, you stop making these sorts of stupid decisions.
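The rounding heuristic above can be sketched in a few lines of Python (the probabilities, payoff, and cutoff here are hypothetical numbers, not taken from the thread):

```python
# The "round tiny probabilities to zero" heuristic (illustrative numbers only).
CUTOFF = 1e-9

def expected_value(p, payoff, cost):
    """Expected value of a wager, treating probabilities below CUTOFF as exactly zero."""
    p = 0.0 if p < CUTOFF else p
    return p * payoff - cost

# A mugger promises $10**100 with a (claimed) probability of 10**-50, for a $5 wager.
naive = 1e-50 * 1e100 - 5                      # astronomically positive: "pay the mugger"
rounded = expected_value(1e-50, 1e100, 5)      # -5.0: with the cutoff, keep your $5

print(naive > 0)   # True
print(rounded)     # -5.0
```

The point of the sketch: without a cutoff, an arbitrarily tiny probability can always be swamped by an arbitrarily huge promised payoff; the cutoff breaks that lever.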

mausmalone

@mcc I mean the whole "give me money and there's an extremely small chance I'll make you rich tomorrow" is the whole premise of the lottery. It's not exactly a farfetched thought experiment.

mcc

@mausmalone So the thing is the reason you can form a meaningful expected value calculation when dealing with the lottery is because you actually do a priori know the government is capable of raising the amounts of money involved in the lottery. It's not a thought experiment in a vacuum, you actually do have to rely on reasoning that isn't part of the logical/mathematical construction

Christine Love

@mcc lmao this is the dumbest thing I've ever read in my life. constructing this argument seriously is a damning indictment of the poster but frankly the fact that Less Wrong posters didn't just laugh in his face is an even worse indictment of their entire forum culture

Christine Love

@mcc The instant they started humouring this they should have been legally required to change the name to More Wrong

Karyl :godot: :procreate:

@christinelove @mcc I have gone down this particular rabbit hole now (I blame both of you) and I have no words for the sheer stupidity involved in the creation of these thought experiments

Karyl :godot: :procreate:

@christinelove @mcc Forget mugging Pascal. Can we mug Eliezer Yudkowsky instead

Rylie ✨💖 Pavlik

@mcc I like math just fine, but thought experiments, like their "non-thought" counterparts, are not without potential harm. I feel like some folks overlook this.

phillmv

@mcc but… why would you take the mugger seriously? this is all very silly, is it that easy to mug effective altruists??

mcc

@phillmv right??! Within the context of the logical framework the problem is phrased in you could phrase your objection as "why are you assigning a non-zero probability to the idea they are telling the truth?" but someone who has no familiarity with the logical/probability tools would still be able to identify "this guy is lying" just using common sense

Dag Ågren ↙︎↙︎↙︎

@mcc @phillmv I mean, you don't even have to invoke the idea of telling the truth or not. At some finite cutoff, the probability of the payoff goes to a big fat zero, just because of physical possibility and living in a finite universe with laws.

mcc

@WAHa_06x36 @phillmv logically it is possible for someone (specifically, Joe Biden and/or Jerome Powell) to give you 10^100 dollars, but also logically following the consequences of a deterministic universe if Mr. Powell were to do that the value (purchasing power) of the dollar would instantly become effectively zero, so it would not be useful to you, and geeeez this stuff is just so obvious, how does one not immediately identify the problem is set up incorrectly

Jonas Wisser

@phillmv @mcc I feel like this question sort of implicitly explains an awful lot about the entire FTX situation.

Roy Wiggins

@mcc you do see people Online deploying a sort of weak pascal's mugging to justify nonsense, like "well sure, the odds that it's a REAL perpetual motion machine are very low, but the benefits would be nearly unlimited so we should hear him out."

(doesn't Bayesian reasoning save you from the mugging though? your prior on something like "the mugger will pay me X dollars" can just be much smaller than 1/X for large Xs, so you laugh the mugger off)

mcc

@roywig yes, that's an argument I saw John Baez put forth (that you can resolve the paradox without leaving the Bayesian regime by concluding the probability a person will pay you $N goes down rapidly the higher N is)

However, I am not sure that this resolves my personal, separate criticism of Bayesian reasoning (that it is Calvinball, and highly sensitive to extra-Bayesian reasoning performed before setting up the problem)
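The Baez-style resolution mentioned above can be sketched numerically. The prior shape below (probability of being paid $N falling off as 1/N²) is an assumption chosen for illustration; the argument only needs the prior to fall faster than 1/N:

```python
# Sketch: if P(mugger actually pays $n) shrinks faster than 1/n,
# the expected payoff of the mugging stays bounded -- and here, shrinks.
def expected_payoff(n, k=1.0):
    # Hypothetical prior: P(paid $n) = k / n**2
    prior = k / n**2
    return prior * n          # = k / n: a bigger promise is *less* tempting

print(expected_payoff(10))        # 0.1
print(expected_payoff(10**100))   # ~1e-100
```

So within this choice of prior, the mugger cannot win by inflating the promise; but as the post notes, the choice of prior shape itself is reasoning done before Bayes' theorem ever gets involved.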

William D. Jones

@mcc I am prone to believing in Pascal's Wagers (software bootstrapping maintenance being the big one). But I've never heard of Pascal's Mugging lol.

Cassandra Granade

@mcc Why am I not surprised to see Nick Bostrom's name in the first para of the history section, and that he's somehow drawing a connection to xrisk?

Ryan Cavanaugh

@mcc they've failed to even solve the problem in their own terms: the odds that the mugger can provide an ever-larger reward have to decrease proportionally to the size of the reward, too. It's nonsensical to hold the "odds of mugger honoring their promise" constant in the face of larger promised rewards

mcc

@SeaRyanC Yes, this was an entirely sensible argument I saw John Baez make

Arthur wyatt

@mcc I’m beginning to see why cryptocurrency worked on these people.

Chumchum Tumtum

@mcc absurd priors, proponents have ties to Longtermism - checks out

IoT is the grey goo

@mcc A favorite saying of mine is "Not everything that's logical is reasonable". Bare logic can work on top of any prior assumption.

As for "rationality" I think that term has some baggage. Arguments or world views (or people) are sometimes derided as irrational because they are not aggressive or greedy (...and that may itself indicate ideas like game theory have questionable priors).

Michael Busch

@mcc I make a sharp distinction between actual Bayesian inference, which works just fine, and the caricature of it used by the "rationalist" / "effective altruism" / LessWrong contingent.

I had a conversation with one such person where he did not seem to understand what a probability distribution function was; but claimed to be doing Bayesian analysis to justify his personal misconceptions about physics & cosmology.

He did not react well to my explaining that that is not how any of this works.
