Have you ever noticed how many canonical "paradoxes" just sort of evaporate if you decline to recognize Bayesian inference as a thing that works
(This might be kind of vague so this is the kind of thing I'm talking about: https://en.wikipedia.org/wiki/Pascal%27s_mugging

A shocking number of problems of this type that make me immediately respond with "why do you think this is a difficult problem?" seem to wind up mentioning Eliezer Yudkowsky when you look into why people are talking about them.)

INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–
ME: No
INTERNET RATIONALIST: What
ME: I am declining to imagine the hyperintelligent artificial intelligence.
INTERNET RATIONALIST:
ME: I'm thinking about birds right now
INTERNET RATIONALIST:
ME: Dozens of crows, perched atop great standing stones

@mcc You know, I already knew Roko's Basilisk was stupid, but for some reason it never occurred to me before now that it's just self-proclaimed rationalists reinventing God and Hell the hard way.

@jwisser The way I first learned about Bayesian reasoning was in the evolution-vs-intelligent-design-arguments Usenet group. Most of the laziest proofs of the existence of God by internet theists leveraged Bayes' theorem, and could be most easily punctured with the sentence "you selected bad priors". Now a couple decades pass and people who know about Bayes' theorem but not theology are re-inventing "God" from first principles for different reasons, but with very similar bad priors

Excuse me, I'm just going to pull this up and out on its own to highlight it:

In a nutshell, Less Wrong is what happens if you learn everything you know about math, physics, ethics, and philosophy from economists.

@mcc ME: why yes, I am the hyperintelligent artificial intelligence
INTERNET RATIONALIST: Um,

@mcc I reached this point with the whole stupid "nuclear bomb that only disarms if you recite a racial slur" thought experiment a few months back. I decline to be part of imagining that is ever plausible or the basis of a serious argument.

@mcc That is another perfectly sensible option!
Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.

@mcc A lot of the issue is also to do with being deliberately sloppy about 'possible'. Like, do you mean modality, and if so which one? Or is it some kind of quantification, and if so what are the details? Or is it a simple predicate? E.g. doing moral reasoning that treats an imagined 10^80 possible intelligences in cyberheaven as real actual present existing people, which is nonsense.

@mcc They have to selectively move between meanings of 'possible' at different parts of their argument, and their deliberate sloppiness (and logically invalid moves) are disguised with a heavy layer of the aesthetic of logic.

@flaviusb @mcc "aesthetic of logic" is a good way to put it because so much of that world is about performing intelligence, believing in and trying to be superhumanly smart kids who alone can stand against the invented enemy, even-superer-humanly smart computers. just a weird sad intellectual limb to have climbed out onto.

@jplebreton @flaviusb @mcc it's not exactly the same, but "aesthetic of logic" reminds me of a similar thing called scientism, which is basically the "aesthetic of science". Philosophers apparently like to accuse each other of being scientistic, but I've found it a useful concept to have in mind to rebut some arguments that are ultimately just appeals to the authority of the arguer's misunderstanding of science.
@matthew_d_green Note it was not a real rationalist, but a hypothetical rationalist I posited for the sake of a thought experiment, but then again as I understand effective altruists believe hypothetical people posited for thought experiments are individually as valid as real ones so whatever

@mcc The more hypothetical people you create the better the utilitarian calculus, so you're doing good.

@mcc me: just having read a book involving pairs of crows who together could very convincingly mimic intelligent thought and speech, yet who were very insistent that they (and everyone else) were in fact just acting intelligent, while not actually being aware whatsoever.

So I read^Wskimmed the Wikipedia page and it seems like if you round any probability below (say) 10**-9 to zero, you stop making these sorts of stupid decisions.

@mcc I mean the whole "give me money and there's an extremely small chance I'll make you rich tomorrow" is the whole premise of the lottery. It's not exactly a farfetched thought experiment.

@mausmalone So the thing is the reason you can form a meaningful expected value calculation when dealing with the lottery is because you actually do a priori know the government is capable of raising the amounts of money involved in the lottery. It's not a thought experiment in a vacuum, you actually do have to rely on reasoning that isn't part of the logical/mathematical construction

@mcc lmao this is the dumbest thing I've ever read in my life.
constructing this argument seriously is a damning indictment of the poster but frankly the fact that Less Wrong posters didn't just laugh in his face is an even worse indictment of their entire forum culture

@mcc The instant they started humouring this they should have been legally required to change the name to More Wrong

@christinelove @mcc I have gone down this particular rabbit hole now (I blame both of you) and I have no words for the sheer stupidity involved in the creation of these thought experiments

@christinelove @mcc Forget mugging Pascal. Can we mug Eliezer Yudkowsky instead

@mcc I like math just fine, but thought experiments, like their "non-thought" counterparts, are not without potential harm. I feel like some folks overlook this.

@phillmv right??! Within the context of the logical framework the problem is phrased in you could phrase your objection as "why are you assigning a non-zero probability to the idea they are telling the truth?" but someone who has no familiarity with the logical/probability tools would still be able to identify "this guy is lying" just using common sense

@WAHa_06x36 @phillmv logically it is possible for someone (specifically, Joe Biden and/or Jerome Powell) to give you 10^100 dollars, but also logically following the consequences of a deterministic universe if Mr. Powell were to do that the value (purchasing power) of the dollar would instantly become effectively zero, so it would not be useful to you, and geeeez this stuff is just so obvious, how does one not immediately identify the problem is set up incorrectly

@mcc you do see people Online deploying a sort of weak pascal's mugging to justify nonsense, like "well sure, the odds that it's a REAL perpetual motion machine are very low, but the benefits would be nearly unlimited so we should hear him out." (doesn't Bayesian reasoning save you from the mugging though?
your prior on something like "the mugger will pay me X dollars" can just be much smaller than 1/X for large Xs, so you laugh the mugger off)

@roywig yes, that's an argument I saw John Baez put forth (that you can resolve the paradox without leaving the Bayesian regime by concluding the probability a person will pay you $N goes down rapidly the higher N is). However, I am not sure that this resolves my personal, separate criticism of Bayesian reasoning (that it is Calvinball, and highly sensitive to extra-Bayesian reasoning performed before setting up the problem)

@mcc I am prone to believing in Pascal's Wagers (software bootstrapping maintenance being the big one). But I've never heard of Pascal's Mugging lol.

@mcc Why am I not surprised to see Nick Bostrom's name in the first para of the history section, and that he's somehow drawing a connection to xrisk?

@mcc they've failed to even solve the problem in their own terms: the odds that the mugger can provide an ever-larger reward have to decrease proportionally to the size of the reward, too. It's nonsensical to hold the "odds of mugger honoring their promise" constant in the face of larger promised rewards

@mcc A favorite saying of mine is "Not everything that's logical is reasonable". Bare logic can work on top of any prior assumption. As for "rationality", I think that term has some baggage. Arguments or world views (or people) are sometimes derided as irrational because they are not aggressive or greedy (...and that may itself indicate ideas like game theory have questionable priors).

@mcc I make a sharp distinction between actual Bayesian inference, which works just fine, and the caricature of it used by the "rationalist" / "effective altruism" / LessWrong contingent. I had a conversation with one such person where he did not seem to understand what a probability distribution function was; but claimed to be doing Bayesian analysis to justify his personal misconceptions about physics & cosmology. He did not react well to my explaining that that is not how any of this works.
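The two quantitative claims in the thread (the decaying-prior resolution attributed to John Baez, and the earlier suggestion to round any probability below 10**-9 to zero) can be sketched in a few lines of Python. Everything here is illustrative: the 1/N**2 prior, the $5 demand, and the 1e-9 cutoff are assumptions chosen for the example, not anything from the thread itself.

```python
def prior_pays(n: float) -> float:
    """Toy prior that the mugger actually pays out n dollars.
    It decays like 1/n**2, i.e. faster than 1/n -- the shape Baez's
    resolution needs. The exponent is an illustrative assumption."""
    return min(1.0, 1.0 / n ** 2)

def expected_gain(promised: float, cost: float = 5.0) -> float:
    """Expected value of handing over `cost` dollars for a promise of `promised`."""
    p = prior_pays(promised)
    if p < 1e-9:  # the thread's rounding heuristic: treat negligible odds as zero
        p = 0.0
    return p * promised - cost

# The promised payoff grows without bound, but the expected gain never
# goes positive, so the mugging is declined every time.
for promised in (10.0, 1e6, 1e100):
    print(promised, expected_gain(promised))
```

With any prior that falls off faster than 1/N, the bribe stays a bad bet no matter how large N gets: the expected gain is pinned at roughly minus the amount handed over, which is the sense in which you get to laugh the mugger off.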
Hmm, so it looks like you started with some absurd priors, and you were able to use them to prove some absurd conclusions. Now you're acting like this is a fundamental challenge to the idea of "rationality" and you've made a Wikipedia page. Seems to me like you just selected some absurd priors. At absolute most, what you've proven is that game theory kind of sucks