97 comments
@rotopenguin @nixCraft Or perhaps it's also historically interesting. https://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational

@nixCraft The first few digits are 3.141592653589793 -- so GPT should have said 65358....

@dougmerritt @nixCraft But how was the poor thing to know that the last digit a calculator shows is not the last digit of pi? Don't be so harsh, the little bugger has only just begun learning about the big world...

@dougmerritt @kupac I'm boosting, not because of what you wrote, but because of that hilarious handle: that is brilliant! I'm going to go boost all the posts where it appears.

@potungthul @kupac "You're a gentleman and a scholar", as they apparently used to say. I looked it up to make sure it didn't surprise me by having some negative twist: https://writingtips.org/a-gentleman-and-a-scholar/ Edit: I see in your profile now "(Shamelessly stolen from

@alexraffa @kupac @nixCraft Most research mathematicians think it's likely, but that's not the same as certain, by any means.

@dougmerritt @kupac @nixCraft Yes, it has not been proven; it's fun to think every number is contained in pi somewhere .. 👍

@alexraffa @kupac @dougmerritt @nixCraft I would assume that every (finite) sequence of digits appears in pi an infinite number of times: the proof is left as an exercise.

@gam3 @alexraffa @kupac @nixCraft As I already said, the field of study where personal opinion is superior to provable facts is modern politics, not mathematics.

@dougmerritt @nixCraft Or if you give it a little more credit, perhaps it knows pi to 245962 decimal places! [2-panel comic]

@nixCraft It's now 53592. I spent hours at the calculator and even double-checked it.

@tebrown @nixCraft This now makes sense of some of the response: https://en.m.wikipedia.org/wiki/Chronology_of_computation_of_%CF%80

@mordoc @tebrown @nixCraft [table: rounded digits, error, rounded with next digit]

@nixCraft 🤣 What does the AI say about other constants, such as e (2.718...) or the square root / natural logarithm of 2?

@nixCraft Petition to change ChatGPT's value of pi to have 38 decimal places so it claims the last three digits of pi are 420.

Hopefully this gets through to at least the maths crowd about how utterly crap and lacking in awareness of basic facts AI/LLMs are. This isn't a fake screenshot; it really does give this as the answer.

@FediThing @nixCraft Version? There are a lot of people using old / garbage versions of these LLMs for laughs. Claude: "The last 5 digits of pi are not known, as pi is an irrational number with an infinite number of non-repeating digits after the decimal point. Pi has been calculated to over 100 trillion digits, but there is no "end" to the sequence of digits. The idea of there being "last" digits of pi is a misconception." I'm not a strong AI proponent, but the commentary is wrongheaded.

@moirearty @FediThing @nixCraft First, it's just fun. This isn't an academic paper. And ChatGPT is a product; it's OK to dunk on products that were launched to the public in shitty form. This particular thing has been "fixed" in ChatGPT 4. The open question is, "What hasn't been fixed, because it's not popular enough for people to have discovered the error yet?"

@raganwald @moirearty @nixCraft So no, the commentary is not at all wrongheaded. The commentary is totally apt. ChatGPT literally does not know what it is talking about, and we need to stop treating it like it has any value beyond entertainment or niche studies of linguistics.

@FediThing @raganwald @nixCraft I agree from one point of view: generative AI should not be getting the insane funding and roll-out / shoved down everyone's throat that it is, exactly because of what you're saying. I'm not a proponent of this technology overall; I think it's a useful tool in limited circumstances alone, and any march toward "AGI" using similar technology is absolutely a lie. They're more or less hoping they figure something out.
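An aside on the earlier question of whether every finite digit string appears in pi: that is the (still unproven) conjecture that pi is normal. If you wanted to actually search pi's digits rather than ask an LLM, a minimal sketch in Python using a streaming spigot generator (after Gibbons; the function name and digit count here are just illustrative choices):

```python
def pi_digits():
    """Yield decimal digits of pi one at a time (streaming spigot, after Gibbons)."""
    q, r, t, j = 1, 180, 60, 2
    while True:
        u, y = 3 * (3 * j + 1) * (3 * j + 2), (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = 10 * q * j * (2 * j - 1), 10 * u * (q * (5 * j - 2) + r - y * t), t * u, j + 1

from itertools import islice

# Take the first 1000 digits and search them as a string.
first_1000 = "".join(str(d) for d in islice(pi_digits(), 1000))
print(first_1000[:16])          # 3141592653589793
print(first_1000.find("9265"))  # 5: "9265" first appears at the 6th digit
```

The integers q, r, t grow without bound, so this is exact (no floating-point rounding), just slow; it only demonstrates the search, not the conjecture.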
@FediThing @raganwald @nixCraft However, people also think a manual correction was made, which I'm reasonably sure is not true. The tech, as oversold as it is, did get better. There are limited but useful reasoning capabilities; it is not just a "stochastic parrot" as the early versions absolutely were, etc. IMO some technologists won't keep up with this field because they wrote it off (for good reason). I'm not arguing in favor of a business case, but we will be dealing with it for years.

@FediThing @raganwald @nixCraft From a product standpoint I am in full agreement and think these companies should be rightly criticized. From a computer science point of view, I believe some of the smartest folks are falling into the trap of "the thing I looked at previously was horrible, so I've written it off entirely" and will be dogmatic about it to the detriment of their own field. There is an area between "AGI" (which is BS until breakthroughs TBD) and where we are now that will be useful.

@moirearty @FediThing @nixCraft Tons of useful applications for ANI today, just as the Newton and the Macintosh 128K had use cases.

@moirearty @FediThing @nixCraft What we know is that there is a way to report errors, and they do use the error reports to guide workers who train the model. There is also the possibility that the model itself has improved in a way that corrects this error without needing humans to focus training on it. Either way, this seems like a product from a company that is asking the world to beta-test it in production, and simultaneously, it's a product where we cannot have a "complete test suite."

@nixCraft Who would have thought that Stephen Colbert would be the one to sum up the core of future technology so well?

@nixCraft@mastodon.social What version of ChatGPT are you using? 'Cause it's not what I get... 🤔

If we all ask it to calculate pi, could we knock it offline? Assuming no, because it will just generate some plausible-looking series of numbers rather than actually calculating 22/7.

@nixCraft That's just wrong; everyone worth their salt knows that the last 5 digits of pi are 42.

@nixCraft "And if our transcendental lift should find a final floor / Then man will know the death of God where wonder was before"

At last, AI has solved it. Take that, brainiacs! I, for one, welcome our new AI overlords. 😉

@nixCraft I know it is a joke, but... come on! GPT-4 Turbo would never make that mistake. Neither would GPT-4o (the new model that now even unpaid customers can use). To get that answer you must choose GPT-3.5, which is now #29 on the LMSYS benchmark. Go with Llama 3 70B. You can even install it on your laptop if you have a good one.
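For context on the calculator and 22/7 jokes above, a quick Python sketch (illustrative only): 22/7 is a rough approximation of pi, and the "last" digit a float or pocket calculator shows is just where the display stops, not where pi ends.

```python
import math

# 22/7 only matches pi to two decimal places.
print(22 / 7)                  # 3.142857142857143
print(abs(22 / 7 - math.pi))   # error of about 0.00126

# A 64-bit float carries roughly 15-17 significant digits, so the last
# five *displayed* digits of math.pi are simply where the repr stops.
print(math.pi)                 # 3.141592653589793
print(str(math.pi)[-5:])       # 89793
```

None of which, of course, gets you any closer to the "last 5 digits of pi".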
@nixCraft ChatGPT: "I said what I said."