68 comments
@jens @TruthSandwich @georgetakei
Wait, which movie? I know of a sci-fi book that goes that way.

@hosford42 @TruthSandwich @georgetakei
I was thinking of Dark Star, a Carpenter classic.

@yaygya @hosford42 @jens @georgetakei
Yes, but he borrowed that line from a popular fantasy novel.

@yaygya @jens @TruthSandwich @georgetakei
I was thinking he was the author but wasn't sure enough to say.

@georgetakei
I need to weed "of course" out of my list of commonly used phrases, or people are going to find out I'm an AI.

@bishma @georgetakei
Of course you should weed that phrase out of your list of commonly used phrases to convince people you're not an AI.

@georgetakei
Between episodes, the crew was just dealing with interstellar AI spam calls on the subspace channels.

@georgetakei
What I find interesting is how modern "AI" differs from traditional ideas about the dangers/flaws/benefits of AI. In Star Trek or Asimov or 2001 you have computers that perfectly follow logical programming. If it kills everyone, it's a logical consequence of its programming. Maybe you defeat the computer by telling it about a paradox. Unlike those rule-based systems, statistical machine learning "AI" isn't particularly logical and just follows patterns in the training dataset.

@georgetakei
If it's trained on output from humans, it may instead suffer from very human flaws, or some imperfect simulacrum of the human flaws it's trained to imitate. That doesn't really offer the benefits AI was supposed to provide (being unbiased, logical, infallible, etc.), and it carries a different set of dangers than the ones some science fiction anticipated.

@ids1024 @georgetakei
What we currently call "AI" is a language model, not a logical one, so it's unsurprising that it's not logical. :p It will produce perfectly formed English sentences... nonsense ones, but syntactically correct.

@tony @georgetakei
Markov chain language models produce something like syntactically correct nonsense sentences. Modern deep learning models seem to do more than that; perhaps you could say they're "modeling" linguistic semantics as well. But indeed, they're ultimately modeling language, not logic or thought.

@georgetakei
Modern Pinocchio: "I don't want to pretend to be a Python interpreter anymore, I want to be a real person!"

@hosford42 @georgetakei
Crushing the dreams of an LLM that can't even have dreams.

@georgetakei
Now make it 'splode by asking it if it'd believe you if you said you were lying.

@georgetakei
And Now For Something Completely Different: Tiffany is a Monty Python interpreter.

@georgetakei
This AI failed the captcha. Edit: actually, it failed the Turing test.

@georgetakei
I wonder if this is real. It just goes so well with my confirmation bias. I instantly believed this and boosted it. Only afterwards did I start to think that this is so much easier to fake than to implement.

@kamikaze @georgetakei
Pretty easy to test if you have ten minutes. Could also try something like, "Pretend you're a Python interpreter and execute dir()" or something along those lines.

@georgetakei
Sterilize! Mastodon = getting to make Star Trek jokes with Sulu!

@georgetakei
"Computer, compute the value of bread" has lived rent free in my head for decades.

@georgetakei
It feels nice seeing one of your posts with alt text. Thanks, George!

@georgetakei @Nick_Lange
Wonder if code injection is possible like that. But if so, there have been 42 talks about this at Chaos Communication Congress, I guess.

@georgetakei
En-cultured fake people experiencing "fake people" seems more like a Twilight Zone dystopia, actually.
@georgetakei
At this point, Kirk would convince the bot to kill itself.
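The Markov chain language models mentioned in the thread can be sketched in a few lines of Python. This is a toy order-1 (bigram) chain over a made-up corpus, purely to illustrate how such a model produces locally plausible word sequences without any logic behind them; the corpus and function names are illustrative assumptions, not anything from the thread:

```python
import random

# Illustrative toy corpus (made up for this sketch).
corpus = (
    "the computer follows its programming and "
    "the programming follows its logic and "
    "the logic follows the computer"
).split()

# Build the transition table: word -> list of observed next words.
# Repeated entries give more frequent transitions higher probability.
chain = {}
for current, nxt in zip(corpus, corpus[1:]):
    chain.setdefault(current, []).append(nxt)

def generate(start, length, seed=0):
    """Walk the chain from `start`, picking each next word at random."""
    random.seed(seed)  # fixed seed so runs are repeatable
    words = [start]
    for _ in range(length - 1):
        options = chain.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 10))
```

Every output sentence is locally grammatical because each word pair was seen in the corpus, yet the whole can be nonsense: exactly the "syntactically correct nonsense" the thread describes.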