Tell me again how #GenAI will extract meaningful trends from and answer queries about your data set.
Lars Marowsky-Brée 😷, and for this reason I won't understand for the life of me how someone could seriously use an LLM as a tool. Or instead of a proper search engine.

@grishka They're tools for when you need an answer that might not be fully correct, e.g. brainstorming, rubber-ducking, or even quite a few translations.

This also makes perfect sense, because context matters - and once it has generated a wrong answer, it is human enough to double down on it! The singularity is near!

You've got to ask it "nicely" right from the start. Don't embarrass it! I AM A PROMPT ENGINEER

@pitch I think it's the best thing that ever happened to software engineers; suddenly no one makes fun of *us* anymore for not being an actual engineering discipline.

@larsmb I will still ❤️ Promised 😉

Even though I am officially credited as a software engineer in multiple projects, I think there are ways to be a software engineer; it's just that most programmers are not even developers, and are far from being software engineers. Merely writing stupid code is by no means an engineering feat, but systematically designing software, evaluating different approaches, and laying out an efficient order of operations can be an engineering process.

@larsmb Don't worry, these issues just keep getting fixed quickly after being reported and the product keeps improving... or does it? https://community.openai.com/t/incorrect-count-of-r-characters-in-the-word-strawberry/829618

@FurryBeta I mean, as an engineer, I spent a lot of my time arguing with sales/CxOs over factually true statements, so...

Yeah, it does seem as though the best application for this kind of tech is not replacing programmers, but replacing corporate bullshitters. Bullshit is the only thing this machine is capable of, and it's very, very good at it. So good at bullshit, in fact, that it's already convinced all of the human corporate types that it's the best thing money can buy!

@fartnuggets @larsmb @msbw It's all yours. I still have hope that open-sourced AI agents will be useful, but I'm personally done with trying to wrangle the big commercial LLMs into anything useful. I've yet to come across a real-world problem I can't solve quicker with a Jupyter notebook and a couple of Python libraries.

@fartnuggets @larsmb @msbw I'm hoping for a future where a trusted on-device agent can basically act as a personal assistant. I think it needs some ability to learn and make decisions, but not this weird "boil the ocean" strategy behind LLMs. Kinda reminds me of robotics in the early '90s: after decades of failed top-down approaches, we finally found huge success with drastically simpler, bottom-up ensemble approaches exemplified by the Genghis family.

@larsmb It feels like there were definitely some Monty Python skits in the training data.

@nini See the update: the most human thing it does is double down on a wrong answer.

So, this exchange burned way more wattage than a simple letter-counting algorithm would have (see the sketch below), and it still gave a blatantly incorrect answer. "AI" is going just great.

@argv_minus_one @larsmb @samhainnight Is it me, or does the AI also have the tone of someone on Reddit who is sure they are *very right*?

@larsmb Yeah, this tech DEFINITELY is worth all the resources it gobbles up. We have PLENTY of spare water and power.

@Crystal_Fish_Caves Exactly! It clearly should be the top priority for all businesses and politicians; it is *the best*.

@larsmb It's like trying to convince a conservative or centrist of literally anything involving scientific evidence!
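For comparison with the wattage point above, the "simple letter-counting algorithm" really is trivial. A minimal Python sketch (the helper name count_letter is purely illustrative):

    # Count how often a letter appears in a word, ignoring case.
    def count_letter(word: str, letter: str) -> int:
        return word.lower().count(letter.lower())

    print(count_letter("strawberry", "r"))  # prints 3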
@RealGene @larsmb Likely it just means it makes something up, as there is unlikely to be data for every number/word combination, and they are also very similar. Sometimes it will actually use Python code in the background, which gets the right answer. (Which in other cases has hilarious results: if you ask ChatGPT (free version) to draw a sketch of something, it will create Python code to draw various lines and circles and show you the output, which in no way resembles anything; see the sketch below.)

@larsmb This post may include irony.

@larsmb @vampirdaddy The (supposed) idea behind them, though, is that with enough context and tokens, they can infer "some" logic from the language encoded in their models. And it _might_ even work one day, but it ... definitely doesn't yet.

@larsmb Their programming does not allow anything else.

@vampirdaddy The idea seems to be that the very large data set allows them to encode a certain level of "reasoning and understanding", and thus correctly predict the next words given the current context. That ... might even work eventually. The point is that even one of the currently largest and most advanced models can't do it (yet?) for a rather trivial task. But please don't reply with very basic fundamentals as one-liners, which comes across as somewhat condescending :) Thanks!

@larsmb The current models have presumably ingested >90% of all internet-available text, so the additional order of magnitude presumably needed simply won't ever exist. Plus, as the algorithm only picks probable next words, it won't deduce. It won't learn either, as neural nets usually have to be (more or less) completely re-trained for each "learning" step, still without understanding.

@larsmb Yes, LLMs make you sorely aware of the sloppiness of our speech. I suspect ChatGPT is confused because there is a "double r" in "strawberry", and the LLM correctly associates "double" with "two". A human might also tell you to write two R's in strawberry, intending to warn you about the double R in "berry". I think this LLM works as designed. Sadly, some people want to ignore what LLMs (and natural languages) actually are.

@ptesarik I'm pretty sure it works "as designed" (as much as anyone actually understands how to "design" LLMs), but probably not as intended.

@chbmeyer Sure, but for me, understanding and experiencing what these systems can (not) (yet or ever) do is part of my job.

@larsmb …this sounded like a genuine argument with a real-life gammon rather than an AI.

@larsmb Yikes, and so easily reproducible, too. The explanation that was generated for me is also top tier.

@oliversampson It's how most of social media still feels about #Covid19, climate collapse, the rise of the right, ...

@larsmb This looks more like YouTube comments (which is where it "learns" from).

@larsmb The robot judge has sentenced all letters of the word to be carried out consecutively, except for one of the R's in 'berry' and the R in 'straw', which are to be carried out concurrently.

@larsmb Why produce so much CO2 and use precious water for such unreadable junk?

@larsmb And 3 will become 2.

@larsmb Don't blame the AI for the fact that you didn't ask the question in a way that makes 42 a sensible answer ... 🙈

@larsmb Initially you asked for the number of "r"s without giving a scope. Phonetically it only has two. Without a spelling scope defined, you are both right. However, testing this theory failed as hard as it possibly could. 🤦

@larsmb I can see why techbros and billionaires love this shit. It is obstinately and determinedly convinced it is correct even in the face of all that contrary evidence.
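To illustrate the "lines and circles" behaviour described in the first reply above, the generated drawing code might look roughly like the following. This is a hypothetical sketch, not actual ChatGPT output; the shapes and labels are assumptions made for illustration:

    # Hypothetical example of generated "sketch" code: a few circles and lines
    # that are supposed to depict something, but barely resemble anything.
    import matplotlib.pyplot as plt
    from matplotlib.patches import Circle

    fig, ax = plt.subplots()
    ax.add_patch(Circle((0.5, 0.7), 0.15, fill=False))  # "head"
    ax.plot([0.5, 0.5], [0.55, 0.2], color="black")      # "body"
    ax.plot([0.3, 0.7], [0.45, 0.45], color="black")     # "arms"
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()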
@larsmb These LLMs can't see individual letters of common words. That's probably the main reason why they can't always count them correctly. This tool visualizes how OpenAI's models see text: https://platform.openai.com/tokenizer (a short tokenization sketch follows below). But being sometimes wrong wouldn't be that much of a problem if these models weren't trained pretty explicitly to just deceive. Fake it until you make a superhuman bullshitter.

@larsmb If the people who train these models were honestly trying to make something that values truth over impressive marketing, their LLMs would avoid using even language that suggests they may have agency, identity, the ability to reflect, self-consciousness, etc. Unless they can prove that they have.

@larsmb That's exactly my pain with the "AI helpers" I have to work with. Basically, I use them to create Markdown tables. That's it. Everything else would create more work for me.
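As a rough illustration of the tokenizer point above, here is a minimal sketch using OpenAI's tiktoken package. The choice of the cl100k_base encoding is an assumption (other encodings split the word differently), and the split shown in the comment is only an example:

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding, for illustration
    tokens = enc.encode("strawberry")

    # Print the multi-letter chunks the model actually operates on,
    # rather than individual letters.
    print([enc.decode_single_token_bytes(t) for t in tokens])
    # e.g. [b'str', b'aw', b'berry'] -- letter boundaries are not directly
    # visible to the model, so "how many r's?" is not a simple lookup for it.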
I can also see this going great for coding; programming languages and computers are known to be very forgiving and tolerant.