@JoeUchill it's funny because it's consuming its own bullshit
@SETSystems then again, better to try and get people to mistrust it now. I showed it to my kid. Critical Thinking.

@SETSystems damn. That abstract is hopeful. Once kids are shown how LLMs are kind of bad with information, they tend to trust them less, quickly.

@lightninhopkins @SETSystems a lie is intentional. It's human. LLMs don't lie. It's just a program. If there is an intentional lie, a person prompted the LLM to do it.

@SETSystems Currently the people creating some of these models are aware that the system has a capacity to "misrepresent itself". All of them know that it does. Maybe not the sales folks. My concerns are less esoteric and more immediate, as LLMs trained on the internet are shoved into Google.

@SETSystems It's kinda funny when you are told to put glue on pizza or cook with gasoline. "Haha, funny LLM." It gets less funny fast when you have a depressed person asking about options. I went off on a tangent there.

@lightninhopkins @JoeUchill I once saw an LLM-generated list of fixes for certain printing problems. The 12-item list was totally fine for the first 6 or so items, only to turn destructively wrong from there on (as in replacing hardware for an obvious software issue). These might even catch experts, since you don't always read the whole answer in detail.
@lightninhopkins @JoeUchill Actually that is quite concerning. Synthetic media feeding back on itself has the potential to magnify distortions. This is going to get people killed.