raganwald 🍓

@moirearty @FediThing @nixCraft

First, it’s just fun. This isn’t an academic paper. And ChatGPT is a product; it’s ok to dunk on products that were launched to the public in shitty form.

This particular thing has been “fixed” in ChatGPT 4. The open question is, “What hasn’t been fixed, because it’s not popular enough for people to have discovered the error yet?”

FediThing 🏳️‍🌈

@raganwald @moirearty @nixCraft

I tested it on an actual deployed version of ChatGPT currently available to the public on a major mainstream website (I'm not giving it free publicity by linking though). It still gives this result.

And yes, the main point isn't whether this particular thing has been fixed, but the fact that the makers had to make this correction, presumably manually.

AI/LLM is being irresponsibly mis-sold as a replacement for expertise, when all it does is bullshit about stuff it has heard about but has no actual knowledge of. It's going to corrupt and degrade society if we start relying on it for information.

FediThing 🏳️‍🌈

@raganwald @moirearty @nixCraft

So no, the commentary is not at all wrongheaded. The commentary is totally apt.

ChatGPT literally does not know what it is talking about, and we need to stop treating it like it has any value beyond entertainment or niche studies of linguistics.

moirearty

@FediThing @raganwald @nixCraft I agree from one point of view on this: Generative AI should not be getting the insane funding and roll-out, shoved down everyone’s throat the way it is, exactly because of what you’re saying.

I’m not a proponent of this technology overall. I think it’s a useful tool only in limited circumstances, and that any march toward “AGI” using similar technology is absolutely a lie; they’re more or less hoping they figure something out.

moirearty

@FediThing @raganwald @nixCraft However, people also think a manual correction was made, which I’m reasonably sure is not true.

The tech, as oversold as it is, did get better. There are limited but useful reasoning capabilities; it is not just a “stochastic parrot” the way the early versions absolutely were.

IMO some technologists won’t keep up with this field because they wrote it off (for good reason). I’m not arguing in favor of a business case, but we will be dealing with it for years.

moirearty

@FediThing @raganwald @nixCraft from a product standpoint I am in full agreement and think these companies should be rightly criticized.

From a Computer Science point of view, I believe some of the smartest folks are falling into the trap of “the thing I looked at previously was horrible, so I’ve written it off entirely” and will be dogmatic about it to the detriment of their own field.

There is an area between “AGI” (which is bs until breakthroughs tbd) and where we are now that will be useful.

raganwald 🍓

@moirearty @FediThing @nixCraft

Tons of useful applications for ANI today, just as the Newton and the Macintosh 128K had their use cases.

raganwald 🍓

@moirearty @FediThing @nixCraft

What we know is that there is a way to report errors, and they do use the error reports to guide workers who train the model.

There is also the possibility that the model itself has improved in a way that corrects this error without needing humans to focus training on it.

Either way, this seems like a product from a company that is asking the world to beta-test it in production, and simultaneously, it’s a product where we cannot have a “complete test suite.”
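As an aside, here is a minimal sketch of the kind of feedback loop described above: flagged answers go into a review queue, human trainers write corrections, and the corrections become fine-tuning pairs. All names and structure are hypothetical illustrations, not OpenAI’s actual pipeline.

```python
# Hypothetical sketch of an error-report -> human-review -> fine-tuning-data flow.
# Illustrative only; not a description of any vendor's real system.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ErrorReport:
    prompt: str            # what the user asked
    model_answer: str      # what the model said
    user_note: str         # why the user flagged the answer


@dataclass
class ReviewedExample:
    prompt: str
    corrected_answer: str  # answer written or approved by a human trainer


@dataclass
class ReviewQueue:
    pending: List[ErrorReport] = field(default_factory=list)
    reviewed: List[ReviewedExample] = field(default_factory=list)

    def report(self, report: ErrorReport) -> None:
        """A user flags a bad answer; it lands in the queue."""
        self.pending.append(report)

    def review(self, corrected_answer: Optional[str]) -> None:
        """A human trainer writes a correction, or discards the report (None)."""
        if not self.pending:
            return
        report = self.pending.pop(0)
        if corrected_answer is not None:
            self.reviewed.append(ReviewedExample(report.prompt, corrected_answer))

    def to_finetuning_data(self) -> List[dict]:
        """Reviewed corrections become (prompt, completion) pairs for training."""
        return [
            {"prompt": ex.prompt, "completion": ex.corrected_answer}
            for ex in self.reviewed
        ]


if __name__ == "__main__":
    queue = ReviewQueue()
    queue.report(ErrorReport(
        prompt="Is 1023 a prime number?",
        model_answer="Yes, 1023 is prime.",
        user_note="1023 is divisible by 3, so it is not prime.",
    ))
    queue.review(corrected_answer="No. 1023 = 3 × 11 × 31, so it is composite.")
    print(queue.to_finetuning_data())
```

The point of the sketch is the second half of the post: even if such a pipeline exists, it only covers errors that users happen to report, which is exactly why there can be no “complete test suite.”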
