Hugo Mills

@cstross ... but how many human answers to programming questions are wrong?

(OK, probably not 52%, but I bet you it's higher than you first thought...)

Escaping Galt's Gorean Gulch

@darkling @cstross the thing is that we're being sold the lie that 99.9% of the answers from the glorified logistic regression are correct.

And that 0.1% is still big enough to kill billions of people.

Cheradenine Zakalwe

@darkling@mstdn.social @cstross@wandering.shop It's the wrong question. The correct question is, "What proportion of answers to programming questions given by programmers who understand the language and the question are wrong?"

Ask almost ANY question of someone who doesn't actually understand the question or the subject, and the answer you receive is overwhelmingly likely to be wrong. (This is doubly true in America, where it seems to be considered almost a mortal sin to ever be heard to say "I don't know".)

Cybarbie

@darkling @cstross Indeed I wonder what the StackOverflow accepted answer fail rate is. It's quite subjective. Usually the second or third answer on SO is the correct one, the first usually being the product of some diseased brain that doesn't do real work.

mathew

@darkling @cstross An important difference is that on Stack Overflow, volunteers will usually have posted corrections.

Whereas with ChatGPT, people turn up in forums to ask someone else to do the work of determining whether the slop from the bot is correct or not.

unlucio 🌍 :mastodon:

@darkling @cstross I'd argue that if you hire a software engineer and they're wrong 52% of the time, that wasn't a good hire.

sabik

@unlucio @darkling @cstross
If you hire a software engineer and they're wrong 52% of the time, they may still be an excellent hire if they're a good learner, open to feedback, conscientious, etc.

ChatGPT is not: it has no facility to learn or to handle feedback beyond the session (if that), nothing.

sabik

@unlucio @darkling @cstross
If I spend longer than it would have taken me to do it myself helping a junior engineer through a problem, I've helped them grow, to the benefit of them and the team.

If I spend longer than it would have taken me to do it myself helping ChatGPT through a problem, I've wasted my time.

Jargoggles

@darkling @cstross
The critical difference, something I don't think I saw anyone mention in this thread, is that human beings understand how to say "I don't know."

An LLM is an even worse version of some asshole that weighs in on *everything* and asserts wrong answers just as confidently as right answers.
