Karsten Johansson

@artemesia @chickfilla @nixCraft That depends on whether it can differentiate between code people thought was good and code people thought was rubbish.

If it focuses only on code that got a lot of upvotes and participation (and draws on the points made in that participation), then it shouldn't be any worse than it already is.

Not to say that it's bad... but if it hallucinates, it would mostly be because of the extremely high number of bad answers (which thankfully tend to be downvoted, sometimes with explanations of why they are bad).

Still, the point is pretty clear: those who came up with the good content probably won't get credit, and there would be no reason to bother answering future questions there.

Artemesia

@ksaj @chickfilla @nixCraft

Even if we grant your postulates, what value is being created? It would just be regurgitating previous highly voted answers. That doesn't create any capability to produce useful answers to questions without prior history, and there's no guarantee that an LLM answer would even be syntactically correct.

You also underestimate the propensity of LLM "AI" to "hallucinate" (or, to be blunt, to "make shit up so it can respond"). The AI starts from the position that it *will* produce a reply following a certain format; if it can't find real-world priors, it invents them. Quite a few attorneys have gotten into deep shit by filing AI-generated court docs that referenced prior cases that simply did not exist.

Not to mention the sucking their own exhaust problem when later AIs train themselves up on the garbage produced by earlier AIs.
