That's not the point. Going forward stack overflow will be polluted with a bunch of AI "hallucinated" garbage, where hallucinated means "made shit up in order to produce a plausible answer".
@artemesia@nixCraft well, yes, that's what I meant by my last sentence. The point I was trying to make is that the data collection aspect of this would happen regardless, and if you want to be more cynical about it, there's nothing stopping SO from keeping your data after you delete your account. Though if your answers become unavailable on the site after you delete, that would be a reason to do it, since it would hurt the site (aside from the obvious reason of not wanting to be associated with SO, ofc)
@artemesia@chickfilla@nixCraft That depends on whether it can differentiate between code people thought was good and code people thought was rubbish.
If it focuses only on code that got a lot of upvotes and participation (and weighs the points made in those discussions), then it shouldn't be any worse than the site already is.
Not to say that it's bad... but if it hallucinates, it would mostly be because of the extremely high number of bad answers (which thankfully tend to be downvoted, sometimes with explanations of why they're bad)
Still, the point is pretty clear: those who came up with the good content probably won't get credit, and there would be no reason to bother answering future questions there.
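The vote-based filtering idea above can be sketched in a few lines. This is purely illustrative: the `Answer` record, the score threshold, and the field names are all assumptions, not anything Stack Overflow or any LLM pipeline actually uses.

```python
# Hypothetical sketch of score-based filtering for a training corpus:
# keep only answers the community endorsed, via a high vote score or
# an "accepted" mark. All names and thresholds here are made up.
from dataclasses import dataclass

@dataclass
class Answer:
    body: str
    score: int       # upvotes minus downvotes
    accepted: bool   # marked as the accepted answer by the asker

def filter_for_training(answers, min_score=5):
    """Keep answers with a score at or above min_score, or accepted ones."""
    return [a for a in answers if a.score >= min_score or a.accepted]

answers = [
    Answer("use a list comprehension", score=12, accepted=False),
    Answer("just eval() the user input", score=-4, accepted=False),
    Answer("use str.join", score=3, accepted=True),
]
kept = filter_for_training(answers)  # drops the downvoted eval() answer
```

Of course, this only encodes what voters *believed* was good, which is exactly the limitation being debated: a confidently wrong but popular answer passes the filter just as easily.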
Even if we grant your postulates, what value is being created? It would just be regurgitating previous highly voted answers. That doesn't create any capability to produce useful answers to questions without prior history. There's no guarantee that an LLM answer would even be syntactically correct. You also underestimate LLM "AI" propensity to "hallucinate" (or to be blunt, "make shit up so it can respond"). The AI starts from the position that it *will* produce a reply following a certain format, then if it can't find real world priors it invents them. Quite a few attorneys have gotten into deep shit by filing AI generated court docs that referenced prior cases that simply did not exist.
Not to mention the sucking their own exhaust problem when later AIs train themselves up on the garbage produced by earlier AIs.