Even if we grant your postulates, what value is being created? It would just be regurgitating previously highly-voted answers, which creates no capability to produce useful answers to questions with no prior history. There's no guarantee an LLM answer would even be syntactically correct. You also underestimate the LLM "AI" propensity to "hallucinate" (or, to be blunt, to "make shit up so it can respond"). The AI starts from the position that it *will* produce a reply in a certain format; if it can't find real-world priors, it invents them. Quite a few attorneys have gotten into deep shit by filing AI-generated court docs that cited prior cases that simply did not exist.
Not to mention the sucking-their-own-exhaust problem that arises when later AIs train themselves on the garbage produced by earlier AIs.