Because an LLM guesses the next word without understanding the meaning of the resulting sentence, it is unsuitable for unsupervised use in high-stakes tasks. The whole AI bubble is based on convincing investors that one or more of the following are true:
I. There are low-stakes, high-value tasks that will recoup the massive costs of AI training and operation;
II. There are high-stakes, high-value tasks that can be made cheaper by adding an AI to a human operator;
4/
III. Adding more training data to an AI will make it stop hallucinating, so that it can take over high-stakes, high-value tasks without a "human in the loop."
5/