@carnage4life I feel obligated to push back on the "fancy autocomplete" pejorative: it's a pernicious attitude, and it just doesn't match the facts. For example, one researcher asked GPT-4 how to stably stack a set of physical objects (a book, a stool, a pencil, a beachball), and it gave a reasonable answer. Answering that requires algorithmic reasoning, so the neural network must have internal representations of algorithms.
@carnage4life Another example from my own testing: prompting it with examples of abstract concepts. You can seemingly give it a thousand different examples of, say, "optimism" or "justice", each phrased with completely different words, and it reliably supplies the abstract term the example illustrates.
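@carnage4life Concretely, here's a minimal sketch of the kind of probe I mean, assuming the openai Python client and GPT-4 API access; the example sentences and prompts are invented for illustration, not the exact ones I used:

```python
# Rough sketch of the concept-labeling probe described above.
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Each example describes the same concept with entirely different wording.
examples = [
    "She kept planting seeds even after three failed harvests, sure next year would be better.",
    "Despite the grim forecast, he packed sunscreen for the picnic.",
    "The startup had two months of runway, but the founders were sketching year-five plans.",
]

for text in examples:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Reply with the single abstract concept this example illustrates."},
            {"role": "user", "content": text},
        ],
    )
    print(resp.choices[0].message.content)  # expect something like "optimism" each time
```

If it were only doing surface-level pattern matching on words, swapping out every content word should break it, but in my experience it doesn't.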