@nixCraft It may happen sooner, but by around 2040 we'll probably issue commands through a large language model (LLM) agent using voice (or subvocal speech, some kind of Neuralink-style interface, etc.). Essentially it will still be a command-line interface, just with a much faster 'typing' method, and our commands will sit at a higher level of abstraction. Experts will still teach the agent to perform specific tasks, much as they write commands or snippets today.