@mkj @eniko
Yes! I don't see how an LLM can reason.
I tried feeding a logic puzzle from a grocery-store puzzle book into Copilot. I thought that might be a good minimum threshold for showing "reasoning," assuming the puzzle doesn't already exist online.
It did very poorly; the response didn't actually make sense. A friend put the same puzzle into ChatGPT-4, which did better, solving 2 of the 3 categories but getting the third wrong.
What do you think of logic puzzles as a test?
@dingodog19 Logic puzzles as a test for what?
Apple recently concluded (in a report that got some media coverage, at least in the tech/IT press) that LLMs *cannot* reason logically. Plenty of additional anecdotal examples illustrate the same point, and there are arguments for the same lack of logical reasoning capability grounded in how LLMs work. There seems little need to repeat the experiment unless you have reason to believe you'd get a significantly different result.
@eniko