@catsalad if my computer can locally generate images in seconds, that has to be one efficient town
@flying_saucers @catsalad that's like suggesting Llama 3.1 8B and 405B produce comparable output.
We can run small models at home relatively quickly, sure, but that's not what the likes of OpenAI or Meta have running behind their flagship APIs.
If you're curious you can compare the recommended hardware for the different variants of the Llama models here: https://llamaimodel.com/requirements/
(I use this as an example because the comparison is so clear.)
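To put rough numbers on that gap: a model's weights alone take about (parameter count × bytes per parameter) of memory, so at 16-bit precision the 8B model fits on a single consumer GPU while 405B needs a multi-GPU server. A minimal back-of-the-envelope sketch (my own illustration, not from that page; the ~20% runtime overhead factor is a loose assumption):

```python
def vram_gb(params_billion, bytes_per_param=2, overhead=1.2):
    """Weights-only memory estimate: params * bytes/param,
    padded ~20% for KV cache and runtime overhead (a loose
    assumption, varies by context length and implementation)."""
    return params_billion * bytes_per_param * overhead

# 2 bytes/param corresponds to fp16/bf16 weights
for name, p in [("Llama 3.1 8B", 8), ("Llama 3.1 70B", 70), ("Llama 3.1 405B", 405)]:
    print(f"{name}: ~{vram_gb(p):.0f} GB")
```

That prints roughly 19 GB, 168 GB, and 972 GB. Quantizing to 4-bit cuts bytes_per_param to 0.5, which is how people squeeze 8B onto a laptop, but 405B stays well out of home-hardware reach either way.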