@blaue_Fledermaus @catsalad Exactly. Once a GenAI model exists, generating a 1024x1024 image doesn't take more energy than playing a high-end computer game at the highest detail settings on a big PC with a decent GPU for about a minute.
@blaue_Fledermaus @catsalad However, if you use a chatbot to render the image for you, it becomes significantly more computationally expensive. Still nowhere near the energy usage of a residential neighbourhood in the Western world, but enough to play such a game for ten or twenty minutes. The same applies when you're using "prompt magic", which doesn't run your prompt directly through the diffusion model but hands it over to an LLM that generates an "improved" prompt, which is then used to generate the image. One of the reasons why I don't like Dall-E is that it always does this: there is no way to run a prompt directly through Dall-E without it getting filtered through GPT first. Not only does it add this unnecessary computation, but it also gives you less control over the process.
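To make the distinction concrete, here's a minimal sketch using the open-source diffusers library (assuming Stable Diffusion 1.5 on a CUDA GPU; the model choice and the rewrite_prompt() helper are illustrative, not how Dall-E actually works internally): the "direct" path runs the prompt straight through the diffusion model, while the "prompt magic" path pays for an extra LLM inference before the diffusion step even starts.

```python
# Minimal sketch: direct diffusion vs. LLM-rewritten ("prompt magic") prompt.
# Assumes the Hugging Face `diffusers` library, Stable Diffusion 1.5,
# and a CUDA-capable GPU; model choice and helper names are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolour painting of a lighthouse at dusk"

# Direct path: a single pass through the diffusion model.
image = pipe(prompt).images[0]
image.save("direct.png")

# "Prompt magic" path: an extra LLM call rewrites the prompt first,
# adding a second model's inference cost on top of the diffusion run.
def rewrite_prompt(p: str) -> str:
    # Placeholder for whatever chat model would "improve" the prompt here;
    # this sketch makes no real LLM call and just returns the input.
    return p

image = pipe(rewrite_prompt(prompt)).images[0]
image.save("prompt_magic.png")
```

The diffusion call itself costs the same in both paths; the difference is the additional LLM inference, plus the loss of control over what prompt actually reaches the model.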