blaue_Fledermaus

@catsalad
Aren't the absurd costs only for training the models? Those already trained run cheaply even on home hardware.

2 comments
Lord Caramac the Clueless, KSC

@blaue_Fledermaus @catsalad Exactly. Once a GenAI model exists, generating a 1024x1024 image needs no more power than playing a high-end computer game at the highest detail settings on a big PC with a decent GPU for about a minute.
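A rough back-of-envelope on the "one minute of gaming" comparison above. The 350 W figure is an assumed draw for a gaming PC with a decent GPU under load, not a number from the post:

```python
# Energy per generated image, assuming a ~350 W system draw (assumption,
# not from the thread) sustained for "about a minute".
GPU_WATTS = 350
SECONDS_PER_IMAGE = 60

wh_per_image = GPU_WATTS * SECONDS_PER_IMAGE / 3600  # watt-hours
print(f"{wh_per_image:.1f} Wh per image")  # prints: 5.8 Wh per image
```

A few watt-hours per image, which is why inference on an already-trained model is cheap compared with training.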

Lord Caramac the Clueless, KSC

@blaue_Fledermaus @catsalad However, if you use a chatbot to render the image for you, it becomes significantly more computationally expensive. Still nowhere near the energy usage of a residential neighbourhood in the Western world, but enough to play such a game for ten or twenty minutes. The same applies when you're using "prompt magic", which doesn't run your prompt directly through the diffusion model but hands it over to an LLM that generates an "improved" prompt, which is then used to generate the image. One of the reasons I don't like Dall-E is that it always does this: there is no way to run a prompt directly through Dall-E without it being filtered through GPT first. Not only does this add unnecessary computation, it also gives you less control over the process.
