62 comments
Chilly :donor: 🛡️ :fedora:

@catsalad
So what if I took down the entire Icelandic power grid generating a wallpaper? 4k Moopsy wallpapers don't generate themselves, ya know

traecer 🍂🍁

@chillybot @catsalad well, it was a #Moopsy wallpaper... exceptions must be made 🤣

Chilly :donor: 🛡️ :fedora:

@traecer
I'm on the phone with Prime Minister Benediktsson and he uh... disagrees.
@catsalad

🦊 Paul Schoonhoven 🍉 🍋

@catsalad @ReneDamkot that's the reason I sped up my efforts to move away from Android on my phone. (I already moved from Microsoft to Linux years ago.)

Part of it is the AI story.
The rest is the data grab, and also the unnecessarily large use of resources (power & memory) for the tasks at hand. Linux runs pretty well on 'old' Windows machines, and the same seems true of alternative OSs on Android phones.

#Degoogle #AI #datagrab

LInearness

@vosje62 @catsalad @ReneDamkot
What do you plan on moving to / suggest people move to?

🦊 Paul Schoonhoven 🍉 🍋

@LInearness I bought a Fairphone, on which you can easily install /e/OS (or buy it preinstalled).
There are several options.
As I can't install an alternative OS on my spare phone, I might buy a second-hand one that can be fitted with an alternative OS. (Probably /e/OS, as it will be the same as the Fairphone.)
@catsalad @ReneDamkot

Tofu :blobcatace:​

@catsalad I think this is blown out of proportion, especially considering that genAI models for image generation aren't that big and can run on consumer-grade hardware.

Piko Starsider :verified_paw:

@tofu @catsalad Yes, but the energy it takes to train those models is still huge. And they will keep training models as long as people keep using AI "art" services.

Tofu :blobcatace:​

@starsider Yes, but my point is that this prompt is kind of blowing unrelated things out of proportion. An image at this sort of size wouldn't even take a second on a 4090, meaning the amount of energy consumed is going to be pretty low.

Piko Starsider :verified_paw:

@tofu Yes, but we should keep all costs in mind. If there's high enough demand for local models, there will also be demand for online services and for training new models. Making AI art should be frowned upon, and this is just one of the reasons.

Michael

@tofu I would agree with @starsider . You can run models like Stable Diffusion on your own hardware. Get one of those little power plugs that count electricity usage, connect everything, and you'll see that generating your image won't take anywhere near the power consumption of a small city. That misinformation is not helpful. AI has its problems, but you shouldn't go around telling people that generating one image will boil the oceans.
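For anyone who wants to try the experiment Michael describes, here is a minimal sketch of a local generation run you could put behind a metering plug. It assumes the Hugging Face diffusers library, a CUDA GPU, and the runwayml/stable-diffusion-v1-5 weights; the wattage constant is an assumption you would replace with your plug's actual reading.

```python
# Rough energy estimate for one locally generated image.
# Assumes the Hugging Face `diffusers` library and a CUDA GPU;
# ASSUMED_GPU_WATTS is a placeholder for your metering plug's reading.
import time

import torch
from diffusers import StableDiffusionPipeline

ASSUMED_GPU_WATTS = 350  # assumption: typical draw under load, not measured

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

start = time.perf_counter()
image = pipe("a 4k Moopsy wallpaper", num_inference_steps=30).images[0]
seconds = time.perf_counter() - start

watt_hours = ASSUMED_GPU_WATTS * seconds / 3600
print(f"{seconds:.1f} s -> ~{watt_hours:.2f} Wh")  # typically a fraction of 1 Wh
image.save("moopsy.png")
```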

Michael

@tofu @starsider And even training is kind of OK. According to this article, GPT-3 took 1287 MWh to train, or the "annual driving emissions of 112 cars". One is a really helpful tool that can be used by millions; the other is Gary driving to Waffle House. And to give a little bit of context: those 500 tons are the CO2 emissions a single cruise ship puts out every 2 days. So not great, and it would be better to train on clean energy, but this critique here is not correct.

centralnews.com.au/2024/05/10/
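The quoted figures do hang together arithmetically. A rough sanity check, where the grid carbon intensity and the per-car and per-ship emissions are assumed round numbers (roughly US averages), not values taken from the article:

```python
# Back-of-envelope check of the GPT-3 figures quoted above.
# All three constants below are assumptions, not article data.
TRAINING_MWH = 1287
GRID_KG_CO2_PER_KWH = 0.39    # assumed grid carbon intensity
CAR_TONNES_PER_YEAR = 4.6     # assumed annual emissions of one car
CRUISE_TONNES_PER_DAY = 250   # "500 tons ... every 2 days"

tonnes_co2 = TRAINING_MWH * 1000 * GRID_KG_CO2_PER_KWH / 1000
print(f"~{tonnes_co2:.0f} t CO2")                                  # ~502 t
print(f"~{tonnes_co2 / CAR_TONNES_PER_YEAR:.0f} car-years")        # ~109, near "112 cars"
print(f"~{tonnes_co2 / CRUISE_TONNES_PER_DAY:.1f} cruise-ship days")  # ~2 days
```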

jordan

@mschfr @tofu @starsider None of this would even matter if we could just get our collective shit together as a society and make a big push for green(er) energy infrastructure. Who cares how much energy training and inference takes if it's coming from solar, wind, or nuclear power?

We fight over the wrong things, imo.

Tofu :blobcatace:​

@wagesj45 Yeah, but it's not easy to pull off, especially with people and governments steering away from actual green options like nuclear power.

Michael

@tofu @wagesj45 I wouldn't agree with your nuclear power take, but even if so: the majority of the AI data centers are going up somewhere in the USA, and the USA is far from steering away from nuclear power. And even we here in Germany are at the lowest CO2/kWh since sometime in the 19th century, even after switching off all nuclear.

jordan

@mschfr @tofu The USA is also far from steering away from large, power-hungry data centers. What I'm saying is: if we're faced with two struggles, let's pick the right one.

Michael

@wagesj45 @tofu Yeah - AI has a lot of problems, so let's not try to push some false narrative here. There is a lot to critique out there

Piko Starsider :verified_paw:

@mschfr @tofu GPT-3 is ancient at this point. GPT-4 took 24000 MWh, about 20 times more. I don't know about the newer o1 models.

Michael

@starsider @tofu Yeah, but even 20x the energy consumption would give it the CO2 impact of 1.5 months of one single cruise ship. And only a few big players worldwide can even think about training such a model.
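Running the same back-of-envelope arithmetic (same assumed constants as in the GPT-3 check above) on the quoted 24000 MWh lands in the same ballpark as Michael's 1.5-month estimate:

```python
# Same arithmetic scaled to the quoted GPT-4 training figure.
# Constants are the same assumptions as in the GPT-3 check above.
GPT4_MWH = 24000
GRID_KG_CO2_PER_KWH = 0.39    # assumed grid carbon intensity
CRUISE_TONNES_PER_DAY = 250   # 500 t per 2 days

tonnes_co2 = GPT4_MWH * 1000 * GRID_KG_CO2_PER_KWH / 1000  # ~9360 t CO2
months = tonnes_co2 / CRUISE_TONNES_PER_DAY / 30           # ~1.2 months
print(f"~{tonnes_co2:.0f} t CO2 ~= {months:.1f} months of one cruise ship")
```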

Dgar

@catsalad it probably won’t be the full ten minutes… 🤭

blaue_Fledermaus

@catsalad
Aren't the absurd costs only for training the models? Those already trained run cheaply even on home hardware.

Lord Caramac the Clueless, KSC

@blaue_Fledermaus @catsalad Exactly. Once a GenAI model exists, it doesn't need more power to generate a 1024x1024 image than playing a high-end computer game at highest detail settings on a big PC with a decent GPU for about a minute.
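That comparison is easy to make concrete. A toy calculation, where the wattage and both timings are illustrative assumptions rather than measurements:

```python
# Illustrative comparison: one image vs. one minute of high-end gaming.
# All three constants are assumptions, not measurements.
GPU_WATTS = 400        # assumed draw of a big GPU under load
IMAGE_SECONDS = 5      # assumed time to generate one 1024x1024 image
GAMING_SECONDS = 60    # one minute at max settings

image_wh = GPU_WATTS * IMAGE_SECONDS / 3600    # ~0.56 Wh per image
gaming_wh = GPU_WATTS * GAMING_SECONDS / 3600  # ~6.7 Wh per gaming minute
print(f"one image: ~{image_wh:.2f} Wh, one minute of gaming: ~{gaming_wh:.2f} Wh")
```

With these assumed numbers the image comes out at a fraction of a gaming minute, consistent with the claim above.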

Lord Caramac the Clueless, KSC

@blaue_Fledermaus @catsalad However, if you use a chatbot to render the image for you, it becomes significantly more computationally expensive. Still nowhere near the energy usage of a residential neighbourhood in the Western world, but enough to play such a game for ten or twenty minutes.

The same applies when you're using "prompt magic", which doesn't run your prompt directly through the diffusion model but hands it over to an LLM, which then generates an "improved" prompt that gets used to generate the image. One of the reasons why I don't like Dall-E is that it always does this; there is no way to run a prompt directly through Dall-E without it getting filtered through GPT first. Not only does it add this unnecessary computation, but it also gives you less control over the process.

Nearly Normal

@catsalad probably only a city full of brown people anyway so who gives a shit?

Kirk

@catsalad That's the tradeoff most small voters opted for. So I say yes.

Alexf24

@catsalad Why the exaggeration? It's only one neighborhood!

Steve

@catsalad if my computer can locally generate images in seconds, that has to be one efficient town 🙃

draeath

@flying_saucers @catsalad that's suggesting Llama 3.1 8B and 405B have comparable outputs, or the like.

We can run small models at home relatively quickly, sure, but that's not what the likes of OpenAI or Meta have running behind their flagship APIs.

If you're curious you can compare the recommended hardware for the different variants of the Llama models here: llamaimodel.com/requirements/

(I use this as an example because the comparison is so clear.)

FurballsNHairballs

@catsalad
Flip that switch !!!
Turn off Granny's life support !!
😐

Daniel Marks

@catsalad As long as that Children's Hospital has diesel generator backup, what's the downside?

Deuchnord

@catsalad it should say "cutting off electricity to 5000 random homes, including yours"

Jonathan Mesiano-Crookston

@catsalad it's very unlikely to be MY small city so it's a YES from me

Luís Silva

@catsalad eheheh, generative AI and LLMs, as they are implemented today, are probably the most inefficient brains ever made! The only positive thing about this is that they are human-built brains. :)

CohenTheBlue

@lmss @catsalad Brains only in the same sense that a puddle of paint is an artist.

Big Ben

@catsalad Something about how this image looks like a screenshot from a webpage made in 2001 feels right

levampyre

@catsalad Paris Marx recently made a 4-episode Tech Won't Save Us miniseries about the costs of AI and big data. Here's the 1st episode: techwontsave.us/episode/241_da

a pup of coffee :v_agender: :bowie: ☕

@catsalad If it was like The Box, and reminded you that someone else pushing the button could be affecting your city

Avi Rappoport (avirr)

@catsalad
“water or energy usage generally sit at opposite ends of a see-saw: if usage of one is decreased, the other must be increased to compensate.”
techpolicy.press/why-we-dont-k

Kadarus

@catsalad It's kind of difficult to measure; generation itself usually doesn't take that much power, training does.

Relax Milano 🇮🇹☀️

@catsalad 😂😂😂
I want to know the author of this #warning
I am working on #GenAI and this will be my next avatar 🤖

#ai #aiwarning
