David Ho

When they warned about this, I didn’t realize it was about CO₂ emissions from AI.

14 comments
Lyle Solla-Yates

@davidho Of course. Skynet. The clues were all there.

aproposnix

@davidho yup, and don't forget the water wars that will be fueled in part by AI.

City Atlas

@davidho

With NotebookLM, Google lets you make an (amazingly real) fake podcast, a dialogue between a male and a female voice, out of any text content you upload. So, I just want to reassure you that these emissions are well spent making fake podcasts, to add to all the real podcasts we don't have time to listen to already.

Hunterrules0_o

@davidho just use an open source model on your GPU

Louis Ingenthron

@davidho The Matrix told us we would blacken the skies to stop the AIs, not to power them up!

jackcole

@davidho No, no, it's about making people so incompetent they can no longer provide essentials for survival and providing lethal answers on Google search, like eating battery acid and shards of glass.

JW Prince of CPH

@davidho Yeah, experts gonna have to be more specific, there's like a half dozen ways this could happen...

Arqtec

@davidho
When AI becomes self-aware?
It will be able to advance its design beyond human control?
Becoming AI means being aware of a human threat, and becoming self-defending... If AI controls stuff that we have relinquished? Then maybe we are on the road to extinction by our machines?
Hmm?
My toaster is out to get me!!!
I say I WANT toast
It replies YOU ARE TOAST

Eudaimon ꙮ
@davidho
Well, if you start watching the apparently very well researched videos from Robert Miles (Robert Miles AI Safety - https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg), you'd be forgiven for thinking that the heading was indeed talking about AIs themselves. I do believe there's a serious problem with AI alignment, and that the risk is small yet with possible catastrophic consequences. Therefore I think we should apply the same principles behind nuclear war (principles that are being torn down as we speak, though 🫣)