Danilo, from the Gerentate

Perhaps I can now, finally, know peace.

What they're calling #AI is inevitable because it's just computers, computing, and computers already exist and are everywhere.

We cannot stop it, unless you have a plan to stop all computing, forever. Meanwhile, AI products are becoming more useful to people all the time.

So, when it comes to critique of AI, we have to up our game if we want a future worth living in.

redeem-tomorrow.com/the-averag

11 comments
Glyph

@danilo re: your energy critique, I was eagerly reading to see if you had gotten better stats than I had previously found, but I didn't see any. I don't know if it's worth editing to update, but I was saying exactly this last month: mastodon.social/@glyph/1112515

Danilo, from the Gerentate

@glyph that's super interesting context! four fucking orders of magnitude…

yeah, I'll drop a link in, do you have a citation on the GPT-3 training energy costs?

Glyph

@danilo This is a better source than the one I had at the time, so maybe only 3 orders of magnitude :). And, crucially, this one talks about *operating*, which I didn't include previously.

Danilo, from the Gerentate

@glyph yeah, that's a pretty wild gulf right there

and again, the incentive is to drive down consumption and improve efficiency, so the long term trend looks like computing's performance per watt trend, not bitcoin’s tire fire

Danilo, from the Gerentate

everyone who wants to do this self-righteous finger wagging bullshit about AI’s energy impact needs to explain what we’re going to do about the hundreds of terawatt-hours of mundane data center energy expenditure that’s ALREADY HAPPENING, orders of magnitude greater than AI

This is a software engineer writing this.

If the future of AI must be stopped on energy grounds, shouldn’t all technologists put down tools and disrupt their nearest data center?

It’s absolutely incoherent.

Dan Hulton @danhulton

@danilo It's not about "good computer or not good computer". We currently can't manage to do enough of those things to make a significant enough dent in our energy usage to avoid hitting a warming target that was long ago considered "worst-case". AI by itself uses more electricity than many small countries, in the limited use cases it's currently used in. It is fuel being thrown on an already-raging fire. Ignoring that is some wild levels of ignorance.
Danilo, from the Gerentate

If a social worker wants to make this argument, or a person who builds houses, I think they have a little more standing

But every software professional who keeps going to work pays their rent or mortgage on the back of ENORMOUS energy usage, an entire constellation of devices burning energy, on a rack, in the home, in pockets

Deciding we can’t build more data centers because that’s bad for the earth, but accepting existing data centers doing a different category of computing that pays your bills…

lj·rk

@danilo I honestly still don't buy the "useful" argument (cf. baldurbjarnason.com/2023/ai-re), although in most cases I've seen it's more elaborate than that, and goes hand-in-hand with the "black box" argument you gave:

Basically, we can partition the set of tasks into those that AI can solve and those it cannot – and this is not something that changes much through technological innovation. AI gets better at solving tasks it could previously solve only mediocrely, but the actual set doesn't change, because of its nature.

The tasks that AI can solve, however, are either things that don't need solving (recreational arts), things that wouldn't be a problem for AI to do if we had a proper basic income (commissioned arts/illustrations), or problems that can be solved a lot more efficiently by actually addressing the root cause (trains instead of self-driving cars; the classic case of the employee using AI to expand bullet points into text while the manager uses it to do the opposite). This means AI is not much use to society at all, except as a plaything, and doesn't warrant wasting resources.

The second set, tasks AI cannot solve, also contains tasks that AI can pretend to solve. Arguably worse.

lj·rk

@danilo There's a lot of nice things AI could do. But not in this capitalist system; the incentives are simply wrong. I don't think we can solve "The AI Question" without questioning the system.
