Perhaps I can now, finally, know peace.
What they're calling #AI is inevitable because it's just computers, computing, and computers already exist and are everywhere.
We cannot stop it, unless you have a plan to stop all computing, forever. Meanwhile, AI products are becoming more useful to people all the time.
So, when it comes to critique of AI, we have to up our game if we want a future worth living in.
https://redeem-tomorrow.com/the-average-ai-criticism-has-gotten-lazy-and-thats-dangerous
@danilo re: your energy critique, I was eagerly reading to see if you had gotten better stats than I had previously found, but I didn't see any. I don't know if it's worth editing to update, but I was saying exactly this last month: https://mastodon.social/@glyph/111251549055579334
everyone who wants to do this self-righteous finger wagging bullshit about AI’s energy impact needs to explain what we’re going to do about the hundreds of terawatt-hours of mundane data center energy expenditure that’s ALREADY HAPPENING, orders of magnitude greater than AI
This is a software engineer writing this.
If the future of AI must be stopped on energy grounds, shouldn’t all technologists put down tools and disrupt their nearest data center?
It’s absolutely incoherent.
@danilo I honestly don't buy the "useful" argument (cf. https://www.baldurbjarnason.com/2023/ai-research-again/), still, although in most cases I've seen it's more elaborate than that, and goes hand-in-hand with the "black box" argument you gave:
Basically, we can partition the set of tasks into those that AI can solve and those it cannot – and this is not something that changes much through technological innovation. AI gets better at tasks it could previously solve only poorly, but the set itself doesn't change, because of its nature.
The tasks that AI can solve, however, are either things that don't need solving (recreational arts), things that wouldn't be a problem for AI to do if we had a proper basic income (commissioned arts/illustrations), or problems that can be solved far more efficiently by addressing the root cause (trains instead of self-driving cars, or the classic case where an employee uses AI to expand bullet points into text and the manager uses AI to do the opposite). This means AI is not much use to society at all, except as a plaything, and doesn't warrant wasting resources.
The second set, tasks AI cannot solve, also contains tasks that AI can pretend to solve – which is arguably worse.