Fabio Manganiello

There are a few generalizations in this article, but it mostly nails my thoughts on the current state of the IT industry.

Why can we watch 4K videos and play heavy games in hi-res on our new laptops, but Google Inbox takes 10-13 seconds to open an email that weighs a couple of MBs?

Why does Windows 10 take 30 minutes to update, when within that time frame I could flash a whole fresh Windows 10 ISO to an SSD drive 5 times?

Why do we have games that can draw hundreds of thousands of polygons on a screen in 16 ms, but many modern editors and IDEs need that same time frame just to draw a single character on the screen, while consuming a comparable amount of RAM and CPU?

Why is writing code in IntelliJ today a much slower experience compared to writing code in vim/emacs on a 386 in the early 1990s? And don't tell me that autocompletion features justify the difference between an editor that takes 3 MB of RAM and one that takes 5 GB of RAM to edit the same project.

Why did Windows 95 take 30 MB of storage, but a vanilla installation of Android takes 6 GB?

Why does a keyboard app eat 150-200 MB of storage and often account for 10-20% of the battery usage on many phones?

Why does a simple Electron-based todo/calendar app take 500 MB of storage?

Why do we want to run everything in Docker containers that take minutes or hours to build, when most of those applications would also be supported on the underlying bare metal?

Why did we get to the point where the best way of shipping and running an app across multiple systems is to pack it into a container, a fat Electron bundle, or a Flatpak/Snap package - in other words, every app becomes its own mini-OS with its own filesystem and dependencies, each with its own installation of libc, GNU coreutils/busybox, Java, Python, Rust, Node.js, Spring, Django, Express and all? Why did we decide to solve the problem of optimizing shared resources in a system by just giving up on solving it? Just because we assume that it's always cheaper to add more storage and RAM?

Why does even a basic hello world Vue/React app install 200-300 MB of node_modules? What makes a hello world webapp 10x more complex than a whole Windows 95 installation?

We keep repeating "developer time is more expensive than computer time, so it's ok for an application to be dead inefficient if that saves a couple of days of engineering work", but I'd argue that even that doesn't apply anymore. I've spent the last couple of years working in companies where it takes hours (and sometimes days) to deliver a single change of 1-2 lines. All that time goes into huge pipelines that nobody understands in their entirety, compilation tasks that pull in GBs of dependencies just because a developer at some point wanted to try a new framework or flavour of programming in a module of 100 LoC, wasted electricity that goes into building and destroying dozens of containers just to run a test, and so on. While the pipelines do their obscure work, developers take long, expensive breaks browsing social media, playing games or watching videos, because often they can't do any other work in the meantime - so much for "optimizing for engineering costs".

How come nobody gets enraged at such an inefficient use of both computing and human resources?

Would you buy a car that can run at 1% (or less) of its potential performance, built with a process that used <10% of the available engineering resources? Then why do we routinely buy and use devices that take 10 seconds to open a simple todo app in 2023? No amount of splash screen animations can sugarcoat that bitter pill.

The thing is, we also know what's causing this problem.

As industries consolidate and monopolies/oligopolies form, businesses have fewer incentives to invest engineering resources in improving their products - or to take risks on developing new products or features based on customers' demand.

That creates a vicious cycle. Customers' expectations drop because they get used to sub-optimal solutions - that's all they know and all they are used to. That drives businesses to take even fewer risks and enshittify their products even more, as they know they can get away with even more sub-optimal solutions without losing market share - folks will just buy a new phone or laptop when they realize that their hardware can no longer store more than 20 Electron apps, or that their browser can't keep more than 10 tabs open without swapping memory pages. That drives the bar further down. Businesses are incentivised to push out MVPs at a frantic pace and call them products - marketing and design tricks will cover the engineering gaps anyway. Moreover, companies now have one more incentive to enshittify their product: if the same software can no longer run on the old hardware, make money out of the new hardware that people will be forced to buy (because, of course, you've made it hard to repair or replace components on their existing hardware). And the cycle repeats. Until you reach a point where progress isn't about getting new stuff, nor better versions of the existing stuff, but just about buying better hardware in order to do the same stuff we used to do 10-15 years ago.

Note, however, that it doesn't always have to be like this. The author brings a good counter-example: gaming.

Gamers are definitely *not* ok if a new version of a game has a few more milliseconds of latency than the previous one. They buy expensive hardware, and they expect the software they run on it to make the best use of the available resources. As a result, gaming companies are pushed to release titles that each draw more polygons on the screen than the previous one, without requiring a 2-10x bump in resource requirements.

If the gaming industry hadn't had such a demanding user base, I wouldn't be surprised if games in 2023 looked pretty much like the SNES 2D games back in the early 1990s, while using up 100-1000x more resources.

I guess the best solution to the decay problem that affects our industry would be for users of non-gaming software to start having expectations similar to their gaming fellows', and to just walk away from the products that can't deliver on them.

tonsky.me/blog/disenchantment/

14 comments
Niclas Hedhman

@blacklight

Great Points!

My first job as a programmer was on a CP/M computer that took ~2 seconds to boot from a 5¼-inch floppy disk, on a Z80 running at 4 MHz (~1 MIPS).

I have asked many people over the years: a modern computer executes billions of instructions per second... many billions... so what can it possibly be doing, when the Apple II got shit done at ~500,000 ops/sec? Counting cores/threads, that is at least five orders of magnitude more power.

Niclas Hedhman

@blacklight

A friend pointed me to Parkinson's Law regarding public works projects: en.wikipedia.org/wiki/Parkinso

But my guess is that something very similar happens in large private corporations too.

Niclas Hedhman

@blacklight

I even gave a presentation about this problem, "The Elephant in the Room", at a software development conference in Beijing.

At that time (~2014), Twitter had ~1,000 IT staff and Google ~20,000, and I asked: WHY? What are all these people doing?

"Now, everyone say after me; I AM TOO STUPID TO WRITE SOFTWARE"

Fabio Manganiello

@niclas As someone who has worked in similar companies, I can confirm that ~30-40% of the engineering time can still be accurately captured by this XKCD xkcd.com/303/ - but now rather than "compiling" you have "the pipeline is running/breaking/waiting for approvals/waiting for another team to deploy a new version of a service we depend on", or "I'm trying to configure the environment so my laptop can run a whole Kubernetes production cluster and keep me warm when it's cold outside".

That's one of the reasons why I've argued that an IT company should never have >1000 employees: past that point, process overhead, loss of product focus, dilution of talent caused by an imbalance between business-school folks and engineering folks, and org-chart decay through excessive layering will just make it dysfunctional by design.

Anꞇóin Ó B.

@blacklight

A comparison I'd add is predictive text on clamshell mobile phones ("feature phones").

In the 1990s, the character appeared what felt like immediately as you typed, and T9 worked rather well.

The hardware on a Nokia 2720 flip is orders of magnitude greater in spec, but has an input delay that's bordering on unusable (feels like it could be half a second sometimes)...

...because of "a web based operating system" named KaiOS.

crab

@blacklight I feel like at least two of these have reasonable answers.

Why containers? Because containers are the only way you're getting a workable security model on a modern Linux system, without running an entire VM. Docker is a piece of garbage and has probably kept container technology back for a decade, but containers have their reasons.

And the same for flatpaks, really. It is, right now, the only way you can distribute a piece of software for Linux and expect it to work anywhere. If you don't, you have to defer to 50 different library versions packaged by 50 different distros who may apply 50 different patches to your software and potentially take months to get a new version through. It's a bad solution but there is no alternative.

The dependency management issue is also true for containers: there is no better way to solve it. Luckily containers are getting better at build times, sharing dependencies and file sizes nowadays, but it's slow going.

Why did we get to this point? Because the security model of Linux is still that of a "worse is better" operating system from 1971 and because distros seemingly haven't improved their dependency management in two decades.

crab

@blacklight With the sudden popularity of things like Nix and the development of built-in container support in systemd I do believe there's a better future for Linux dependency management here, I just hope we'll actually get there and won't spend another decade stuck in Docker hell.

That just leaves the problem of GUI toolkits being so bad that everybody would rather use Electron.

Fabio Manganiello

@operand You've raised some valid points, but I feel like some of them require a bit more analysis. For example:

> you have to defer to 50 different library versions packaged by 50 different distros who may apply 50 different patches to your software and potentially take months to get a new version through.

I can talk from first-hand experience here. I'm the main developer of Platypush, which has hundreds of optional integrations and addons. Every time I push out a new release, I've got a CI/CD pipeline that does the following:

1. Builds .deb packages that target Debian stable/oldstable and the current Ubuntu LTS, an RPM that targets the current Fedora release, and a PKGBUILD that targets Arch.

2. Spawns a container for each of these distros, installs the packages, starts the app and runs all the tests (of course that means having high test coverage).

3. If all is green, then packages are automatically uploaded to the relevant repos.

Is it ideal? Definitely not - I'm one of those folks who have been waiting for an end to the fragmentation problem on Linux for two decades. But today there are ways to at least mitigate the problem - in my case, 2-3 weeks invested in building a CI/CD pipeline that creates packages targeting ~90% of the most common distro configurations out there, and it's fine if the remaining ~10% either build from sources, run a container or install a Flatpak. It's unlikely that I'll have to touch that pipeline again in the future. It's not ideal, but it's not impossible either to have a piece of software packaged for the most common distro configurations - not to the point where we have to entirely give up on solving the dependency-sharing problem on bare metal, anyway.
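To make the shape of that pipeline concrete, here's a minimal sketch of the build-and-test loop as a Node/TypeScript script. The image names, build commands and paths here are illustrative assumptions, not the actual Platypush CI configuration:

```typescript
// Minimal sketch of a multi-distro package pipeline. Image names, build
// commands and paths are assumptions for illustration, not Platypush's CI.
import { execSync } from "node:child_process";

interface Target {
  image: string;   // base image for the target distro
  build: string;   // distro-specific package build command
  install: string; // how to install the resulting package
}

const targets: Target[] = [
  { image: "debian:stable",    build: "dpkg-buildpackage -us -uc", install: "apt install -y ./out/*.deb" },
  { image: "ubuntu:latest",    build: "dpkg-buildpackage -us -uc", install: "apt install -y ./out/*.deb" },
  { image: "fedora:latest",    build: "rpmbuild -ba platypush.spec", install: "dnf install -y ./out/*.rpm" },
  { image: "archlinux:latest", build: "makepkg -sf",                install: "pacman -U --noconfirm ./out/*.pkg.tar.zst" },
];

// Run a command inside a fresh container for the given distro image,
// with the source tree mounted at /src.
const run = (image: string, cmd: string) =>
  execSync(`docker run --rm -v "$PWD:/src" -w /src ${image} sh -c "${cmd}"`, {
    stdio: "inherit",
  });

for (const t of targets) {
  // 1. Build the package in a clean container for the target distro.
  run(t.image, t.build);
  // 2. Install it and run the test suite in a second clean container,
  //    so any missing runtime dependency fails loudly.
  run(t.image, `${t.install} && pytest`);
  // 3. Upload to the distro repo only if the previous steps succeeded
  //    (omitted: credentials and upload tooling differ per repository).
}
```

The real pipeline runs in CI rather than as a local script, but the structure is the same: one clean container per target distro, and nothing gets uploaded unless every step before it succeeded.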

> Luckily containers are getting better at build times, sharing dependencies and file sizes nowadays

I see some progress in those areas indeed, but it literally took years to go from "let's start wrapping everything into containers" to "let's figure out a clever and standardized way to avoid replicating the same Alpine/Ubuntu base dependencies across dozens of containers on the same box". Layers are already a step in that direction, but I'd love it if the burden of layering and managing shared dependencies wasn't put on the app developer/packager.

> the development of built-in container support in systemd

I love systemd-nspawn. I already migrated many of my production Docker containers to systemd a while ago and I haven't looked back (many of them were started through a systemd service anyway, so I've removed a pointless intermediary). And I also see a lot of potential in Podman. But I would also love to see a solution that isn't bound to Linux+systemd. The unfortunate reality is that most of the devs out there use macOS, and Docker has become so popular because it allows them to easily build, run and test on the fly an Ubuntu or Fedora base image that runs their software in the same environment as production, directly on top of their hardware or even in an IntelliJ tab, without having to configure and run CentOS 6 VMs like many of us used to do until a few years ago. A new container system can only succeed if it is at least as Mac/Windows-friendly as Docker currently is, or many developers will just think along the lines of "now I have to install a Linux VM again just to run my containers".

> That just leaves the problem of GUI toolkits being so bad that everybody would rather use Electron.

IMHO a Web-based (or Web-like) solution is still the best way out. There are plenty of JavaScript developers out there, but finding a skilled Qt/Gtk developer is as rare as finding a white fly - and there's a reason: those frameworks are hard to learn and even harder to master.

The main problem is that JavaScript hasn't grown organically: it has grown with a bunch of frameworks thrown at every problem and the ECMAScript standard trailing behind, to the point that it basically doesn't have a standard library for anything (even parsing a query string or a cookie, or making an HTTP request), and everything is solved with fat node_modules folders made of frameworks that reinvent the wheel hundreds of times.

Electron would have no reason to exist in a world where building a hello world Web app was doable with a few lines of vanilla JavaScript. That was possible 20 years ago; it was still possible 10 years ago (you just had to add a <script> tag for jQuery), but it's no longer possible now.

PaulDavisTheFirst

@operand @blacklight

> And the same for flatpaks, really. It is, right now, the only way you can distribute a piece of software for Linux and expect it to work anywhere.

This just isn't true. Firefox isn't distributed as a flatpak, but the version from Mozilla runs everywhere. We package Ardour for every version of Linux except NixOS (because they clobber LD_LIBRARY_PATH).

And flatpaks present nasty issues when used with software that can dynamically load 3rd party dlls.

PureTryOut

@blacklight That article hits home indeed. I never understood why huge frameworks like Electron became popular, and I sincerely believe that it, and the tools that come along with it, are why the web is going to shit nowadays. You can't just render a simple HTML page anymore; now you need to pull in giant JS frameworks that slow your PC to a crawl.

I read an article a while ago that advocated for slowing down internet connections on purpose just like we have speed limits for cars. I really agreed with it.

Fabio Manganiello

@bart@fam-ribbers.com The argument for purposefully introducing friction on the infrastructure to push software engineers to write more optimized software is compelling indeed - and it's often pushed by telecom companies that want software companies to pay their fair share of network usage for all the wasted MBs transferred on the wire.

But I'm still hesitant to embrace that argument, because it would be an acknowledgement of failure - the failure of the software industry to self-regulate its usage of resources without external constraints limiting the supply of those resources.

> I never understood why huge frameworks like Electron became popular, and I sincerely believe that it, and the tools that come along with it, are why the web is going to shit nowadays

As I wrote in another post, I think that it's a failure on the language side.

JavaScript has never had a standard library like many other languages do. Even simple operations like making an HTTP request, or parsing cookies or the query string, require either an external library or writing functions that reinvent the wheel again and again for different browsers/Web engines. Things have also been slowed down by Microsoft, of course - the ECMAScript committee tried to push things forward, but the majority of folks kept using a browser that stubbornly refused to embrace anything new unless it was developed by its parent company.

Now the Microsoft problem is largely gone, but the consequences remain in the form of a language that is the most widely used for UI development, yet has been in a half-baked state for so long that too many libraries and frameworks have come in to fill the void.

In an ideal world, it should be possible to use vanilla JavaScript to write a frontend that works on any system, both in a browser and as a stand-alone app. In an ideal world, that shouldn't involve running a Vue/React CLI init to download a few tens of MBs of dependencies, plus Babel, browserify and tons of other frameworks and libraries; I shouldn't have to download another few dozen MBs just to have static typing support, and another few dozen just to get the ability for my code to run both stand-alone and in a browser: all that stuff should have been part of the standard language.
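For what it's worth, a couple of those primitives did eventually land as standard browser APIs, which makes the gap concrete. Here's a rough sketch of what the "it's just part of the platform" experience looks like today - fetch and URLSearchParams are real standard APIs, while the cookie parsing is a deliberately naive hand-rolled helper, since no standard one exists:

```typescript
// HTTP request: one standard call, no external library needed.
const res = await fetch("https://example.com/api/items");
const items: unknown = await res.json();

// Query-string parsing: standardized as URLSearchParams.
const params = new URLSearchParams(window.location.search);
const page = Number(params.get("page") ?? "1");

// Cookie parsing: still hand-rolled string splitting, as argued above.
const cookies: Record<string, string> = Object.fromEntries(
  document.cookie.split("; ").filter(Boolean).map((pair) => {
    const i = pair.indexOf("=");
    return [pair.slice(0, i), decodeURIComponent(pair.slice(i + 1))];
  }),
);
```

Everything above runs in any modern browser (as a module script) with no build step - which is exactly the point: the pieces that made it into the standard no longer need a framework.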

PureTryOut

@blacklight I see no problem with admitting defeat if it helps fix things. The "market" clearly can't regulate itself, so let's stop giving it the freedom to do so and start imposing artificial constraints. Maybe then we can finally start getting rid of the huge pile of e-waste and reduce power drain, rather than just continuously making more and more hardware.

Kote Isaev

@blacklight Fatal error: operation expectations.increase not implemented on User class. Really, how can you imagine users would "just walk away from the products that can't deliver on their expectations"? For many users, in many spheres, the only way to walk away from the existing apps is to go offline - a gadget-less, zero-IT, no-computers life - and that is something only the 0.001% of enthusiasts can afford. The ordinary user often can't even make that "f..k you" move.
