dist1ll

@chandlerc I'm not a fan of using %-diffs to make an argument about the effectiveness of performance improvements. More often than not, these numbers just lead people astray.

For all we know, the 0.3% penalty might just be so small because it's being overshadowed by some other severe inefficiency in the codebase.

There's an interesting effect where inefficient code will suffer *less* from adding *more* inefficient code, because it's already bottlenecked.
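
To make that effect concrete, here's a minimal sketch (mine, not from the thread; names and sizes are arbitrary): a memory-bound pointer-chasing loop where an added bounds check is largely hidden behind cache-miss stalls.

```cpp
// Sketch: in a loop dominated by cache misses, an extra compare-and-branch
// (the "added inefficiency") tends to vanish into the memory stalls.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <numeric>
#include <random>
#include <vector>

int main() {
    // A large random permutation so each hop is a likely cache miss.
    std::vector<std::size_t> next(1u << 24);
    std::iota(next.begin(), next.end(), std::size_t{0});
    std::shuffle(next.begin(), next.end(), std::mt19937_64{42});

    auto chase = [&](bool checked) {
        std::size_t i = 0;
        auto start = std::chrono::steady_clock::now();
        for (std::size_t step = 0; step < next.size(); ++step) {
            if (checked && i >= next.size()) std::abort();  // the extra work
            i = next[i];
        }
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld ms (i=%zu)\n",
                    checked ? "checked  " : "unchecked",
                    static_cast<long long>(ms), i);
    };

    chase(false);
    chase(true);  // typically within noise of the unchecked run
}
```

Run the same experiment on a tight compute-bound loop instead, and the check's relative cost shows up much more clearly.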

Chandler Carruth

@dist1ll So, if you look at the referenced blog post[1], we actually clarified what this represented. This is 0.3% across Google's entire main production fleet. Our fleet's performance is dominated by the hottest services, a relatively small percentage of them: your classic long-tailed distribution. Those services are **incredibly** optimized systems. We have large teams doing nothing but removing every tiny inefficiency we can find.

[1]: security.googleblog.com/2024/1
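
For context on where the overhead being measured comes from: the referenced post describes enabling libc++'s hardening mode, which adds a bounds check to indexing operations. A rough sketch of the shape of such a check (illustrative only, not libc++'s actual implementation):

```cpp
// Illustrative sketch of a bounds-checked operator[] (not libc++'s code).
#include <cstddef>
#include <cstdlib>

template <typename T>
struct checked_span {
    T* data;
    std::size_t size;

    // One compare-and-branch per access; for in-bounds, well-predicted
    // accesses this is nearly free on modern out-of-order CPUs.
    T& operator[](std::size_t i) {
        if (i >= size) std::abort();  // trap rather than corrupt memory
        return data[i];
    }
};
```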

dist1ll

@chandlerc (Thanks for the articles and response)

I'm curious: how much of that optimization is done on the infra side, compared to the application side? I was under the impression that orgs prioritize infra optimizations: PGO, data structures, stdlib stuff like memcpy, improving compilers, etc.

Perhaps I'm way off base. I guess what I'm curious about is how much effort is spent on application-specific optimizations, things that perhaps *don't* carry over to other parts of the codebase.

Chandler Carruth

@dist1ll The larger applications have their own teams driving application-side optimizations. That covers a *lot* of the larger applications.

And we then also have a large team that drives infrastructure level optimizations just like what you mention.

It's a joint effort, and both teams talk extensively. So for these systems, they are *very* well optimized. There are huge incentives to find and fix any significant inefficiencies.

dist1ll

@chandlerc Makes sense. In that case, congrats on getting such low overheads! Happy to see so much of the long-standing FUD around efficient spatial safety challenged.
