Chandler Carruth

@dist1ll So, if you look at the referenced blog post[1], we actually clarified what this represented. This is 0.3% across Google's entire main production fleet. Our fleet performance is dominated by the hottest services, a relatively small percentage: your classic long-tailed distribution. Those services are **incredibly** optimized systems. We have large teams doing nothing but removing every tiny inefficiency we can find.

[1]: security.googleblog.com/2024/1
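
For readers wondering what "spatial safety" means concretely here: the overhead being discussed comes from bounds checks added to C++ standard-library operations. Below is a minimal sketch, assuming libc++'s hardening mode (which this line of work is generally associated with); the file name, flags, and exact trap behavior are illustrative, not a statement of Google's build configuration.

```cpp
// Sketch: an out-of-bounds access that hardened libc++ turns into a trap
// instead of silent memory corruption. Hypothetical build command, assuming
// libc++ with hardening support:
//   clang++ -std=c++20 -stdlib=libc++ \
//     -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST demo.cpp -o demo
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};
    // With hardening enabled, operator[] carries a bounds check, so reading
    // index 3 aborts the process here. Without hardening, this is classic
    // undefined behavior: a silent read past the allocation.
    std::printf("%d\n", v[3]);
    return 0;
}
```

The 0.3% figure discussed above is, per the linked post, the fleet-wide cost of checks like the one in this sketch.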

dist1ll

@chandlerc (Thanks for the articles and response)

I'm curious: how much of that optimization is done on the infra side compared to the application side? I was under the impression that orgs prioritize infra-level optimizations, like PGO, data structures, stdlib routines such as memcpy, improving compilers, etc.

Perhaps I'm way off base. I guess what I'm curious about is how much effort is spent on application-specific optimizations, things that perhaps *don't* carry over to other parts of the codebase.

Chandler Carruth

@dist1ll The larger applications have their own teams driving application-side optimizations. That covers a *lot* of the larger applications.

And we then also have a large team that drives infrastructure level optimizations just like what you mention.

It's a joint effort, and both teams talk extensively. So for these systems, they are *very* well optimized. There are huge incentives to find and fix any significant inefficiencies.

dist1ll

@chandlerc Makes sense. In that case, congrats on getting such low overheads! Happy to see much of the long-standing FUD around efficient spatial safety challenged.
