@chandlerc I'm not a fan of using %-diffs to argue for the effectiveness of performance improvements. More often than not, these numbers lead people astray.
For all we know, the 0.3% penalty might only look small because it's overshadowed by some other severe inefficiency in the codebase.
There's an interesting effect where inefficient code suffers *less* from adding *more* inefficiency, because it's already bottlenecked: the new cost hides behind the existing stall instead of extending the critical path.
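To make that concrete, here's a minimal, hypothetical microbenchmark (all names and numbers are mine, not from any real codebase): the same stand-in bounds check is added to a memory-bound pointer chase, where it hides behind cache-miss stalls, and to a compute-bound loop, where it competes for the same execution resources. The %-diff it reports depends on the bottleneck, not on the check's absolute cost:

```cpp
// Sketch only. Compile with: c++ -O2 -std=c++17 bottleneck.cc
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Times fn() and returns elapsed seconds.
template <typename F>
double time_it(F fn) {
    auto start = std::chrono::steady_clock::now();
    fn();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start)
        .count();
}

volatile uint32_t sink;  // keeps the loops from being optimized away

int main() {
    // A shuffled permutation far larger than the last-level cache, so the
    // pointer chase below stalls on a cache miss nearly every iteration.
    constexpr size_t N = size_t{1} << 24;  // 16M entries = 64 MiB
    std::vector<uint32_t> next(N);
    std::iota(next.begin(), next.end(), 0u);
    std::shuffle(next.begin(), next.end(), std::mt19937{42});

    // Memory-bound loop: the added check overlaps the cache-miss stall.
    auto chase = [&](bool checked) {
        uint32_t i = 0, trips = 0;
        for (size_t n = 0; n < N; ++n) {
            if (checked && i >= next.size()) ++trips;  // stand-in bounds check
            i = next[i];
        }
        sink = i + trips;
    };

    // Compute-bound loop: the same check sits on a busy execution path,
    // so its relative cost becomes visible.
    auto arith = [&](bool checked) {
        uint32_t x = 1, trips = 0;
        for (size_t n = 0; n < N; ++n) {
            if (checked && x == 0) ++trips;       // same stand-in check
            x = x * 1664525u + 1013904223u;       // sequential LCG chain
        }
        sink = x + trips;
    };

    double m0 = time_it([&] { chase(false); });
    double m1 = time_it([&] { chase(true); });
    double c0 = time_it([&] { arith(false); });
    double c1 = time_it([&] { arith(true); });
    std::printf("memory-bound:  +%.1f%% from the check\n", 100.0 * (m1 - m0) / m0);
    std::printf("compute-bound: +%.1f%% from the check\n", 100.0 * (c1 - c0) / c1);
}
```

On typical hardware I'd expect the memory-bound %-diff to be near zero and the compute-bound one to be clearly nonzero, even though the check itself is identical in both loops.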
@dist1ll So, if you look at the referenced blog post[1], we actually clarified what this represents: it is 0.3% across Google's entire main production fleet. Our fleet's performance is dominated by the hottest services, a relatively small percentage of them, your classic long-tailed distribution. Those services are **incredibly** optimized systems; we have large teams doing nothing but removing every tiny inefficiency we can find.
[1]: https://security.googleblog.com/2024/11/retrofitting-spatial-safety-to-hundreds.html
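In other words, a fleet-wide %-diff is effectively a cycle-weighted average, so the heavily optimized hot services dominate the number rather than masking it. A tiny sketch with made-up shares and regressions (none of these figures are from the post):

```cpp
#include <cstdio>

int main() {
    // Hypothetical fleet: fraction of total fleet cycles each bucket of
    // services consumes, and the relative regression hardening costs it.
    struct Bucket { double cycle_share, regression; };
    const Bucket fleet[] = {
        {0.60, 0.002},  // hottest, most-optimized services: 60% of cycles
        {0.30, 0.004},  // warm services
        {0.10, 0.010},  // long tail: biggest relative hit, fewest cycles
    };
    double total = 0.0;
    for (const auto& b : fleet) total += b.cycle_share * b.regression;
    std::printf("fleet-wide regression: %.2f%%\n", 100.0 * total);  // 0.34%
}
```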