Reduce overhead of sampling profiler by having only one thread do it #6433
Conversation
steven-johnson
left a comment
LGTM, but the explanatory description in this PR should really be added in code comments somewhere.
string pipeline_name;
bool in_fork = false, in_parallel = false, in_leaf_task = false;
ubernit: this is an unusual formatting for our code; we almost always put member var declarations one-per-line.
I'm a bit nervous about how hard it is to test this. I'd appreciate it if someone could run an important production pipeline at Google before and after and tell me whether the profile looks reasonable.
Actually, a systematic test of the app suite is probably enough. I'll do that and report back.
The geomean overhead of the old profiler is 19% across the apps (usually the overhead is zero, but there are a couple of apps with fine-grained compute where it is large). The new profiler has 3% geomean overhead. The few profiles I've spot-checked look reasonable. Merging.
The current built-in profiler has a lot of overhead for fine-grained compute_at schedules. E.g. for the bgu app, turning on the profiler inflates runtime by about 50%. This happens because all threads write their current state to the same cache line, causing a lot of cross-core cache traffic: each of these writes is effectively a cache miss.
This PR changes it so that whenever we have lots of threads all doing the same thing (i.e. in a leaf parallel loop body and not inside a fork node), one of them is elected to write to that status field. The election is done by racing to grab a pipeline-scope token using an atomic op; the winner does the reporting. This speeds things up in two ways. First, the threads that don't write don't incur the cache misses. Second, the thread that does write can keep that line in cache, with the sampler thread just snooping on the bus traffic when it wants to read, instead of invalidating the cache line (assuming I remember how MESI works properly).
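The election described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the struct and function names are invented, and the real implementation lives in Halide's runtime with more state and memory-ordering care than shown here. The core idea is just an atomic exchange on a pipeline-scope token, where exactly one thread sees the token transition from free to claimed.

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch of the token-election idea (names are illustrative,
// not Halide's). The token lives at pipeline scope; threads entering a leaf
// parallel loop race to claim it, and only the winner writes the status
// field that the sampler thread reads.
struct ProfilerState {
    std::atomic<int> sampling_token{0};  // 0 = free, 1 = claimed
    std::atomic<int> current_func{-1};   // status field the sampler samples
};

// Returns true iff this thread won the race and should do the reporting.
// exchange() returns the previous value, so exactly one caller sees 0.
bool claim_sampling_token(ProfilerState &state) {
    return state.sampling_token.exchange(1, std::memory_order_relaxed) == 0;
}

// The winner releases the token when it leaves the parallel loop body,
// letting some thread in the next parallel region win the next election.
void release_sampling_token(ProfilerState &state) {
    state.sampling_token.store(0, std::memory_order_relaxed);
}
```

Losing threads simply skip the status write entirely, which is what removes the cache-line ping-ponging: only the single winner (and the sampler thread reading it) ever touch that line.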