Add SCCACHE_CC_PREFIX for wrapping C compiler #132
The environment variable should be called `SCCACHE_PREFIX`. I think it would be safe to make no assumptions about what the prefix tool does and treat it as a transparent launcher for the command line that is executed when a cache miss occurs. E.g. if the following call gives a cache miss:

```
sccache gcc -c -o hello.o hello.c
```

then instead of calling gcc directly, just expand to:

```
$SCCACHE_PREFIX gcc -c -o hello.o hello.c
```

(where `SCCACHE_PREFIX` would be set to e.g. `icecc`).
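A minimal sketch of that expansion, assuming a hypothetical helper (`build_compile_command` and its signature are made up for illustration, not sccache's real API):

```rust
use std::env;
use std::process::Command;

// Sketch only: on a cache miss, expand the compile into
// `$SCCACHE_PREFIX <compiler> <args...>` when the variable is set,
// treating the prefix as a transparent launcher.
fn build_compile_command(executable: &str, args: &[&str]) -> Command {
    match env::var("SCCACHE_PREFIX") {
        Ok(prefix) if !prefix.is_empty() => {
            let mut cmd = Command::new(prefix);
            cmd.arg(executable).args(args);
            cmd
        }
        _ => {
            let mut cmd = Command::new(executable);
            cmd.args(args);
            cmd
        }
    }
}

fn main() {
    // With SCCACHE_PREFIX=icecc this runs `icecc gcc -c -o hello.o hello.c`;
    // without it, plain `gcc -c -o hello.o hello.c`.
    let status = build_compile_command("gcc", &["-c", "-o", "hello.o", "hello.c"])
        .status()
        .expect("failed to spawn compiler");
    println!("compiler exited with {}", status);
}
```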
I made a quick hack that prefixes the compiler command with icecc:

```diff
@@ -364,7 +364,9 @@ pub fn compile<T>(creator: &T,
         Language::ObjectiveCxx => "objective-c++",
     };
-    let mut attempt = creator.clone().new_command_sync(executable);
+    let launcher = "/usr/bin/icecc";
+    let mut attempt = creator.clone().new_command_sync(launcher);
+    attempt.arg(executable);
     attempt.arg("-x").arg(language)
         .arg("-c")
```

...and I can see that icecc is being run. However, all compilations run on my local machine, so I see no speedup. sccache does not seem to spawn more compilation processes than I have cores (I do …).

Edit: Raising the thread count in …
That's the only hardcoded limit in sccache, and it's not a concurrent task limit (sccache will currently run as many compile jobs as you throw at it); it's just a limit on concurrent CPU-bound tasks that we run on a background thread pool, things like hashing input files. For C compilation it's really only used for hashing the contents of the compiler binary (which is only done once per compiler path, and cached) and for writing out the compiler outputs to disk on cache hits. There is an open PR to add jobserver support, which changes things a bit, but that hasn't been merged yet. I don't know much about how icecream works; does it have some default limit based on the local core count?
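For illustration, a minimal sketch of the kind of limit described, using the `threadpool` and `num_cpus` crates as stand-ins (an assumption for the sketch, not necessarily what sccache itself uses):

```rust
use threadpool::ThreadPool; // external crate, used here as a stand-in

fn main() {
    // A pool sized to the local core count, used only for CPU-bound
    // helper tasks (hashing the compiler binary, writing cached
    // outputs to disk). It does not cap concurrent compile jobs.
    // num_cpus is also an external crate.
    let pool = ThreadPool::new(num_cpus::get());
    for i in 0..32 {
        pool.execute(move || {
            println!("cpu-bound task {} on the bounded pool", i);
        });
    }
    pool.join(); // wait for all queued tasks to finish
}
```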
I'm not sure about the inner workings of icecc either, but the typical scenario is that you prefix your compilation command with `icecc`. The only way that icecc can give a performance boost is for it to run more compilation jobs (in this case, resolve more cache misses) in parallel than there are CPUs on your machine. Not sure, but maybe there's a difference between starting these jobs from parallel threads vs. from parallel processes?
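One rough way to probe that question, as a sketch (the source files and the job count here are made up): spawn many icecc-wrapped compiles from parallel threads of a single process and watch whether more than a core's worth run at once.

```rust
use std::process::Command;
use std::thread;

fn main() {
    // Spawn many icecc-wrapped compiles concurrently from one process.
    // If icecream distributes them, far more than `ncpus` should be in
    // flight at the same time. The hello<i>.c inputs are hypothetical.
    let handles: Vec<_> = (0..100)
        .map(|i| {
            thread::spawn(move || {
                Command::new("icecc")
                    .args(["gcc", "-c", "-o"])
                    .arg(format!("hello{}.o", i))
                    .arg(format!("hello{}.c", i))
                    .status()
                    .expect("failed to run icecc")
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```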
icecream won't start 100 jobs unless it has 100 job slots available across its network.
Well, we have some 200 cores in our LAN, so that's not the issue. It seems like somewhere along the line there's a thread/process throttle. All sccache clients connect to a single local sccache server, and that server spawns the compile jobs, in this case starting icecc client processes. Those icecc clients in turn synchronize with the local icecc daemon (which I believe does some kind of intelligent throttling to allow many distributed compilations but only a few local ones). It seems to me that the possible points of throttling are:

- the sccache server, when it spawns the compile jobs, or
- the icecc daemon, when the icecc client processes synchronize with it.

We will have to log and debug more to know which it is.
There's certainly not any intentional limiting of concurrent jobs in the sccache server, but it's possible that something doesn't work as expected in this scenario.
Is it possible to make this set-up work by simply changing the PATH? On my system, icecc gets installed with symbolic links to it named after the compilers (gcc, g++, and so on), so putting that directory first in the PATH should route plain compiler calls through icecc.
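As a sketch of why that would work: whichever directory appears first in PATH wins the lookup, so a minimal `which`-style resolver shows what a plain `gcc` call would actually invoke (the symlink layout is an assumption about a typical icecc install):

```rust
use std::env;
use std::path::PathBuf;

// Minimal `which`-style lookup: the first `name` found on PATH wins.
// If a directory of compiler-named icecc symlinks (e.g. `gcc` -> icecc)
// is prepended to PATH, a plain `gcc` call resolves there first.
fn which(name: &str) -> Option<PathBuf> {
    env::var_os("PATH").and_then(|paths| {
        env::split_paths(&paths)
            .map(|dir| dir.join(name))
            .find(|candidate| candidate.is_file())
    })
}

fn main() {
    match which("gcc") {
        Some(path) => println!("gcc resolves to {}", path.display()),
        None => println!("gcc not found on PATH"),
    }
}
```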
Adding that SCCACHE_PREFIX optional feature would be very nice, thank you.
With ccache, I set the `CCACHE_PREFIX` environment variable to `icecc` to have C and C++ compilation done in an icecream cluster. There should be an equivalent option to do this with sccache.

My current mozconfig for Firefox compilation looks like this:

…

Unfortunately, doing `ac_add_options --with-ccache='sccache icecc'` does not work because the configure script checks whether `'sccache icecc'` is executable.