WIP: enable RFC 14 resource ranges #6632
base: master
Conversation
force-pushed from b3d55c2 to 216a80e
force-pushed from 216a80e to 3382974
Problem: the version check in jj.c triggers a fatal error if version != 1, but versions other than 1 will be needed for resource ranges. Change the error to a warning that version numbers other than 1 are unsupported, and continue execution.
Problem: now that jobspec with version numbers other than 1 continues to execute in libjob, several tests fail that expect such jobs to terminate. Modify the tests to expect successful completion, and detect the new warning message in t0022.
Problem: the version check in jobspec.c triggers a fatal error if version != 1, but versions other than 1 will be needed for resource ranges. Change the error to a warning that version numbers other than 1 are unsupported, and continue execution.
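A minimal sketch of the change described in these last two commits, assuming illustrative names and message text (the actual jj.c and jobspec.c code differs in detail):

```c
#include <stdio.h>

/* Sketch only: demote the version check from a fatal error to a
 * warning so execution continues for otherwise-conformant jobspec
 * (e.g. one using RFC 14 ranges).  Function name and message text
 * are hypothetical, not the real flux-core code. */
static int check_version (int version)
{
    if (version != 1)
        fprintf (stderr,
                 "warning: jobspec version %d is unsupported, "
                 "continuing anyway\n",
                 version);
    return 0;  /* formerly: return -1, treating version != 1 as fatal */
}
```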
Problem: other commits in this PR allow version numbers other than 1, so a different parameter is needed in t2608 to cause an early shell error. Modify the early fatal error test in t2608 to use an integer instead of the required dictionary for '.attributes.system.environment'.
Problem: the jobspec_parse() function needs information about the actual resource allocation in order to resolve ranges in jobspec, but it currently receives only the jobspec string. As an initial step to simplify the data flow to jobspec_parse(), move the initial conversion of jobspec from string to JSON, and the top-level field extraction, from jobspec_parse() into lookup_jobspec_get().
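Roughly the shape of that refactor, as a hedged sketch using jansson (the helper name and error handling are mine, not the actual flux-core code):

```c
#include <jansson.h>

/* Hypothetical helper: decode the jobspec string and extract top-level
 * fields up front, so that jobspec_parse() can later be handed decoded
 * JSON plus the R allocation instead of a raw string.  Detailed error
 * reporting is elided. */
static json_t *jobspec_decode (const char *jobspec_str,
                               json_t **resources,
                               json_t **tasks)
{
    json_error_t error;
    json_t *o;

    if (!(o = json_loads (jobspec_str, 0, &error)))
        return NULL;
    /* "s:o" borrows references owned by 'o' */
    if (json_unpack (o, "{s:o s:o}",
                     "resources", resources,
                     "tasks", tasks) < 0) {
        json_decref (o);
        return NULL;
    }
    return o;
}
```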
Problem: resource counts are currently parsed only from jobspec, but if ranges are used, the values cannot be determined from it alone. Modify the shell init data flow to use the actual allocation from the scheduler in R to determine node and core counts.
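For illustration, counts can be derived from the RFC 20 R_lite array rather than from jobspec; this sketch (function name is mine, and it assumes a regular allocation with the same core set on every rank, with cleanup on failure elided) shows the idea:

```c
#include <jansson.h>
#include <flux/idset.h>

/* Illustrative only: derive node and per-node core counts from the
 * RFC 20 R_lite array in R instead of from the jobspec counts. */
static int counts_from_R (json_t *R, int *nnodes, int *cores_per_node)
{
    json_t *R_lite;
    json_t *entry;
    size_t index;
    int total = 0;

    if (json_unpack (R, "{s:{s:o}}", "execution", "R_lite", &R_lite) < 0)
        return -1;
    json_array_foreach (R_lite, index, entry) {
        const char *ranks, *cores;
        struct idset *rankids, *coreids;

        if (json_unpack (entry, "{s:s s:{s:s}}",
                         "rank", &ranks,
                         "children", "core", &cores) < 0)
            return -1;
        if (!(rankids = idset_decode (ranks))
            || !(coreids = idset_decode (cores)))
            return -1;
        total += idset_count (rankids);
        *cores_per_node = idset_count (coreids);
        idset_destroy (rankids);
        idset_destroy (coreids);
    }
    *nnodes = total;
    return 0;
}
```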
Problem: slots are not included in the RFC 20 resource set R that is returned by the scheduler, so if both slots and cores are ranges, one must resolve them consistently (i.e., guess what the scheduler did). Add infrastructure for parsing and looping over RFC 14 ranges and use it to determine a consistent node/slot/core count combination.
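A sketch of what stepping through an RFC 14 range involves. Per RFC 14, a count may be an integer or a dict with min, max, operator, and operand keys, where the operator is "+", "*", or "^" (defaulting to "+" with operand 1); this helper is illustrative, not the infrastructure this commit actually adds:

```c
/* Advance 'current' by one step of an RFC 14 range.  For "^" the next
 * value is current raised to the operand power; operand > 1 is assumed
 * there so iteration makes progress. */
static int range_next (int current, char op, int operand)
{
    switch (op) {
        case '*':
            return current * operand;
        case '^': {
            int result = current;
            for (int i = 1; i < operand; i++)
                result *= current;
            return result;
        }
        default:  /* "+" */
            return current + operand;
    }
}

/* Usage: enumerate every count the range [min, max] permits, e.g.
 *   for (int n = min; n <= max; n = range_next (n, op, operand)) ...
 */
```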
Problem: the shell initialization now gets most of its resource info from R, not the jobspec, so this test is already mostly useless, and it should become completely obsolete if task information can be added to R. Remove the test, since refactoring it would be a lot of work on something that should eventually be removed anyway.
force-pushed from 3382974 to abf8e3e
Thanks for your well-considered comments and for taking the time to study the Flux design. Some of the details have spilled out of our caches (mine at least), so there may be a bit of delay paging them back in - apologies in advance.
That seems like a sensible goal to me, as long as we can keep the production systems on well-trodden/tested pathways with the validator.
I think perhaps that was the original plan when jobspec was designed, but IRL we found we needed to parse it in the shell to generate the taskmap, and also in the flux-coral2 shell plugin for Cray PMI.
@garlick Thank you for the reply! I know this stuff probably isn't high priority for you guys what with El Capitan in focus, but I really appreciate the willingness to engage on other ideas too :)
I actually think these are both good examples of how, currently, the only reason these functions go to the jobspec for resource information at all is that the slot information wasn't available in R. This means the …
This is my initial attempt to investigate what is required to enable resource ranges in jobspec à la RFC 14. It is very rough at this point (in particular, there is lots of work to do on the testsuite), but I am interested in discussion of some potential design considerations that have come up. The code so far allows jobspec with ranges that is otherwise V1-compliant to be allocated and run successfully, but at least the job-list service still issues a non-fatal error, and I've simply ignored sched-simple at this point.
Firstly, because flux-sched already largely supports ranges, once a couple of explicit version checks are tweaked in libjob and shell, we can already get back a resource set allocation from fluxion. From a design perspective, my personal opinion about these version checks is that it should be exclusively the remit of the validator to actually fail jobspec purely based on the version number. If a jobspec passes validation, then it seems to me that other components should at most warn about unsupported version numbers, but otherwise try their best to handle the rest of the jobspec (in future, I suppose a version check could also determine different code pathways for different versions). This would allow someone in future to define an extension of a supported version and not have other components arbitrarily fail even when all the information they require is present and conformant.
Secondly, the lack of information about slots in RFC 20 resource sets is problematic. In short, in my opinion the shell really shouldn't have to parse the jobspec resource section at all to determine what resources it will run on; it should just be able to consult the R allocation returned by the scheduler. flux-sched can already handle ranges for any or all of the resources in V1 jobspec, but if a range is used for the slot resource, then the slot count decided by the scheduler is not recorded in R. If a fixed count was used for the cores, the slot count can be determined fairly easily in V1, but if a range was also used for the cores, then the shell has to re-determine (guess, really) what the chosen slot and core counts should be (e.g. if 6 cores are allocated per node in R, then you could potentially have slot:core counts of 1:6, 2:3, 3:2, or 6:1).
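To make the ambiguity concrete, here is a small standalone sketch (the names and output format are mine, not the PR's) that enumerates the consistent slot:core splits for that 6-core example; the real shell code would additionally filter these candidates against the ranges the jobspec requested:

```c
#include <stdio.h>

/* Given the per-node core count observed in R, enumerate every
 * slot:core split that consumes it exactly.  With 6 cores this
 * prints 1:6, 2:3, 3:2, and 6:1. */
int main (void)
{
    int cores_per_node = 6;

    for (int slots = 1; slots <= cores_per_node; slots++) {
        if (cores_per_node % slots == 0)
            printf ("%d slot(s) of %d core(s)\n",
                    slots, cores_per_node / slots);
    }
    return 0;
}
```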
In my mind, the ideal long-term solution is to have slot information included in the allocated resource set. I know it's a "virtual" resource, but it's still part of the scheduler's decision-making, and especially looking ahead to more complicated possibilities for canonical jobspec, it seems like it will definitely be important to preserve the slot information in the resource graph allocated by the scheduler.
For now, I've added infrastructure to the shell initialization that resolves a slot:core count combination if needed, but it perhaps raises the question of whether it even makes sense to allow ranges for both the slot and core resources simultaneously. Philosophically, I generally prefer having fewer restrictions, lest some future user come up with a strange use case one never predicted, but requiring a fixed count for at least one of the slot or core resources does greatly simplify things at this point, and it doesn't seem too restrictive to forbid leaving it entirely to the scheduler to decide what combination of slots and cores is allocated!