[Core][REP] GPU Memory awareness scheduling #47
base: main
Can we support string-based syntactic sugar? It feels more Pythonic that way (e.g., gpu_memory="3gb").
For now we just follow how memory is defined. I think the Pythonic string support can be done separately, as a change that covers both gpu_memory and memory.
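The string-based sugar suggested above could be handled by a small parser along these lines. This is a hypothetical helper for illustration only, not part of Ray's API; the accepted units and the choice of binary (1024-based) multipliers are assumptions.

```python
import re

# Hypothetical helper (not Ray's API) illustrating the string-based sugar
# suggested above, e.g. gpu_memory="3gb" -> bytes. Binary units are assumed.
_UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def parse_memory(value):
    """Accept an int (bytes) or a string like '3gb' and return bytes."""
    if isinstance(value, int):
        return value
    match = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([kmgt]?b)", value.strip().lower())
    if match is None:
        raise ValueError(f"Unrecognized memory string: {value!r}")
    number, unit = match.groups()
    return int(float(number) * _UNITS[unit])

print(parse_memory("3gb"))  # 3221225472
```

Because both gpu_memory and memory would go through the same parser, the two options would stay consistent.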
Is it possible to express 2 GPUs using gpu_memory, or is that not allowed?
Can you specify this in the REP?
It's not allowed: only one of num_gpus or gpu_memory (one GPU per request) can be specified in a request.
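A minimal sketch of the exclusivity rule described above. The function name and error messages are hypothetical, not Ray's actual validation code.

```python
# Hypothetical validation sketch (not Ray's implementation) of the rule
# above: a request may set num_gpus or gpu_memory, but not both.
def validate_gpu_request(num_gpus=None, gpu_memory=None):
    if num_gpus is not None and gpu_memory is not None:
        raise ValueError(
            "Only one of num_gpus or gpu_memory may be specified per request."
        )
    if gpu_memory is not None and gpu_memory <= 0:
        raise ValueError("gpu_memory must be positive.")
    return num_gpus, gpu_memory
```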
Could they both be allowed? If both num_gpus and gpu_memory are specified, then the request would require that much memory on that many GPUs. num_gpus would default to 1, so not specifying it would give the behavior described above. It could be an error condition to specify a fractional value for num_gpus together with gpu_memory. Thoughts?
If you have a 40GB GPU, schedule one task with gpu_memory=20GB, and then schedule another task with num_gpus=1, would the second one fail to schedule?
Yes, the second one will fail, since the GPU resource remaining after scheduling the 20GB task will be 0.5.
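The arithmetic behind that answer can be shown directly. This assumes the semantics described in the thread (a gpu_memory request is converted into a fractional GPU share on the target node); it is an illustration, not Ray's scheduler code.

```python
# Illustration of the accounting described above (assumed semantics, not
# Ray's actual scheduler): a gpu_memory request consumes a fraction of one
# GPU proportional to the requested memory.
def remaining_gpu_fraction(total_memory_bytes, requested_memory_bytes):
    """Fraction of one GPU left after a gpu_memory request is scheduled."""
    return 1.0 - requested_memory_bytes / total_memory_bytes

GB = 1024**3
left = remaining_gpu_fraction(40 * GB, 20 * GB)
print(left)         # 0.5
print(left >= 1.0)  # False -> a subsequent num_gpus=1 task cannot fit
```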
I think we need an observability section here, since this complicates the observability semantics.
In ray list nodes, it will be GPU (resources left) * gpu_memory_per_gpu, where gpu_memory_per_gpu is the constant stored in the node label. ray status, ray list tasks, and ray.available_resources currently don't show GPU memory, but if we added it, it would be the same as ray list nodes. And yes, basically both the gpu and gpu_memory values are subtracted to show the remaining amounts.
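The derivation described above can be sketched as follows. The function name is hypothetical; the formula (remaining fractional GPU resource times the per-GPU memory constant from the node label) is the one stated in the comment.

```python
# Sketch of the observability math described above (an illustration, not
# actual Ray output): remaining GPU memory reported for a node is derived
# from the remaining fractional GPU resource and the constant per-GPU
# memory stored in the node label.
def gpu_memory_left(gpu_resources_left, gpu_memory_per_gpu):
    return gpu_resources_left * gpu_memory_per_gpu

GB = 1024**3
# After the 20GB-on-a-40GB-GPU example above, 0.5 GPU remains:
print(gpu_memory_left(0.5, 40 * GB) / GB)  # 20.0
```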