Since different job types have different relative priorities, the number of available resources at each scheduled point depends on the job type being matched. Furthermore, the score of each vertex should depend on the number of elastic, adaptive, and rigid jobs allocated or reserved at the point in time in question.
So far I've been refactoring flux-sched to accommodate a `planner_t` for each job type. This approach has undesirable effects: the additional overhead of multiple planners, the code bloat of added logic for checking which job types are running at various times (and their relative priorities), and the inelegance of querying `g[v].schedule.allocations` or `g[v].schedule.reservations` to get jobids. Note that my testing indicates that the exclusive planner will also need a separate instance per job type.
Beyond that, if we allow adaptive jobs to hold reservations, `dfu_impl_t::upd_plan` (`resource/traversers/dfu_impl_update.cpp`, line 122 at af4447f):

`int dfu_impl_t::upd_plan (vtx_t u, const subsystem_t &s, unsigned int needs,`

will need to check which and how many job types are allocated/reserved during `jobmeta.at`. Such functionality is clumsy with the current planner design.
Should we extend planner and `scheduled_point_t` to store jobids and job types? An inspection of `planner.c` suggests that this could be an involved undertaking.
@dongahn and I discussed this issue today. We decided that modifying planner to include the specified metadata will add significant, undesirable complexity. @dongahn suggested that I subsume this information and the logic checks for availability into `planner_multi_t`. The implication of this change is that each `planner_t` will need to be refactored to be a `planner_multi_t`.
This question and potential enhancement is related to (and may supersede) PR #584, as well as issues #576, #558, and #552.
Scheduling of adaptive and elastic jobs extends the intention of `flux-sched/resource/planner/planner.c`, line 823 at 0524359.