
Commit

create group of n_task before submitting all the tasks -- seems to work for daks3
lbesnard committed Jun 14, 2024
1 parent 7252697 commit dd16e4d
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions aodn_cloud_optimised/lib/CommonHandler.py
@@ -497,6 +497,9 @@ def wait_for_no_workers(client):
# the consecutive parallel tasks don't think it's an empty dataset.
# TODO: the code seems to work fine in parallel instead of sequentially; however, if too many tasks are submitted at
# once, even as few as 20, everything becomes very slow and hangs. I never have the patience to wait.
# TODO: CPU usage for zarr never seems to exceed 25%, so maybe a smaller machine would be better, and memory could
# be smaller. I didn't see more than 5 GB used to append to a zarr file. However, the memory leak grows over
# time, so it may be a good idea to restart the cluster every n=50 files.

submit_tasks_in_batches(client, task, obj_ls, n_tasks)

