diff --git a/docs/source/cluster_submission.rst b/docs/source/cluster_submission.rst
index 6b7e2f27..3fff1021 100644
--- a/docs/source/cluster_submission.rst
+++ b/docs/source/cluster_submission.rst
@@ -95,11 +95,10 @@ For example, to specify that a parallelized operation requires **4** processing
 
 .. code-block:: python
 
-    from flow import FlowProject, directives
+    from flow import FlowProject
     from multiprocessing import Pool
 
-    @FlowProject.operation
-    @directives(np=4)
+    @FlowProject.operation.with_directives({"np": 4})
     def hello(job):
         with Pool(4) as pool:
             print("hello", job)
@@ -112,7 +111,7 @@ All directives are essentially conventions, the ``np`` directive in particular m
 
 .. tip::
 
-    Note that all directives may be specified as callables, e.g. ``@directives(np = lambda job: job.doc.np)``.
+    Note that all directives may be specified as callables, e.g. ``@FlowProject.operation.with_directives({"np": lambda job: job.doc.np})``.
 
 Available directives
 --------------------
@@ -147,27 +146,27 @@ Using these directives and their combinations allows us to realize the following
 
 .. glossary::
 
     serial:
-        ``@flow.directives()``
+        ``@FlowProject.operation``
 
         This operation is a simple serial process, no directive needed.
 
     parallelized:
-        ``@flow.directives(np=4)``
+        ``@FlowProject.operation.with_directives({"np": 4})``
 
         This operation requires 4 processing units.
 
     MPI parallelized:
-        ``@flow.directives(nranks=4)``
+        ``@FlowProject.operation.with_directives({"nranks": 4})``
 
        This operation requires 4 MPI ranks.
 
     MPI/OpenMP Hybrid:
-        ``@flow.directives(nranks=4, omp_num_threads=2)``
+        ``@FlowProject.operation.with_directives({"nranks": 4, "omp_num_threads": 2})``
 
        This operation requires 4 MPI ranks with 2 OpenMP threads per rank.
 
     GPU:
-        ``@flow.directives(ngpu=1)``
+        ``@FlowProject.operation.with_directives({"ngpu": 1})``
 
        The operation requires one GPU for execution.
diff --git a/docs/source/flow-group.rst b/docs/source/flow-group.rst
index d6a2d7d8..ec2226da 100644
--- a/docs/source/flow-group.rst
+++ b/docs/source/flow-group.rst
@@ -83,16 +83,15 @@ In the following example, :code:`op1` requests one GPU if run by itself or two G
 
 .. code-block:: python
 
     # project.py
-    from flow import FlowProject, directives
+    from flow import FlowProject
 
     class Project(FlowProject):
         pass
 
     ex = Project.make_group(name='ex')
 
-    @ex.with_directives(directives=dict(ngpu=2))
-    @directives(ngpu=1)
-    @Project.operation
+    @ex.with_directives({"ngpu": 2})
+    @Project.operation.with_directives({"ngpu": 1})
     def op1(job):
         pass
diff --git a/docs/source/recipes.rst b/docs/source/recipes.rst
index 01f272ed..cf92aac8 100644
--- a/docs/source/recipes.rst
+++ b/docs/source/recipes.rst
@@ -253,21 +253,21 @@ You could run this operation directly with: ``mpiexec -n 2 python project.py run
 
 MPI-operations with ``flow.cmd``
 --------------------------------
 
-Alternatively, you can implement an MPI-parallelized operation with the ``flow.cmd`` decorator, optionally in combination with the ``flow.directives`` decorator.
+Alternatively, you can implement an MPI-parallelized operation with the ``flow.cmd`` decorator, optionally in combination with the ``FlowProject.operation.with_directives`` decorator.
 This strategy lets you define the number of ranks directly within the code and is also the only possible strategy when integrating external programs without a Python interface.
 Assuming that we have an MPI-parallelized program named ``my_program``, which expects an input file as its first argument and which we want to run on two ranks, we could implement the operation like this:
 
 .. code-block:: python
 
-    @FlowProject.operation
+    @FlowProject.operation.with_directives({"np": 2})
     @flow.cmd
-    @flow.directives(np=2)
     def hello_mpi(job):
         return "mpiexec -n 2 mpi_program {job.ws}/input_file.txt"
 
 The ``flow.cmd`` decorator instructs **signac-flow** to interpret the operation as a command rather than a Python function.
-The ``flow.directives`` decorator provides additional instructions on how to execute this operation and is not strictly necessary for the example above to work.
+The ``@FlowProject.operation.with_directives(...)`` decorator provides additional instructions on how to execute this operation.
+The decorator ``@FlowProject.operation`` does not assign any directives to the operation.
 However, some script templates, including those designed for HPC cluster submissions, will use the value provided by the ``np`` key to compute the required compute ranks for a specific submission.
 
 .. todo::
@@ -276,7 +276,7 @@ However, some script templates, including those designed for HPC cluster submiss
 
 .. tip::
 
-    You do not have to *hard-code* the number of ranks, it may be a function of the job, *e.g.*: ``flow.directives(np=lambda job: job.sp.system_size // 1000)``.
+    You do not have to *hard-code* the number of ranks, it may be a function of the job, *e.g.*: ``FlowProject.operation.with_directives({"np": lambda job: job.sp.system_size // 1000})``.
 
 
 MPI-operations with custom script templates
@@ -327,8 +327,7 @@ For example, assuming that we wanted to use a singularity container named ``soft
 
 .. code-block:: jinja
 
-    @Project.operation
-    @flow.directives(executable='singularity exec software.simg python')
+    @Project.operation.with_directives({"executable": "singularity exec software.simg python"})
     def containerized_operation(job):
         pass
 
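For reference, a minimal ``project.py`` combining the updated ``with_directives`` patterns from the patch above might look like the following sketch. The operation names, the ``job.doc.np`` field, and the specific directive values are illustrative assumptions, not part of the patch.

.. code-block:: python

    # project.py -- illustrative sketch of the updated directives syntax.
    # Operation names and the job.doc.np field are hypothetical examples.
    from flow import FlowProject


    class Project(FlowProject):
        pass


    # Serial operation: no directives needed.
    @Project.operation
    def initialize(job):
        pass


    # Parallelized operation: directives may be callables evaluated per job.
    @Project.operation.with_directives({"np": lambda job: job.doc.np})
    def compute(job):
        pass


    # MPI/OpenMP hybrid operation: 4 ranks with 2 OpenMP threads per rank.
    @Project.operation.with_directives({"nranks": 4, "omp_num_threads": 2})
    def simulate(job):
        pass


    if __name__ == "__main__":
        Project().main()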