This folder contains the necessary files to build a standalone reservoir simulator from JutulDarcy.jl using PackageCompiler.jl.
The main motivation is to create applications for users and systems where Julia is not installed, for integration into workflows where compilation is impractical, and for users who prefer to run simulations from the command line.
To build the simulator, start Julia with the project environment of the `mpi_simulator` folder:

```
cd mpi_simulator/
julia --project=.
```
If it is your first time building the simulator, execute the following to add the local JutulDarcyMPI package to the environment:
```
] # Enter package mode
dev ./JutulDarcyMPI # develop local package that contains application entry point
```
If this is your first time building the application, this next step will take some time: JutulDarcy and all other application dependencies are downloaded, precompiled and finally compiled. During the compilation process, a few standard test cases are run. Run the compile script from the Julia session:

```julia
include("compile_exe.jl")
```
The output should end with something that looks similar to the following:
```
[ Info: PackageCompiler: Done
✔ [10m:42s] PackageCompiler: compiling incremental system image
```
If PackageCompiler reports success, the application has been built for your platform.
The built application will appear in the subdirectory `mpi_simulator/compiled_simulator`. The entire `compiled_simulator` folder and all of its subdirectories (`bin`, `lib`, etc.) are required to run the simulator. This is important if you plan on moving the executable to another machine.
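Since the compiled application is specific to the operating system and architecture it was built on, moving it means copying the entire `compiled_simulator` folder. As a small example, on Linux or macOS you could archive it first (the archive name is arbitrary):

```
tar -czf compiled_simulator.tar.gz compiled_simulator
```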
You can now try to run the included test case:
On Windows:

```
.\compiled_simulator\bin\JutulDarcyMPI.exe .\data\SPE1\BENCH_SPE1.DATA
```

On Linux and macOS:

```
./compiled_simulator/bin/JutulDarcyMPI ./data/SPE1/BENCH_SPE1.DATA
```
The case should run in less than a second on most computers, indicating that the simulator does not need to compile anything.
The simulator can run a subset of .DATA input files, parsed using the GeoEnergyIO.jl parser. Not all keywords are supported. If you have an example you would like to get working, please post it on the GeoEnergyIO.jl issue page.
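If you want to check how a given deck is read before running a full simulation, one option is to parse it directly in a Julia session. This is a minimal sketch that assumes GeoEnergyIO's `parse_data_file` function and reuses the bundled SPE1 deck path from the example above:

```julia
using GeoEnergyIO

# Parse the input deck without running a simulation
# (path reused from the SPE1 example above)
data = parse_data_file("data/SPE1/BENCH_SPE1.DATA")

# List the top-level sections that were read from the deck
println(keys(data))
```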
The compiled simulator can also read cases exported from MRST using the jutul module. There is no special option needed to run these files, since the format is detected automatically from the file extension.
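For example, an exported MRST case (typically a .mat file; the path below is hypothetical) is run in exactly the same way as a .DATA file:

```
./compiled_simulator/bin/JutulDarcyMPI /path/to/exported_mrst_case.mat
```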
The results are currently written as JLD2 files. This format is a subset of HDF5 and can be read using standard HDF5 readers. You can recover the simulation results in a Julia session using the `read_results` function:

```julia
using Jutul
states, reports = read_results("/path/to/output/folder")
```
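The exact contents depend on the simulated model, so a reasonable first step is to inspect what was returned. In this sketch it is assumed that `states` holds one entry per stored report step and that `reports` holds the corresponding solver reports:

```julia
using Jutul

states, reports = read_results("/path/to/output/folder")

# Number of stored report steps
println(length(states))
# Field names in the final state (these vary with the model and physics)
println(keys(states[end]))
```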
Adding more user-friendly output (e.g. well solutions as CSV files) is a future goal.
There are many options that can be used to customize tolerances, linear solvers, time-stepping and output level. Running the executable without a filename prints the usage:
```
.\compiled_simulator\bin\JutulDarcyMPI.exe
required argument filename was not provided
usage: <PROGRAM> [--tol-cnv TOL_CNV] [--tol-mb TOL_MB]
[--tol-dp-well TOL_DP_WELL]
[--multisegment-wells MULTISEGMENT_WELLS]
[--relaxation RELAXATION]
[--tol-factor-final-iteration TOL_FACTOR_FINAL_ITERATION]
[--max-nonlinear-iterations MAX_NONLINEAR_ITERATIONS]
[--max-timestep-cuts MAX_TIMESTEP_CUTS] [--rtol RTOL]
[--backend BACKEND] [--linear-solver LINEAR_SOLVER]
[--precond PRECOND] [--timesteps TIMESTEPS]
[--timesteps-target-ds TIMESTEPS_TARGET_DS]
[--timesteps-target-iterations TIMESTEPS_TARGET_ITERATIONS]
[--max-timestep MAX_TIMESTEP]
[--initial-timestep INITIAL_TIMESTEP]
[--info-level INFO_LEVEL] [--verbose VERBOSE]
[--output-path OUTPUT_PATH] filename
```
You can get more information on the different options by passing `--help`.
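As an illustration of the syntax, several options can be combined in a single call. The values below are arbitrary and only meant to show how the flags are passed; check `--help` for the meaning and defaults of each option:

```
./compiled_simulator/bin/JutulDarcyMPI --info-level 1 --max-nonlinear-iterations 20 --output-path ./output/spe1 ./data/SPE1/BENCH_SPE1.DATA
```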
The executable is built with MPI support for parallel execution. To run with MPI, you must launch the program using the correct MPI executable. For more information on this topic, see the Julia MPI documentation. If the wrong MPI executable is used, each MPI process will simulate the entire case separately, leading to slowdown instead of speedup as redundant work is performed.
The easiest way to get started is to use the default MPI executable that ships with the MPI.jl package already installed in your build environment:
```julia
julia> using MPI

julia> MPI.install_mpiexecjl()

julia> MPI.mpiexec() # Will display the path
```
You will then get a listing that shows you the path of the MPI executable. In my case, that path is `C:\Users\olavm\.julia\artifacts\4a195fc1f9aa324d571de9f2e6efdf6bb3562edf\bin\mpiexec.exe`. I can then use that executable to simulate a case in parallel.
For example, to simulate with 2 processes I could do the following:
```
C:\Users\olavm\.julia\artifacts\4a195fc1f9aa324d571de9f2e6efdf6bb3562edf\bin\mpiexec.exe -n 2 .\compiled_simulator\bin\JutulDarcyMPI.exe data\SPE1\BENCH_SPE1.DATA
```
The SPE1 model is tiny and used to demonstrate the concept. Do not expect any performance benefit from using parallel computing for this case.
The `precompile_jutul_darcy_mpi.jl` script is used to cover code paths for compilation. If your compiled simulator takes too long to start up, you can add additional cases to this file. These cases are run during the compilation stage, which lets you cover more code paths. The downside is that the added cases will also run during compilation, which may slow down the build if they are large or complex.
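As a sketch of what such an addition could look like, the snippet below assumes that the script runs cases through JutulDarcy's `simulate_data_file` function and that a deck exists at the given (hypothetical) path; adapt it to match how the existing cases in precompile_jutul_darcy_mpi.jl are invoked:

```julia
using JutulDarcy

# Hypothetical extra case, added to exercise more code paths during compilation.
# The path is an example only; mirror how the existing cases in this script are run.
simulate_data_file(joinpath(@__DIR__, "data", "MY_CASE", "MY_CASE.DATA"))
```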
This application was primarily written as a demonstrator for compilation workflows with JutulDarcy. Contributions to ease of use, functionality and documentation are welcome.