# ecWAM
The ECMWF Ocean Wave Model (ecWAM) describes the development and evolution of wind-generated surface waves and their height, direction and period. ecWAM is solely concerned with ocean wave forecasting and does not model the ocean itself: dynamical modelling of the ocean can be done by an ocean model such as NEMO.
- ecWAM may be used as a standalone tool that can produce a wave forecast driven by external forcings provided via GRIB files.
- Alternatively, it can be used in a coupled mode where it provides feedback to and receives forcings from:
  - the atmospheric forecast model IFS
  - the dynamic ocean model NEMO
For more information, please go to https://confluence.ecmwf.int/display/FUG/2.2+Ocean+Wave+Model+-+ECWAM
## License

ecWAM is distributed under the Apache License Version 2.0.
See the `LICENSE` file for details.
## Requirements

Supported operating systems:

- Linux
- Apple MacOS

Other UNIX-like operating systems may work too out of the box.

Required dependencies:
- Fortran and C compiler, and optionally C++ compiler
- CMake (see https://cmake.org)
- ecbuild (see https://github.com/ecmwf/ecbuild)
- fiat (see https://github.com/ecmwf-ifs/fiat)
- eccodes (see https://github.com/ecmwf/eccodes)
Further optional dependencies:
- MPI Fortran libraries
- multio (see https://github.com/ecmwf/multio)
- ocean model (e.g. NEMO or FESOM)
- fypp (see https://github.com/aradi/fypp)
- field_api (see https://git.ecmwf.int/projects/RDX/repos/field_api/browse)
- loki (see https://github.com/ecmwf-ifs/loki)
Some driver scripts to run tests and validate results rely on availability of:
- md5sum (part of GNU Coreutils; on MacOS, install with `brew install coreutils`)
- Python with the pyyaml package
Building with field_api requires the following:
- Python with pyyaml package
- fypp
## Building ecWAM

Environment variables:

```shell
$ export ecbuild_ROOT=<path-to-ecbuild>
$ export MPI_HOME=<path-to-MPI>
$ export CC=<path-to-C-compiler>
$ export FC=<path-to-Fortran-compiler>
$ export CXX=<path-to-C++-compiler>
```
If you want to pre-download or install extra data files, run the following in the source directory before CMake (re)configuration:

```shell
$ share/ecwam/data/populate.sh
```
The preferred method for building ecWAM is to use the bundle definition shipped in the main repository. To do so, build the bundle via:

```shell
$ ./ecwam-bundle create   # Checks out dependency packages
$ ./ecwam-bundle build [--build-type=<build-type>] [--arch=<path-to-arch>] [--option]
```
The `CMAKE_BUILD_TYPE` can be specified as an option at the build step: `--build-type=<Debug|RelWithDebInfo|Release|Bit>`, default `RelWithDebInfo` (typically `-O2 -g`).
Environment variables and compiler flags relevant to specific architectures can be set by specifying the corresponding arch file at the build step. For example, to build on the ECMWF Atos system using Intel compilers and the hpcx-openmpi MPI library:

```shell
--arch=arch/ecmwf/hpc2020/intel/2021.4.0/hpcx-openmpi/2.9.0
```
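For instance, a complete build on that system could combine the build type and the arch file shown above:

```shell
# Check out dependencies, then build with the Atos Intel arch file
$ ./ecwam-bundle create
$ ./ecwam-bundle build --build-type=Release \
    --arch=arch/ecmwf/hpc2020/intel/2021.4.0/hpcx-openmpi/2.9.0
```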
The following bundle options can also be set at the build step:

- `--without-mpi` : Disable MPI
- `--without-omp` : Disable OpenMP
- `--with-field_api` : Build using the FIELD_API repo in `source/field_api`
Additional CMake options can also be configured at the build step using the following format:

```shell
--cmake="SOME_OPTION=<arg>"
```

For example:

```shell
--cmake="-DENABLE_TESTS=<ON|OFF>"
--cmake="-DCMAKE_INSTALL_PREFIX=<install-prefix>"
```
The above command can also be used to control compilation flags (only when defaults are insufficient) by modifying the following CMake variables:

```shell
-DOpenMP_Fortran_FLAGS=<flags>
-DCMAKE_Fortran_FLAGS=<fortran-flags>
-DCMAKE_C_FLAGS=<c-flags>
```
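Putting this together, a build that disables tests and sets an install prefix could look as follows, assuming (as the separate examples above suggest) that the `--cmake` argument can be passed more than once; the prefix path is illustrative:

```shell
$ ./ecwam-bundle build \
    --cmake="-DENABLE_TESTS=OFF" \
    --cmake="-DCMAKE_INSTALL_PREFIX=$HOME/opt/ecwam"
```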
Once this has finished successfully, ecWAM can be installed as follows:

```shell
$ cd build
$ make install
```
An informational tool `ecwam [--help] [--info] [--version] [--git]` is available upon compilation and can be used to verify compilation options and version information of ecWAM.
Optionally, tests can be run to check successful compilation, when the feature TESTS is enabled (`-DENABLE_TESTS=ON`, default ON):

```shell
$ ctest
```
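Standard CTest options apply, for example to print the output of failing tests or to run a subset of tests by name:

```shell
$ ctest --output-on-failure     # show output of tests that fail
$ ctest -R <test-name-pattern>  # run only tests matching a regular expression
```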
## Running ecWAM

The following are instructions to run ecWAM as a standalone wave forecasting tool.
A YAML configuration file is required. See the tests directory for examples, and the minimal sketch below.
To run, use the commands listed in the following steps, located in the build or install `bin` directory.
Each of the following commands can be queried with the `--help` argument.
If the `--run-dir` argument is not specified and the `ECWAM_RUN_DIR` environment variable is not set, then the current directory is used.
If the `--config` argument is not specified, it is assumed that a file called `config.yml` is present in the `<run-dir>`.
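As a rough illustration only, a minimal configuration might look like the sketch below; all key names here are assumptions for illustration, and the example files in the tests directory are the authoritative reference:

```yaml
# Hypothetical config.yml sketch -- key names are illustrative assumptions,
# not the real schema; see the tests directory for actual examples.
grid: O48                     # wave model grid
begin: 2023-01-01 00:00:00    # start of the simulation
end:   2023-01-02 00:00:00    # end of the simulation
forcings:
  file: forcings.grib         # GRIB input with surface winds and sea-ice cover
```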
1. Create bathymetry and grid tables

```shell
ecwam-run-preproc --run-dir=<run-dir> --config=<path-to-config.yml>
```

This command generates bathymetry data files as specified by configuration options.
As bathymetry data files are large and require heavy computation, they are
cached for later use in a directory which can be chosen with the `--cache`
argument or the `ECWAM_CACHE_PATH` environment variable.
By default the cache path is `$HOME/cache/ecwam`, except on the ECMWF HPC,
where it is `$HPCPERM/cache/ecwam`.

Bathymetry data files can also be searched for in a hierarchy of cache-like
directories specified with the `ECWAM_DATA_PATH` variable, which contains a
':'-separated list of paths (like `$PATH`). If not found there, an attempt is
made to download them from https://get.ecmwf.int/repository/ecwam. If still not
available, they will be computed. The cache path will then be populated with
computed or downloaded data, or with symbolic links to data found in the
`ECWAM_DATA_PATH`s.

Grid tables are always computed and never cached. They are placed in the `<run-dir>`.
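As an example of the cache behaviour described above (paths are illustrative):

```shell
# Search a shared data directory first, and cache results in a custom location
$ export ECWAM_DATA_PATH=/path/to/shared/ecwam-data
$ ecwam-run-preproc --run-dir=myrun --config=config.yml --cache=$HPCPERM/cache/ecwam
```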
2. Create initial conditions

```shell
ecwam-run-preset --run-dir=<run-dir> --config=<path-to-config.yml>
```

As a result, binary files of the form `<run-dir>/restart/BLS*` and `<run-dir>/restart/LAW*` are created.
They contain all initial conditions required for the wave model to run from a "cold start".
This command requires surface wind and sea-ice-cover input, at initial simulation time, provided in GRIB format.
The configuration file must specify this input file. For several benchmark or test cases,
these files are retrieved in a similar fashion as the bathymetry files (see above).
This package also contains some scripts to generate MARS requests to retrieve data from the ECMWF operational forecast or from the ERA5 reanalysis data set. This is useful for generating new tests or for longer runs.
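As a rough, hand-written illustration of such a retrieval (this is not one of the shipped scripts; parameters 165/166/31 are the 10 m wind components and sea-ice cover in the default GRIB parameter table):

```
# Hypothetical MARS request sketch; the shipped scripts generate the real ones.
retrieve,
  class   = ea,             # ERA5 reanalysis
  expver  = 1,
  type    = an,             # analyses
  levtype = sfc,            # surface fields
  param   = 165/166/31,     # 10u / 10v / sea-ice cover
  date    = 2023-01-01,
  time    = 00,
  target  = "forcings.grib"
```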
3. Run wave model

```shell
ecwam-run-model --run-dir=<run-dir> --config=<path-to-config.yml>
```

With initial conditions, forcings, and grid tables in place, we can run the actual wave model.
The advection and physics time steps need to be configured via the configuration file in accordance with the grid resolution.
The configuration file offers options to output GRIB fields at regular time intervals, or
binary restart files similar to the initial condition files generated in step 2.
After the run, the output files will be in `<run-dir>/output/` and log files will be in `<run-dir>/logs/model`.
One log file called `statistics.log` contains computed norms which can be used to validate results.
Such validation occurs automatically when the configuration file contains a validation section.
See the tests directory for example configuration files.
The above commands run without MPI by default. To use MPI, or to apply a custom command prefix to the binary execution, there are the following options:

- Use the argument `--launch="ecwam-launch -np <NTASKS> -nt <NTHREADS>"`.
  `ecwam-launch` is an internal "smart" launcher that chooses a good launcher depending on availability and the used platform.
  It will also set `export OMP_NUM_THREADS=<NTHREADS>` for you.
  The unit tests are automatically configured to use this when invoked via `ctest`.
- Use the arguments `-np <NTASKS> -nt <NTHREADS>`. This is equivalent to the above, and internally uses the `ecwam-launch` launcher.
- Use any other custom command, e.g. `--launch="srun -n <NTASKS> -c <NTHREADS>"`, for full control.
  Note that this does not automatically export the `OMP_NUM_THREADS` variable.

Note that only `ecwam-run-model` currently supports MPI.
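For example, to run the model on 4 MPI tasks with 8 OpenMP threads each (counts are illustrative):

```shell
$ ecwam-run-model --run-dir=myrun --config=config.yml \
    --launch="ecwam-launch -np 4 -nt 8"
```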
## GPU offload

The calculation of the source terms in ecWAM, i.e. the physics, can be offloaded to the GPU. There are three GPU-adapted variants of the ecWAM physics (in ascending order of performance):

- Single-column-coalesced (scc): Fuse vector loops and promote them to the outermost level to target the SIMT execution model
- scc-stack: The scc transformation with a pool allocator used to allocate temporary arrays
- scc-cuf: A CUDA Fortran variant of the scc transformation. Compile-time constants are used to define temporary arrays.
Single-node multi-GPU runs are also supported.
The first two GPU variants can be generated at build time using ECMWF's source-to-source translation toolchain Loki. The 'scc-cuf' variant has also been generated via Loki, but this transformation is not yet fully automated and thus cannot be run at build time.
To build the 'scc' and 'scc-stack' variants, the option `--with-loki` must be passed at the bundle build step. To build the 'scc-cuf' variant, the option `--with-cuda` needs to be specified.
The ecwam-bundle also provides appropriate arch files for the nvhpc suite on the ECMWF ATOS system.
Once built, the GPU-enabled variants can be run by passing an additional argument to the runner script:

```shell
ecwam-run-model --run-dir=<run-dir> --config=<path-to-config.yml> --variant=<loki-scc|loki-scc-stack|scc-cuf>
```

Please note that the 'scc-cuf' variant is currently only supported for the 'O320' grid.
The loki-scc variant uses the CUDA runtime to manage temporary arrays and needs a large `NV_ACC_CUDA_HEAPSIZE`, e.g. `NV_ACC_CUDA_HEAPSIZE=8G`.
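For example, to run the loki-scc variant with an enlarged CUDA heap (run directory and config name are illustrative):

```shell
$ export NV_ACC_CUDA_HEAPSIZE=8G
$ ecwam-run-model --run-dir=myrun --config=config.yml --variant=loki-scc
```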
For running with multiple OpenMP threads and grids finer than `O48`, `OMP_STACKSIZE` should be set to at least `256M`.
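For instance, for a multi-threaded CPU run on a finer grid (thread count illustrative):

```shell
$ export OMP_STACKSIZE=256M
$ ecwam-run-model --run-dir=myrun --config=config.yml \
    --launch="ecwam-launch -np 1 -nt 16"
```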
## Known issues

- On macOS arm64 with gfortran 12.2 and Open MPI 4.1.4, and with compilation with the flag `-ffpe-trap=overflow`, the execution of `ecwam-preproc` and `ecwam-chief` needs to be launched with `mpirun -np 1`, even for serial runs, in order to avoid a floating point exception during the call to `MPI_INIT`. The flag `-ffpe-trap=overflow` is set e.g. for the `Debug` build type. Floating point exceptions on arm64 manifest as a `SIGILL`.
## Reporting bugs

Please report bugs using a GitHub issue. Support is given on a best-effort basis by package developers.
## Contributing

Contributions to ecWAM are welcome. To do so, please open a GitHub issue where a feature request or bug can be discussed. Then create a pull request with your contribution. All contributors to the pull request need to sign the contributor license agreement (CLA).