Running CTSM-mizuRoute with a debugger on Cheyenne
This page is adapted from https://github.com/ESMCI/cime/issues/2817
First, install the ARM Forge client on your local computer from the Arm website.
On Cheyenne, load ARM Forge if it is not already loaded. Be sure to load the same version as the client installed on your local computer:
module load arm-forge
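If the default module version does not match your local client, you can list the available versions and load a specific one (the version number here is illustrative only):

module avail arm-forge
module load arm-forge/19.1   # example only; pick the version matching your local client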
On your local computer, open ARM Forge and connect to Cheyenne. You will need to set up the remote launch settings first; see the CISL web documentation for details.
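When configuring the remote launch, Forge asks for the remote installation directory of the matching server version. One way to find it on Cheyenne (with the module loaded) is to inspect the modulefile; the install prefix appears in the PATH entries:

module show arm-forge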
The test name must include "D" to turn on debug mode. For example:
./create_test SMS_D_Lm1.f09_f09_mg17_rMERIT.I2000Clm50SpMizGs.cheyenne_gnu.mizuroute-default -r . --no-build --queue premium --walltime 01:00:00
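For reference, the parts of that test name follow the standard CIME test naming convention:

# SMS                    smoke test
# _D                     debug-mode build
# _Lm1                   run length of one month
# f09_f09_mg17_rMERIT    grid alias
# I2000Clm50SpMizGs      compset alias
# cheyenne_gnu           machine and compiler
# mizuroute-default      testmods directory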
Next, in the case directory, make two copies of env_mach_specific.xml: one to edit and one to keep as a pristine backup.
cp env_mach_specific.xml env_mach_specific.xml.ddt
cp env_mach_specific.xml env_mach_specific.xml.orig
Open env_mach_specific.xml.ddt. After line 31 (among the existing module-load commands), add:
<command name="load">arm-forge</command>
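The new entry belongs alongside the existing module-load commands, so the surrounding block ends up looking roughly like this (the neighboring module name is illustrative, not prescriptive):

<modules>
  ...
  <command name="load">ncarenv/1.3</command>
  <command name="load">arm-forge</command>
</modules>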
Then replace this section (it should appear after the <resource_limits> section):
<mpirun mpilib="default">
  <executable>mpiexec_mpt</executable>
  <arguments>
    <arg name="labelstdout">-p "%g:"</arg>
    <arg name="num_tasks"> -np {{ total_tasks }}</arg>
    <arg name="zthreadplacement"> omplace -tm open64 </arg>
  </arguments>
</mpirun>
<mpirun mpilib="mpt" queue="share">
  <executable>mpirun `hostname`</executable>
  <arguments>
    <arg name="anum_tasks"> -np {{ total_tasks }}</arg>
    <arg name="zthreadplacement"> omplace -tm open64 </arg>
  </arguments>
</mpirun>
<mpirun comp_interface="nuopc" mpilib="default">
  <executable>mpiexec_mpt</executable>
  <arguments>
    <arg name="labelstdout">-p "%g:"</arg>
    <arg name="num_tasks"> -np {{ total_tasks }}</arg>
    <arg name="zthreadplacement"> omplace -tm open64 -vv</arg>
  </arguments>
</mpirun>
<mpirun comp_interface="nuopc" mpilib="mpt" queue="share">
  <executable>mpirun `hostname`</executable>
  <arguments>
    <arg name="anum_tasks"> -np {{ total_tasks }}</arg>
  </arguments>
</mpirun>
<mpirun mpilib="openmpi">
  <executable>mpirun</executable>
  <arguments>
    <arg name="anum_tasks"> -np {{ total_tasks }}</arg>
  </arguments>
</mpirun>
with:
<mpirun mpilib="mpi-serial">
  <executable>ddt --connect --cwd=$RUNDIR --no-mpi</executable>
  <arguments>
    <arg name="tasks_per_node"> --procs-per-node=$MAX_TASKS_PER_NODE</arg>
  </arguments>
</mpirun>
<mpirun mpilib="default" threaded="false">
  <executable>ddt --connect --cwd=$RUNDIR</executable>
  <arguments>
    <arg name="num_tasks"> --np $TOTALPES</arg>
    <arg name="tasks_per_node"> --procs-per-node=$MAX_TASKS_PER_NODE</arg>
  </arguments>
</mpirun>
<mpirun mpilib="default" threaded="true">
  <executable>ddt --connect --cwd=$RUNDIR</executable>
  <arguments>
    <arg name="num_tasks"> --np {{ total_tasks }}</arg>
    <arg name="tasks_per_node"> --procs-per-node=$MAX_TASKS_PER_NODE</arg>
    <arg name="thread_count"> --openmp-threads={{ thread_count }}</arg>
  </arguments>
</mpirun>
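Once both edits are in place, a quick diff against the pristine copy confirms exactly what changed:

diff env_mach_specific.xml.orig env_mach_specific.xml.ddt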
Finally, copy env_mach_specific.xml.ddt to env_mach_specific.xml, then run case setup, build, and submit:
cp env_mach_specific.xml.ddt env_mach_specific.xml
./case.setup --reset --keep env_mach_specific.xml
./case.build
./case.submit
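After setup, ./preview_run (a standard CIME case tool) is a quick way to confirm that the ddt launcher was picked up; the run command should start with ddt --connect. The backup made earlier lets you revert to the stock launcher if needed:

./preview_run                                        # run command should start with ddt --connect
cp env_mach_specific.xml.orig env_mach_specific.xml  # restore the original launcher
./case.setup --reset --keep env_mach_specific.xml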