Default configuration (system wide and user specific)
The system default configuration can be found in AaronTools/config.ini and defines the built-in workflows available in AaronJr.
These built-in workflows can be "included" in your AaronJr configuration file and extended as necessary.
For example, one may use the workflow for TS optimizations (defined in the [Job.TS] section) and append an additional step for higher-level single-points by placing the following in their project's configuration file:
[Job]
include = TS
# append a fifth step to those defined by the included TS workflow
5 type = single-point
Some built-in workflows specify the exec_type for particular steps; the project's configuration file then only needs to supply the exec_type option for the other steps. For example, the CrestMinimum workflow runs its first two steps with xtb and crest, while the remaining steps use the exec_type set in the project's configuration file:
# snippet from AaronTools/config.ini
[Job.CrestMinimum]
1 type=optimize.changes
1 exec_type = xtb
2 type=conformers
2 exec_type=crest
3 type=optimize
4 type=frequencies
# snippet from the project's configuration file
[Job]
include = CrestMinimum
exec_type = gaussian
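For illustration only (this merged view is my own sketch, not a snippet from either file), the combination above behaves as if the project's [Job] section read:
# effective [Job] section after inclusion (illustration only)
[Job]
exec_type = gaussian
1 type = optimize.changes
1 exec_type = xtb
2 type = conformers
2 exec_type = crest
# steps 3 and 4 fall back to the section-wide exec_type = gaussian
3 type = optimize
4 type = frequencies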
You may find yourself defining the same options over and over again in your project's configuration files.
If so, defaults can be set in $AARONLIB/config.ini.
These settings will be automatically used, unless overridden by settings in the project's configuration file.
Additionally, consider using functions to make this file more flexible.
# snippet from $AARONLIB/config.ini
[Job]
exec_type = gaussian
nodes = 1
procs = %{ $ppn * $nodes }
memory = %{ $procs * 2 }GB
This way, procs and memory are computed automatically once the processors per node (ppn) are set in the project's configuration.
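As a minimal sketch (the value of ppn is just a placeholder), such a project configuration might then contain nothing more than:
# snippet from a hypothetical project configuration file
[Job]
# procs and memory are derived from this value by the defaults above
ppn = 24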
Here is the content of the file I have saved as $AARONLIB/config.ini, with annotations in the form of # comments.
# remote directory for saving computational input/output files
remote_dir = /home/%{$HPC:user}/chem/%{$project}/%{$name}
[Theory]
charge = 0
multiplicity = 1
method = b3lyp
# 6-31G(d) for non-transition-metal elements (!tm), LANL2DZ for transition metals (tm)
basis = !tm 6-31G(d)
        tm LANL2DZ
# options for relaxation of changes
1 method = PM6
# PM6 is semi-empirical, so no basis set is needed for step 1
1 basis =
# density fitting option ignored for hybrid functionals
denfit = 1
[Job]
exec_type = gaussian
memory = %{$procs*2}GB
# some software treats memory limits as mere suggestions
exec_memory = %{$procs*16//10}GB
procs = %{$ppn*$nodes}
nodes = 1
ppn = 12
wall = 12
# options for relaxation of changes
1 ppn = 2
1 wall = 2
[HPC]
user = <username for supercluster>
host = <supercluster's host name for submitting jobs>
transfer_host = <supercluster's host name for file transfer (only needed if different from `host`)>
scratch_dir = /scratch/%{$user}
queue_type = SLURM
queue = batch
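With these user-level defaults in place, a project's configuration file only needs to state what differs. The following is a hypothetical sketch (the method and wall time are placeholders, not recommendations):
# snippet from a hypothetical project configuration file
[Theory]
# override the default b3lyp for this project
method = M06-2X
[Job]
# this project needs a longer wall time than the default (wall = 12)
wall = 48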