

The recommended version of Stata in batch mode is stata-se, because it can handle larger datasets.

GROMACS supports two threading models, which can be used together:

- OpenMP threads
- thread-MPI threads: an MPI-based threading model implemented as part of GROMACS, incompatible with process-based MPI models such as OpenMPI

There are two variants of the GROMACS executable:

- gmx: recommended for all single-node jobs; supports both OpenMP threads and thread-MPI threads
- gmx_mpi: for multi-node jobs; must be used with srun, and only supports OpenMP threads

The number of threads must always be specified, as GROMACS sets it incorrectly on Hydra:

- gmx: use option -nt to let GROMACS determine the optimal numbers of OpenMP and thread-MPI threads
- gmx_mpi: use option -ntomp (not -ntmpi or -nt), and set the number of MPI processes with srun

When running on 1 or more GPUs, by default GROMACS will:

- detect the number of available GPUs, create 1 thread-MPI thread for each GPU, and evenly divide the available CPU cores between the GPUs using OpenMP threads; therefore, --cpus-per-task should be a multiple of --gpus
- optimally partition the force field terms between the GPU(s) and the CPU cores, depending on the number of GPUs and CPU cores and their respective performance

Always check in the log file that the correct number of GPUs is indeed detected and used.

To get good parallel performance, GROMACS must be launched differently depending on the requested resources (#nodes, #cores, and #GPUs).
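As a sketch of how the two GROMACS variants above are typically launched. The core count and the -deffnm run name are illustrative assumptions, not Hydra settings; in a real job script the core count would come from the scheduler.

```shell
# 'cpus' stands in for the number of cores allocated to the job.
cpus=8

# gmx (single node): -nt lets GROMACS itself split the total thread
# count into OpenMP and thread-MPI threads.
gmx_cmd="gmx mdrun -nt ${cpus} -deffnm md"

# gmx_mpi (multi node): started through srun, one MPI process per task;
# only the OpenMP thread count is set, with -ntomp.
gmx_mpi_cmd="srun gmx_mpi mdrun -ntomp ${cpus} -deffnm md"

# Print the commands instead of running them, so the sketch works
# outside the cluster.
echo "${gmx_cmd}"
echo "${gmx_mpi_cmd}"
```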

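The GPU defaults described above imply a simple consistency check: with one thread-MPI rank per GPU, the allocated cores must divide evenly between the GPUs. A minimal sketch, using made-up example values for the core and GPU counts:

```shell
# Example values standing in for the job's --cpus-per-task and --gpus.
cpus_per_task=12
gpus=2

# --cpus-per-task must be a multiple of --gpus, since GROMACS divides
# the CPU cores evenly between the GPUs.
if [ $((cpus_per_task % gpus)) -ne 0 ]; then
    echo "error: cpus-per-task is not a multiple of gpus" >&2
    exit 1
fi

# OpenMP threads per thread-MPI rank (one rank per GPU).
ntomp=$((cpus_per_task / gpus))
echo "OpenMP threads per GPU rank: ${ntomp}"
```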
We recommend using the module Gaussian/G16.A.03-intel-2017b for general use, because its performance has been thoroughly optimized for Hydra. Other modules, such as Gaussian/G16.B.01, should be used if you need any of their specific features.

Gaussian jobs can use significantly more memory than the value specified by %mem in the input file or with g16 -m in the execution command. Therefore, it is recommended to submit Gaussian jobs requesting a total memory that is at least 20% larger than the memory value defined in the calculation.

Gaussian G16 should automatically manage the available resources. However, it is known to under-perform in some circumstances; the job monitoring tools will report the actual use of resources of your jobs. If any of your Gaussian calculations is not using all available cores, it is possible to force the total number of cores used by Gaussian G16 with the option g16 -p or by adding the Gaussian directive %nprocshared to the top of the input file.

The following job script is an example to be used for Gaussian calculations. In this case we are running a g16 calculation with 80GB of memory (-m=80GB), but requesting a total of 20 x 5GB = 100GB of memory (25% more). Additionally, we are requesting 20 cores for this job and automatically passing this setting to g16 with the option -p, set to the environment variable that contains the number of cores allocated to your job.
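A sketch of such a job script, following the 80GB/100GB split described above. The input and output file names are placeholders, and SLURM_CPUS_PER_TASK is assumed to be the Slurm-provided variable holding the allocated core count:

```shell
#!/bin/bash
#SBATCH --cpus-per-task=20
#SBATCH --mem-per-cpu=5GB   # 20 x 5GB = 100GB, ~25% above the 80GB given to g16

module load Gaussian/G16.A.03-intel-2017b

# Cap Gaussian's own memory at 80GB and pass the allocated core count
# with -p, so the input file needs no %nprocshared directive.
g16 -m=80GB -p=${SLURM_CPUS_PER_TASK} < myinput.com > myoutput.log
```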
