NAMD on Caviness

An Open MPI Slurm job submission script should be used for NAMD jobs on Caviness; a template can be found in /opt/templates/slurm/generic/mpi/openmpi. Copy the template and edit it to match your job requirements, following the comments in the openmpi.qs file.
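For example, copying and editing the template might look like this (~/namd_job is a hypothetical working directory used only for illustration):

$ mkdir -p ~/namd_job && cd ~/namd_job
$ cp /opt/templates/slurm/generic/mpi/openmpi/openmpi.qs .
$ nano openmpi.qs    # adjust resources and commands per the comments in the file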

$ vpkg_versions namd
 
Available versions in package (* = default version):
 
[/opt/shared/valet/2.1/etc/namd.vpkg_yaml]
namd        Scalable Molecular Dynamics
  2.12      Version 2.12
* 2.13      Version 2.13
  2.13:gpu  Version 2.13 (with CUDA support)

The * version is loaded by default when using vpkg_require namd. Make sure you select a GPU variant of the namd package if you plan to use GPUs, e.g. vpkg_require namd:gpu, as shown below.
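As a minimal sketch, either form below should work in an interactive session or a job script; the namd/2.13:gpu form names the version explicitly, following the vpkg_versions listing above:

$ vpkg_require namd             # loads the default version, 2.13 (CPU-only)
$ vpkg_require namd/2.13:gpu    # loads the CUDA-enabled build instead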

In your job script, use the following when you invoke NAMD:

namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} ...

The NAMD documentation indicates that "+idlepoll" must always be used for runs on CUDA devices. Slurm sets CUDA_VISIBLE_DEVICES to the device indices your job was granted, and SLURM_CPUS_ON_NODE to the number of CPUs allocated to your job on the node.
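Putting this together, a GPU job script excerpt might look like the sketch below. This is an illustration, not a complete openmpi.qs: the resource counts and the input file name (config.namd) are hypothetical, and you should keep the rest of the template's setup intact. The --gres=gpu:1 directive asks Slurm for one GPU, which causes Slurm to set CUDA_VISIBLE_DEVICES for the job:

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4      # hypothetical; size to your job
#SBATCH --gres=gpu:1           # request one GPU; Slurm sets CUDA_VISIBLE_DEVICES

vpkg_require namd/2.13:gpu

# config.namd is a hypothetical NAMD input file
namd2 +idlepoll +p${SLURM_CPUS_ON_NODE} +devices ${CUDA_VISIBLE_DEVICES} config.namd > namd.log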

It is always a good idea to periodically check whether the templates in /opt/templates/slurm have changed, especially as we learn more about what works well on a particular cluster.