GROMACS

From the Calcul Québec Wiki

Description

GROMACS is a popular molecular dynamics package designed for simulating biological macromolecules. GROMACS offers high performance on x86, x86-64 and ia64 architectures thanks to its assembly-coded optimised routines that use the SSE, SSE2, 3DNow! and AltiVec instruction sets.

Example submission scripts

The following examples use GROMACS on Colosse, compiled in single precision (the default), with the GNU and Intel compilers.


File : submit-gromacs-gcc.sh
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=8:ppn=8
#PBS -l walltime=24:00:00 [...] 
 
module load compilers/gcc/4.4.2 mpi/openmpi/1.4.3_gcc misc-libs/fftw/3.2.2_gcc_r1 apps/gromacs/4.5.4_gcc
 
mpirun mdrun_mpi



File : submit-gromacs-intel.sh
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=8:ppn=8
#PBS -l walltime=24:00:00 [...] 
 
module load compilers/intel/11.1.059 mpi/openmpi/1.4.3_intel blas-libs/mkl/10.2.2.025 apps/gromacs/4.5.4_intel
 
mpirun mdrun_mpi


In addition to the parallel program mdrun_mpi, the standard (non-parallel) GROMACS tools are included in the modules on Colosse. You can, for instance, preprocess your run input with grompp when preparing your simulations.
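As a sketch of that preparation step (the input file names grompp.mdp, conf.gro and topol.top are placeholders for your own files), you might run the following on a login node:

```shell
# Load the same GROMACS module as in the GCC submission script above
module load compilers/gcc/4.4.2 mpi/openmpi/1.4.3_gcc misc-libs/fftw/3.2.2_gcc_r1 apps/gromacs/4.5.4_gcc

# Preprocess the run parameters (.mdp), coordinates (.gro) and topology (.top)
# into a portable binary run input file (.tpr), which mdrun_mpi reads at run time
grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
```

The resulting topol.tpr file is self-contained, so it can be prepared once and then used by all the parallel runs in your experiment.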


Performances

GROMACS 4.5.4 performance on Colosse is shown in the following graphs. The GMXBench 3.0 GROMACS benchmark suite was used, and three systems were tested:

  • PDDC, a phospholipid system,
  • LZM, hen egg white lysozyme,
  • Poly-CH2, a carbon polymer.
[Figure: GROMACS performance on Colosse (Gmx-perf-en.png)]
[Figure: GROMACS parallel efficiency on Colosse (Gmx-eff-en.png)]

For all systems tested here, the GCC executable offers better performance. The Intel executable, however, has an edge in serial runs. For your own experiments, we invite you to test both versions with various numbers of processes to optimise resource usage.

GROMACS performance depends partly on the Fast Fourier transform (FFT) routines used to compute the long-range part of non-bonded interactions between atoms; the same is true of other molecular mechanics implementations. Several optimised FFT libraries are available on Calcul Québec supercomputers. For more details, refer to the list of modules installed on each of Calcul Québec's supercomputers.
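For example, to see which FFTW builds are installed on a given cluster, you can query the module system (the exact module names will vary from one machine to another):

```shell
# "module avail" writes its listing to stderr, hence the 2>&1 redirection;
# filter the output for FFTW-related modules
module avail 2>&1 | grep -i fftw
```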


Using GROMACS on Briarée

On the Briarée cluster at the Université de Montréal, we have found that GROMACS 5.0 performs better when it takes advantage of HyperThreading, that is, when it uses all 24 logical cores of a Briarée compute node. You do not need to modify your script to use these additional logical cores: request 12 cores per node and run GROMACS as usual with 12 MPI processes. Each MPI process will create two OpenMP threads to use the two logical cores associated with each physical core whenever there is sufficient work. You can disable this behaviour by setting the environment variable OMP_NUM_THREADS to 1 before running GROMACS in your script. We have also observed a clear performance improvement from pinning the MPI processes to cores with the mdrun option "-pin on".
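A minimal Briarée submission script following these recommendations might look like the sketch below; the GROMACS module name is an assumption, so substitute the one actually installed on Briarée:

```shell
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=2:ppn=12
#PBS -l walltime=24:00:00

# Load your GROMACS 5.0 module here (illustrative name; check "module avail"):
# module load apps/gromacs/5.0

# With 12 MPI processes per node and OMP_NUM_THREADS unset, each process
# starts 2 OpenMP threads and the job uses all 24 logical cores per node.
# Uncomment the next line to stay on the 12 physical cores instead:
# export OMP_NUM_THREADS=1

# "-pin on" pins each process to its cores, which we found improves performance
mpirun -npernode 12 mdrun_mpi -pin on
```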
