Table summarizing properties of Calcul Québec servers

This page lists properties of Calcul Québec's regular compute servers (multicore processors). For information about servers with accelerators (GPU or Xeon Phi), see this page.


Hardware

| | Briarée | Briarée (contributed nodes) | Colosse | Cottos | Guillimin (phase 1) | Guillimin (phase 2) | Mp2 | Ms2 | Psi |
|---|---|---|---|---|---|---|---|---|---|
| Number of nodes | 672 | 25 | 960 | 128 | 1200 | 382 | 1632 | 308 | 84 |
| Processors | Intel Xeon X5650 (Westmere) | Intel Xeon E5-2670 v3 (Haswell) | Intel Xeon X5560 (Nehalem) | Intel Xeon E5472 | Intel Xeon X5650 (Westmere) | Intel Xeon E5-2670 (Sandy Bridge, 372 nodes), E5-4620 (Sandy Bridge, 2 nodes), E5-2650 v2 (Ivy Bridge, 8 nodes) | AMD Opteron 6172 | Intel Xeon E5462 | Intel Xeon X5650 (Westmere) |
| Cores per CPU | 6 | 12 | 4 | 4 | 6 | 8 | 12 | 4 | 6 |
| CPU frequency | 2.67 GHz | 2.3 GHz | 2.8 GHz | 3.0 GHz | 2.67 GHz | 2.6 GHz | 2.1 GHz | 2.8 GHz | 2.67 GHz |
| CPU cache | 12 MB | 30 MB | 8 MB | 12 MB | 12 MB | 20 MB | 12 MB | 12 MB | 12 MB |
| Cores per node | 12 | 24 | 8 | 8 | 12 | 16 | 24 | 8 | 12 |
| Memory per node | 24 GB (~316 nodes), 48 GB (~312) or 96 GB (~42) | 256 GB (~20 nodes) or 512 GB (~5) | 24 or 48 GB | 16 GB | 24 GB (400 nodes), 36 GB (600) or 72 GB (188) | 64 GB (224 nodes), 128 GB (144), 256 GB (6), 384 GB (1), 512 GB (4) or 1024 GB (1) | 32 GB (1588 nodes), 256 GB (20) or 512 GB (2) | 16 or 32 GB | 72 GB |
| Network | InfiniBand QDR, non-blocking | InfiniBand QDR, non-blocking | InfiniBand QDR, non-blocking | InfiniBand DDR, non-blocking | InfiniBand QDR, half non-blocking, half with 2:1 ratio | InfiniBand QDR, half non-blocking, half with 2:1 ratio | InfiniBand QDR, non-blocking on 216 nodes, 3.5:1 on the rest | InfiniBand DDR, non-blocking per group of 22 nodes (176 cores) | Ethernet |
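
As a rough guide to overall capacity, the total core count of each cluster follows directly from these figures, for example:

\[
672 \times 12 = 8064 \text{ cores (Briarée)}, \qquad 1632 \times 24 = 39168 \text{ cores (Mp2)}
\]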

Disk space

| | Briarée | Colosse | Cottos | Guillimin | Mp2 | Ms2 | Psi |
|---|---|---|---|---|---|---|---|
| Network storage (NFS) | - | System | $HOME | - | $HOME, $ARCHIVE | $HOME, $ARCHIVE | $HOME |
| Parallel file system | $HOME (GPFS, 7.3 TB), $SCRATCH (GPFS, 219 TB) | $SCRATCH (Lustre, 500 TB, 17 GB/s), $RAP/$HOME (Lustre, 500 TB, 12 GB/s) | $SCRATCH (Lustre, 151 TB) | $HOME, /sb/project, /gs/project, $SCRATCH (GPFS, 4 PB) | $PARALLEL_SCRATCH_MP2_... (Lustre, 393 TB and 291 TB) | $PARALLEL_SCRATCH_MS2_... (Lustre, 156 TB) | - |
| Local storage | $LSCRATCH (182 GB/node) | - | $LSCRATCH (745 GB/node) | $LSCRATCH (343 GB/node) | $LSCRATCH (1 TB/node) | $LSCRATCH (438 GB/node) | /state/partition1 (522 GB/node) |
| RAM disk (up to half of the node's total RAM) | $RAMDISK | $RAMDISK | /dev/shm | $RAMDISK | $RAMDISK | $RAMDISK | /dev/shm |
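
The scratch and RAM-disk locations above are exposed to jobs as environment variables of the same names (on Cottos and Psi the RAM disk is plain /dev/shm, and Psi has no $LSCRATCH). A minimal C sketch, assuming such variables are defined in the job's environment, that picks the first available temporary directory:

```c
/* Minimal sketch: pick a temporary directory from the environment.
 * The variable names ($RAMDISK, $LSCRATCH, $SCRATCH) follow the table
 * above; availability differs between clusters, so every lookup is
 * treated as optional and /tmp is the last resort. */
#include <stdio.h>
#include <stdlib.h>

static const char *pick_tmpdir(void)
{
    const char *candidates[] = { "RAMDISK", "LSCRATCH", "SCRATCH" };
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
        const char *dir = getenv(candidates[i]);
        if (dir != NULL && dir[0] != '\0')
            return dir;
    }
    return "/tmp"; /* fallback when none of the variables is set */
}

int main(void)
{
    printf("temporary files will go to: %s\n", pick_tmpdir());
    return 0;
}
```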

Performance

T: Theoretical; M: Measured

| | Briarée | Colosse | Cottos | Guillimin | Mp2 | Ms2 | Psi |
|---|---|---|---|---|---|---|---|
| Gflop/s per node (double precision) | 128.16 (T) | 89.6 (T); 89.54 (M) | 96 (T) | 128.16 (T) | 201.6 (T) | 89.6 (T) | 128.16 (T) |
| Gflop/s per core | 10.68 (T) | 11.2 (T); 12.1 (M) | 12 (T) | 10.68 (T) | 8.4 (T) | 11.2 (T) | 10.68 (T) |
| Memory bandwidth per node (GB/s) | 64.4 (T) | 64.4 (T); 35.22 (M) | - | 64.4 (T) | 51.2 (T) | 25.6 (T) | - |
| Memory bandwidth per core (GB/s) | 5.37 (T) | 8 (T); 4.4 (M) | - | 5.37 (T) | 2.13 (T) | 3.2 (T) | - |
| InfiniBand network bandwidth per node (GB/s) | 4 (T) | 4 (T); 3.58 (M) | 2 (T) | 4 (T) | 4 (T) | 2 (T) | - |
| Gflop/s per node / InfiniBand bandwidth per node | 32.04 (T) | 22.4 (T); 25.01 (M) | 48 (T) | 32.04 (T) | 50.4 (T) | 44.8 (T) | - |
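
The theoretical (T) figures follow from cores per node, clock frequency and double-precision floating-point operations per cycle (4 per core for the processor generations behind the (T) values above):

\[
R_{\text{peak}} = n_{\text{cores}} \times f \times \text{FLOP/cycle}
\]

For example, Briarée: \(12 \times 2.67\,\text{GHz} \times 4 = 128.16\) Gflop/s per node; Mp2: \(24 \times 2.1\,\text{GHz} \times 4 = 201.6\) Gflop/s per node.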

Software

| | Briarée | Colosse | Cottos | Guillimin | Mp2 | Ms2 | Psi |
|---|---|---|---|---|---|---|---|
| Operating system | Scientific Linux 6.3 | CentOS 6.6 | Linux CentOS 5 | CentOS 6.5 | Linux CentOS 6.5 | Linux CentOS 5.5 | Linux CentOS 5 |
| Supported shells | Bash, tcsh, zsh | Bash | Bash, tcsh, zsh | Bash | Bash (default), csh, tcsh | Bash (default), csh, tcsh | Bash |
| C/C++/Fortran compilers | GCC, Intel | GCC, Intel, PGI, SunStudio | GCC, Intel | GCC, Intel, PGI | GCC, Intel, PGI, Pathscale | GCC, Intel, PGI | GCC, Intel |
| Other compilers/languages | Cuda, Python, Java, Perl, Mono (C#), R, Ruby | Go, Java, Python, Perl, Mono (C#), R, Ruby, Lua, Octave | Python, Java, Perl, Mono (C#), R, Ruby | Python, Java, Perl, Julia, R, Ruby, Octave | Python, Java, Perl, Haskell, R | Python, Java, Perl, Haskell, R | Python, Java, Perl |
| MPI implementations | OpenMPI, MPICH2, MVAPICH2 | OpenMPI, MVAPICH2 | OpenMPI, MVAPICH, MVAPICH2 | OpenMPI, MVAPICH2, Intel MPI | OpenMPI, MPICH, MVAPICH, MVAPICH2 | OpenMPI, MPICH, MVAPICH, MVAPICH2 | OpenMPI, MPICH2 |
| Modules | Modules on Briarée | Modules on Colosse | Modules on Cottos | Modules on Guillimin | Modules on Mp2 | Modules on Ms2 | Psi does not use modules |
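
Every cluster offers at least one MPI implementation (see the table above). As a quick sanity check of the toolchain, here is a minimal MPI "hello" program in C; it is compiled with the mpicc wrapper of whichever implementation is loaded, and the exact module to load differs from cluster to cluster:

```c
/* Minimal MPI check: each rank reports its rank, the communicator size
 * and the node it runs on. Compile with the mpicc wrapper provided by
 * the loaded MPI module (OpenMPI, MVAPICH2, ...). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(node, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}
```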

Job Submission

| | Briarée | Colosse | Cottos | Guillimin | Mp2 | Ms2 | Psi |
|---|---|---|---|---|---|---|---|
| Scheduler | Maui/Torque | Moab/Torque | Maui/Torque | Moab/Torque | Maui/Torque | Maui/Torque | Maui/Torque |
| Maximum wall time (in general) | 7 days | 48 hours | 7 days | 30 days | 5 days | 5 days | - |
| Maximum job size (in general) | 2520 cores | 32 nodes / 256 cores | 384 cores | 1560 cores | depends on allocation | depends on allocation | - |
| Sharing nodes | One user per node, many jobs possible | One job per node | One user per node, many jobs possible | Shared between multiple users for serial jobs, one job per node otherwise | One job per node | One user per node, many jobs possible | Shared between multiple users |
| Default job queue | soumet (routing queue) | - | soumet (routing queue) | metaq | qwork | qwork | default |
| Other job queues | normale, courte, hp, hpcourte, test, longue | - | normale, courte, hp, hpcourte, test | lm, xlm2, scalemp, debug; other attributes: westmere, sandybridge | qtest (1 h), qfbb, qfat256 (256 GB), qfat512 (512 GB; 3 days) | qtest (1 h), qlong (30 days) | - |