List of GPU clusters
This list is intended for machine-learning practitioners who need GPU-accelerated computing facilities.
| Cluster name | HPC center name | Tier level | Hardware (per node) | Links |
|---|---|---|---|---|
| HAL | IPSL/LATMOS/OVSQ | 3 | – 4 nodes: 2 Intel Xeon Silver 4215 (16c @ 2.5 GHz each), 2 Nvidia RTX 2080 Ti 11 GB – 2 nodes: 2 Intel Xeon Silver 4210R (20c @ 2.4 GHz each), 2 Nvidia RTX A5000 24 GB | access conditions, doc |
| Local data centers | | 2 | Custom | site |
| CCIN2P3 GPU platform | IN2P3 | 1 | – 10 nodes: 2 Intel Xeon E5-2640 v3 (8c @ 2.6 GHz), 128 GB RAM, 2 Nvidia Tesla K80 12 GB – 6 nodes: 2 Intel Xeon Silver 4114 (10c @ 2.2 GHz), 192 GB RAM, 4 Nvidia Tesla V100 32 GB | site, access conditions, doc |
| Irene V100 | TGCC | 1 | 32 nodes: 2 Intel Cascade Lake (20c @ 2.1 GHz), 4 Nvidia Tesla V100 16 GB | site, access conditions (GENCI), access (eDARI), doc |
| Jean Zay GPU partition | IDRIS | 1 | – 351 nodes: 2 Intel Cascade Lake 6248 (20c @ 2.5 GHz), 192 GB RAM, 4 Nvidia Tesla V100 16 GB – 261 nodes: 2 Intel Cascade Lake 6248 (20c @ 2.5 GHz), 192 GB RAM, 4 Nvidia Tesla V100 32 GB – 20 nodes: 2 Intel Cascade Lake 6226 (12c @ 2.7 GHz), 384 GB RAM, 8 Nvidia Tesla V100 32 GB – 11 nodes: 2 Intel Cascade Lake 6226 (12c @ 2.7 GHz), 768 GB RAM, 8 Nvidia Tesla V100 32 GB – 3 nodes: Intel Cascade Lake 6240R (24c @ 2.4 GHz), 768 GB RAM, 8 Nvidia A100 PCIe 40 GB – 52 nodes: 2 AMD EPYC 7543 Milan (32c @ 3.3 GHz), 512 GB RAM, 8 Nvidia A100 SXM4 80 GB | site, access conditions (GENCI), access (eDARI), doc |
| PRACE | European centers | 0 | Custom | site, deep learning best practice |
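Clusters like these are generally accessed through a batch scheduler rather than interactively; the French national centers (IDRIS, TGCC, IN2P3) use Slurm. Below is a minimal sketch of a Slurm job script requesting GPUs on one node. The partition name, module name, and script name are hypothetical placeholders; consult each center's documentation for the actual values and required accounting options.

```shell
#!/bin/bash
#SBATCH --job-name=gpu-train       # job name shown in the queue
#SBATCH --nodes=1                  # run on a single node
#SBATCH --gres=gpu:4               # request 4 GPUs on that node
#SBATCH --cpus-per-task=10         # CPU cores allocated to the task
#SBATCH --time=02:00:00            # wall-clock time limit (HH:MM:SS)
#SBATCH --partition=gpu            # placeholder: partition names differ per center

# Placeholder module and script names; each center documents its own
# software environment (module names, containers, etc.).
module load pytorch
srun python train.py
```

Submit with `sbatch job.slurm` and monitor with `squeue -u $USER`; the exact GPU-request syntax (`--gres=gpu:N` vs. per-center variants) should be checked against the target cluster's documentation linked in the table above.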